When the boss of the world’s largest tech firm (depending on which day of the week it is and the mood of the stock market) decides to speak to regulators, it is always going to make headlines. When that CEO talks about “harmful, even deadly effects” from data harvesting, you can understand why some observers were shocked.
Yet that is exactly what Tim Cook, boss of Apple, told the assembled commissioners, MEPs and data protection authorities gathered for the 40th International Conference of Data Protection and Privacy Commissioners. Cook’s message was very direct: personal data is being “weaponised against us with military efficiency.” He went on to praise the General Data Protection Regulation (GDPR) and called for an equivalent to be passed in the US as soon as possible.
It might be possible to accuse him of playing to his audience - the theme of the event is “Debating ethics: Dignity and respect in data-driven life”. But the intensity of his delivery suggested this was for real. Apple has positioned itself on the privacy side of Silicon Valley for some years, and there is clearly a view that, as a brand which puts enabling technology into the hands of consumers - and therefore creates opportunities for the “mass surveillance” he was condemning - it needs to adopt an ethical position.
"Organisations are on a collision course with their customers."
This puts the company in line with a school of thought that is gaining considerable traction. In a report published by Boston Consulting Group in March 2018, “Bridging the trust gap in personal data markets,” the authors wrote: “company leaders at the highest levels must develop new ways to manage and use data, rather than confining the discussion to legal or IT, as it is at most companies. Even organisations that use data for completely legal and fully disclosed reasons are on a collision course with their customers.”
They point to a decline in trust metrics, such as the share of consumers who say they are willing to allow companies they trust to use their personal data, which fell from 62% to 58% between 2013 and 2015. And that is among the positively-minded - where trust is absent, the proportion happy for companies to use their data drops to just 7%.
For digital platforms whose model is based on monetising personal information in order to pay for free services, this is a worrying trend. While Facebook has been trying to shrug off the impact of the Cambridge Analytica scandal by pointing to stable user numbers and growing ad revenues, it has also been busy changing some dimensions of how it works. Google, too, has been putting its privacy controls front and centre, without even the pressure of a major investigation by regulators. Both firms’ CEOs are in Brussels this week speaking at the same conference as Cook.
Platforms that have to sell data are in conflict with shifting expectations.
But theirs is more likely to be either a defensive message or an attempt to distract attention from the underlying problem they face: as free services, they have to sell users’ data to advertisers, and this will always bring them into a degree of conflict with shifting expectations and ethical concerns.
More challenging still, both platforms enable millions of app developers to introduce new content (games, services, feeds) subject to rules and codes of conduct that have been shown to be only loosely policed in the past. Many of these apps operate well outside users’ expectations and in ethical grey areas, such as tracking users who have deleted an app and retargeting them across devices.
So Silicon Valley is divided into two camps - tech brands that are enabling privacy (such as Mozilla, which has introduced cross-device cookie blocking in the latest release of its Firefox browser) and platforms or apps whose model fundamentally requires them to intrude, track and monetise.
The tone of the discussion here in Brussels (where DataIQ is today interviewing the Information Commissioner, Elizabeth Denham, and her deputies, Steve Wood and James Dipple-Johnstone, about “Democracy disrupted”) is about how to control and regulate for privacy, not whether this is necessary. As California has shown with its CCPA, it’s a message that can get traction even in tech’s heartland.