According to various news accounts, a professor at Cambridge University built a Facebook app around 2014 that involved a personality quiz. About 270,000 users of the app agreed to share some of their Facebook information, as well as data from people on their friends list. As a result, tens of millions ended up part of this data-mining operation.
Cambridge Analytica is a data science company that was hired by the Trump campaign to create psychographic models of the American electorate. In order to do so, the company needed data – more data than it could feasibly collect on its own, and specifically the kind of data that only Facebook and Google would have.
Both Facebook and Google are frictionless aggregators of user data, but Facebook’s business model renders it inherently more vulnerable to this kind of data “breach” than Google. Google’s value comes from its position as gatekeeper of the internet – if a website does not appear on Google’s search results, it for all intents and purposes does not exist. This means Google can collect revenue and third-party data without exchanging its own data. Indeed, the company has gotten into trouble with regulators over anticompetitive practices such as scraping data from third party websites to use on its own platforms.
On the other hand, Facebook’s value is derived from the trove of proprietary personal data generated by its over 2 billion monthly users. However, the data has no value in and of itself until it is monetised through targeted advertising and third-party websites or applications. It is through a third-party application that Cambridge Analytica ultimately got its hands on 50 million Facebook profiles.
In 2014, 270,000 Americans downloaded a survey app and consented to having their Facebook data harvested. At the time, Facebook permissions allowed users to “consent” to their friends’ data being collected by third parties without their friends’ knowledge (this permission has since been removed). Since each user had on average several hundred friends, the harvest ultimately swept up some 50 million users who unknowingly, and without informed consent, had their personal data profiled for political targeting by the Trump campaign.
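The arithmetic behind that reach can be sketched in a few lines. The 270,000 figure comes from the reporting; the average friend count and the overlap adjustment below are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope estimate of the app's data reach.
# direct_users is from the reporting; avg_friends and
# overlap_factor are illustrative assumptions.

direct_users = 270_000
avg_friends = 200        # assumed average friend count per user
overlap_factor = 0.93    # assumed share of friend profiles that are unique

raw_friend_profiles = direct_users * avg_friends       # 54,000,000
estimated_unique = int(raw_friend_profiles * overlap_factor)

print(f"{estimated_unique:,}")  # on the order of 50 million profiles
```

Even with modest assumptions, a six-figure user base multiplies into tens of millions of friend profiles, which is why the friends-data permission was so consequential.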
The current backlash against Facebook centres on how the company failed to prevent the personal data of 50 million users from landing in Cambridge Analytica’s hands without those users’ knowledge, and on what can be done to prevent such breaches (in the practical, if not legal, sense of the word) from happening again. While Cambridge Analytica has denied that its psychographic targeting was effective for the Trump campaign, the thought that Facebook’s user-generated data can be manipulated to sway those same users’ voting decisions hints at just how powerful and comprehensive Facebook’s data are.
Facebook is already under fire for its alleged failure to prevent Russian interference during the 2016 Election, and this new revelation further fuels the flames. The most obvious risk right now is regulation designed to curb Facebook’s power, although such regulation would be unprecedented as Facebook’s model is itself unprecedented.
The EU’s General Data Protection Regulation (GDPR) takes a huge step towards restricting what companies can do with personal data without explicit user consent and enforcing data portability to ensure greater individual control over personal data.
Privacy regulators across the EU are joining together to investigate, with national authorities forming a joint taskforce to determine whether the social media giant and Cambridge Analytica broke the bloc’s strict data protection laws.
On Wednesday (21 March), Andrea Jelinek, the chair of the Article 29 Working Party, the umbrella group of national data protection authorities from EU member states, said the organisation is working together to investigate the incident. The UK authority is leading the group’s inquiry.
“As a rule personal data cannot be used without full transparency on how it is used and with whom it is shared. This is therefore a very serious allegation with far-reaching consequences for data protection rights of individuals and the democratic process. ICO, the UK’s data protection authority, is conducting the investigation into this matter. As Chair of the Article 29 Working Party, I fully support their investigation. The members of the Article 29 Working Party will work together in this process,” Jelinek said.
The UK data protection authority, the ICO, opened an investigation last year into how data analytics companies were used in the lead-up to the Brexit referendum, after reports first circulated about Cambridge Analytica’s analysis of Facebook profiles for political clients. On March 19, the regulator’s office said it would look into new evidence, referring to the reports about Facebook’s knowledge of the data use.
However, regulation usually means trade-offs, and heavy-handed regulation in the area of data privacy could inadvertently result in incumbents such as Facebook and Google accruing even more market power (and thus control over more data). Consider that one oft-cited remedy for the concentration of power in social media is the portability of social graphs – that is, the ability to transfer your existing friends list and content to a new network. The idea is that social graph portability would allow new startups to compete against the incumbents on more even terms, because the switching costs created by the network effect would no longer be so high. However, stricter regulations on the sharing of personal data (i.e. GDPR’s goal) would undermine this, as each friend would have to explicitly consent to their data and user content being shared across networks.
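The tension can be made concrete with a small sketch: if every friend must explicitly consent before their entry in your social graph can move to a new network, the portable graph is only the consenting subset. The graph structure and names below are hypothetical, purely to illustrate the mechanism.

```python
# Illustrative sketch (hypothetical data): under a GDPR-style consent
# requirement, only friends who have explicitly opted in can be
# carried over to a competing network.

social_graph = {
    "alice": ["bob", "carol", "dave"],
    "bob": ["alice", "carol"],
}

# Friends who explicitly consented to having their data exported.
consented_to_export = {"bob"}

def portable_graph(graph, consents):
    """Return the subset of the graph that may legally move networks."""
    return {
        user: [friend for friend in friends if friend in consents]
        for user, friends in graph.items()
    }

print(portable_graph(social_graph, consented_to_export))
# alice's exportable friends list shrinks from three entries to one,
# gutting the network effect a startup hoped to import.
```

The smaller the consenting subset, the less valuable the exported graph, which is why strict consent rules and portability-as-competition-remedy pull in opposite directions.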
Whatever happens to be the political or regulatory fallout from these recent developments, this will be an interesting space to watch, not only with respect to Facebook, but also for the broader group of megacap technology companies.