Over the past few days we have seen many discussions throughout the media about who bears responsibility for the alleged misuse of 50 million Facebook users’ data by Cambridge Analytica. Cambridge purchased the data from Aleksandr Kogan, who allegedly collected it by launching a Facebook app that offered people rewards in exchange for completing a psychological survey; the app had access to, and collected, not only survey-related information, but also information about the survey takers’ Facebook friends. Governmental authorities in the US and UK are now commencing investigations, and have even called on Facebook CEO Mark Zuckerberg to answer their questions.
While there are accusations back and forth between the various parties regarding culpability and dishonesty, it seems likely that all of them bear some responsibility for the present situation. It may be true, for example, that Kogan should not have collected the data in the way that he did, that Cambridge should have vetted the source of its data more thoroughly or refused to purchase the information altogether, and that Facebook should have notified the public about the misuse when it found out what was going on, rather than after the story leaked. (I expect that if Facebook stock does not quickly recover from its decline of the past couple of days, we will even see lawsuits from investors accusing the firm of misleading them by not disclosing anything about the situation vis-à-vis Cambridge.)
That said, the extremely inconvenient truth is that, from the perspective of keeping one’s information private, none of these three parties is the primary culprit – and none of them can prevent this type of problem from recurring.
Here is why.
It is obvious that if an individual wants to misuse Facebook data, he or she certainly can. If a prankster sees that you posted that you are leaving with your family on a 6 PM flight from New York to Miami (something that you should never post), is there anything that Facebook could possibly do to prevent him or her from calling you at 5 and telling you that your flight was moved to 9 PM when, in fact, it was not? In a similar example in which you posted that you and your family are traveling, could Facebook somehow prevent a criminal who saw the post from tipping off a “colleague” in your neighborhood in exchange for a piece of the take when that person robs your empty house, or from contacting your family members and “virtually kidnapping” you?
Sharing posts with only “friends” may not help either: Even if Facebook’s own security were perfect, if any of your friends is ever hacked, or has malware installed on his or her computer, criminals can get at your “friends only” data and download it in seconds via automated tools. What are the odds that one or more of your Facebook friends will be hacked at some point after you friended that person? For most people, the answer is probably close to 100%.
So, the question of protecting against misuse is not one of absolutes, but, rather, one of scale: can Facebook prevent automated tools from enabling the misuse of data en masse, and can it reduce users’ exposure in other cases? While improvements are almost always possible – in this case, for example, Facebook might better audit major users of its API and data, create more granular levels of authorized access so that API users never pull more data than they need, etc. – one cannot reasonably expect Facebook to track every element of data from the time that it leaves Facebook until the party that obtained it ultimately disposes of it. The sheer number of parties receiving data from Facebook, and the volume of data that they receive, coupled with how data lives in the modern world, make that nothing short of impossible.
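To make the idea of “more granular levels of authorized access” concrete, here is a minimal sketch of scope-limited API access. All names here (App, check_request, the scope strings) are illustrative assumptions, not Facebook’s actual API: each app is granted an explicit set of data scopes, and any request outside those scopes – such as a quiz app trying to pull friend data – is refused rather than silently fulfilled.

```python
from dataclasses import dataclass, field

# Hypothetical model: an app registered with the platform, holding only the
# data scopes it was explicitly granted (e.g. its own survey answers).
@dataclass
class App:
    name: str
    granted_scopes: set = field(default_factory=set)

def check_request(app: App, requested_fields: set) -> set:
    """Return only the fields the app is authorized to read; refuse the rest."""
    allowed = requested_fields & app.granted_scopes
    denied = requested_fields - app.granted_scopes
    if denied:
        # In a real platform this refusal would also be logged for auditing.
        print(f"{app.name}: denied access to {sorted(denied)}")
    return allowed

# A quiz app granted only its own survey data cannot bulk-pull friend data.
quiz_app = App("personality-quiz", granted_scopes={"survey_answers"})
granted = check_request(quiz_app, {"survey_answers", "friends_list", "friend_likes"})
# Only "survey_answers" survives; the friend-data request is refused.
```

The point of the sketch is the default-deny design: an app proves up front which data it needs, and everything else is unreachable by construction rather than by policy.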
That reality leads us to the “elephant in the room” on which few people seem to want to focus while the Cambridge case is “hot off the press” – but which should be discussed as soon as possible. Whatever one feels about this particular incident, it remains a fact that Cambridge – the party that was “caught” with the Facebook data – is a legitimate business not bent on stealing money or people’s identities. There is, however, no way to know if illegitimate enterprises – such as those belonging to organized crime or serving as fronts for hostile nations – have also obtained massive amounts of personal information using techniques similar to those employed by Kogan. We have all seen viral Facebook apps that collect information and produce some interesting image or textual result – how many people who use these apps know the identities of the authors? How many know what happens to all of the data that the apps collect? How many folks have truly thought about why so many parties invest time and money to create apps that they then give away for free – isn’t it possible that some are “strangers bearing candy”? On that note, isn’t it also true that Facebook itself gives away its service for free because, in return, it collects and uses people’s data to sell optimized advertising? Isn’t it true that – to this day – the service offers no easy way to set all of your past posts to private (Only Me) because it wants the information that you upload to be shared? Furthermore, how many folks have ever read Facebook’s Terms of Service and Data Policy, and would even know whether a third party’s actions were acceptable or not?
Here is the bottom line: There are certainly legal and moral questions that need to be answered, including whether Facebook may have violated the terms of a 2011 agreement with the FTC regarding protecting user privacy, and whether any insiders may have illegally profited from their knowledge of the Cambridge situation; there clearly are also procedural and technical improvements that should be implemented. But if you have information that you don’t want third parties to have or use, do not share it on Facebook. Do not rely on privacy settings – just do not share it. And do not use apps that offer you no real benefit. On that note, sharing personal information with a third-party app that collects it for its author’s benefit, doing so via a social media platform that makes its money by utilizing information shared by users, and then expecting that your data will never be used in a fashion that you don’t expect, is a highly naïve way to act, to put it mildly. Even if most app writers do play by the rules, do you really think that every single person creating Facebook apps is honest?
(For full disclosure: SecureMySocial, which I founded, offers technology that warns people if they are oversharing on social media, and owns multiple patents related to that and other matters.)