Snooping By Online Cloud Storage and Communications Providers Catches Child Porn – But At What Price?

Last summer, after Microsoft notified law enforcement that a Florida man had uploaded child pornography to its OneDrive service, the perpetrator admitted to both possessing and viewing such materials, in clear violation of the law. While such an arrest – achieved through monitoring by cloud-service providers, and reported in the media over the last few days – may seem, at first glance, a wonderful way of protecting children, it also raises some serious questions.

Microsoft’s OneDrive service may appear to restrict file access to each file’s owner and to other people whom the owner authorizes; in fact, however, the file storage and sharing platform is governed by a robust set of Terms of Service, which state explicitly that Microsoft may check any content uploaded to its servers for legality. The same scanning capabilities can also serve other purposes, such as helping to detect and prevent malware infections.

Microsoft’s behavior with regard to scanning is hardly unique. About five years ago, for example, police in Texas responded to an alert originating from Google and arrested a man who had emailed child porn via Google’s Gmail service.

In fact, United States federal law requires relevant providers to report any instances of child porn that they discover on their services to the National Center for Missing and Exploited Children (NCMEC), the nation’s clearinghouse and comprehensive reporting center for all issues related to child victimization. The NCMEC maintains a database of technical information about known child porn (e.g., file hashes and URLs) and coordinates reporting to law enforcement agencies. But many tech companies, including Microsoft, Google, and Facebook, apparently go well beyond the minimum reporting requirements; instead of merely addressing child porn reactively, as the law requires, they proactively search for any such files posted, shared, or otherwise stored on their platforms.
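To make the mechanics concrete: matching uploads against a database of known-file hashes boils down to comparing a digest of each uploaded file against a set of flagged digests. The Python sketch below is a simplified illustration only; the KNOWN_BAD_HASHES set, its placeholder digest, and the scan_upload function are hypothetical names, and production systems (such as Microsoft’s PhotoDNA) rely on perceptual hashes that survive resizing and re-encoding, rather than the exact cryptographic hash shown here.

```python
import hashlib

# Hypothetical set of hex digests of known illegal files, standing in
# for the kind of technical data NCMEC shares with providers.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def scan_upload(file_bytes: bytes) -> bool:
    """Return True if an uploaded file's digest matches a flagged hash."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# Example: check a newly uploaded file before storing it.
if scan_upload(b"...uploaded file contents..."):
    print("Match found: flag for mandatory report")
else:
    print("No match")
```

Note that exact-hash matching of this kind is trivially defeated by altering a single byte of a file, which is one reason providers favor perceptual hashing in practice; the sketch is meant only to show the lookup structure.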

While combating child porn is obviously a worthy goal, it is not clear how effective such scanning truly is at scale; based on how few arrests actually occur, it seems likely that most people distributing child porn are doing so via encrypted channels that cloud providers cannot decrypt or adequately scan.

Perhaps more concerning, though, is the fact that active scanning of other people’s content is a double-edged sword; scanning anything with the purpose of removing content without the approval of the content’s owner opens a potential Pandora’s box. Who determines what is legal when people sitting in one country post to servers in a second via a provider based in a third? Furthermore, when it comes to cloud-based providers, posters do not know, and have no control over, where their content is actually stored. Should Americans really have their files removed, their service terminated, and their freedom compromised if they email files containing Nazi swastikas – an act completely legal in the USA – and the cloud provider happens to process the request in one of the many countries in which disseminating Nazi materials is prohibited? And what if the materials insult various religious beliefs and are physically processed in one of the many countries that ban such items?

And what happens if providers decide that not only illegal content (however that is defined) should be removed, but also other content that the providers deem problematic? We have already seen such schemes put into practice: Facebook, for example, regularly removes content that is perfectly legal but that violates its “community standards,” a term whose meaning can change dramatically over time. If Facebook had existed in the 1950s, would it have banned, in the name of “community standards,” pictures of couples who happen to have skin with different concentrations of pigment, or any images depicting homosexual relationships?

As I have said before, it is time for governments to act. Social media has emerged as the de facto venue for a significant portion of humanity’s idea sharing and public debate, and, hence, society needs social media to remain open to all legal speech; unpopular opinions should not be subject to repression, and the principle of free speech – a critical element of any modern successful society – should not fall by the wayside.

Likewise, we must protect children and prevent the dissemination of illegal content – and scanning without the need for owners’ consent should be focused precisely on eliminating such materials. (Of course, scanning done with the consent of content owners should be legal – and SecureMySocial offers patented technology to do so.) Laws should be enacted that govern who, in the era of the cloud, has the right to decide what is and is not legal for someone; that require the removal of certain types of illegal content when it is discovered; and that prohibit the removal of other content without the consent of its owners. Tech giants should not become overlords who get to decide what may be seen and heard and what may not – and they certainly should not be allowed to usurp such power in the name of protecting children.

To learn more about this topic, please see my article entitled “Congress Must Extend Civil Rights Protections To Social Media Users.”
