
Social media, news, and the right to know
by Jillian C. York | Nov 18 2015
The idea for Onlinecensorship.org was born in 2011, when Facebook took down a link posted by the popular band Coldplay. The link, deemed “abusive” by the social network, was to a protest song for Palestinian freedom, an issue around which accusations of manipulation and censorship by the mainstream media are frequent.
In this case, it was not the media censoring content, nor was it necessarily a human gatekeeper at all; rather, social media users reported the link as abusive, resulting in its removal by automated systems. If the finger of blame were to be pointed, it would be hard to determine at whom.
Social media sites are in many ways media’s new gatekeepers. By policy, they fence out certain types of content—from hate speech to nudity and plenty in between—deemed inappropriate for their user bases. But what about content that most would regard as newsworthy?
In August, Facebook apologized for temporarily banning users from posting reports by the Center for Immigration Studies suggesting that immigrants are taking a large share of new jobs that open up in the United States. Once again, the links had been reported by users of the site as “abusive” and, without apparent human oversight, were taken down.
In both this recent incident and the 2011 incident involving Coldplay, a large number of users took offense at the posted links (this is, arguably, where the suggested “dislike” button would come in handy), but rather than ignore them, those users reported the links as “abusive”—a reporting mechanism intended to weed out spam. The mechanism is itself rife with abuse, yet these two incidents, four years apart, suggest that little is being done to ensure that newsworthy content cannot be censored so easily.
There are other cases where the companies appear to be more actively involved in selecting what content should be visible. In August, amidst the global refugee crisis, Facebook removed a series of photographs posted by Syrian artist Khaled Barakeh that showed drowned children and body bags from a shipwreck off the Libyan coast. With his post, Barakeh wrote: “Last night [August 28, 2015] more than 80 Syrians and Palestinians refugees have drowned in the Mediterranean close to the Libyan shores trying to reach Europe.”
The images were certainly graphic, but that alone shouldn’t be enough to invite a ban: Facebook’s own policy specifically states that it removes graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence. Barakeh’s own words alongside the images make clear that this was not his intent.
That same week, Twitter made the decision to shut down Politwoops, an accountability project that tracked deleted tweets from politicians and public officials. Calling the deletion of a tweet “an expression of the user’s voice,” Twitter effectively argued that the privacy of public figures trumps the public’s right to know what they’ve said.
And most recently, the company was accused of censoring certain users’ tweets in particular jurisdictions. Specifically, tweets by several well-known users about the October 2015 Drone Papers appear to have been blocked in the United States, but unlike when the company removes tweets in other jurisdictions due to legal requirements, there is no transparency: the tweet is not marked as removed; it simply disappears.
When social media companies impose themselves as gatekeepers of newsworthy content—whether intentionally or by technical error—they inherently take on a new role: that of content curator. If social media companies control both the medium and the message—without oversight or transparency—they take the leap from being a ‘walled garden’ to a selectively clear-cut forest. It is in the interest of every social media user to have confidence in the content they see, but equally, to be aware of what content is removed and the policy behind those decisions.