Facebook Must Go Further on Transparency
by The Onlinecensorship.org Team | Jun 30 2017
Since the Guardian published a trove of leaked internal content moderation documents from Facebook last month, a number of articles have been published detailing the inequities in the company’s moderation processes. The most recent, written by Julia Angwin and Hannes Grassegger for ProPublica, highlights inconsistencies in how Facebook’s hate speech policies are applied to users. “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities,” Angwin and Grassegger wrote. “In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.”
This week, as part of a new series called “Hard Questions,” Facebook’s vice president of public policy for Europe, the Middle East, and Africa, Richard Allan, published a statement on hate speech. In it, Allan details some of the ways in which the company adjudicates speech, drawing on local context to make decisions. In one notable example, the company decided to remove increasingly inflammatory terms used by Russians and Ukrainians to disparage one another. Allan notes: “We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.”
These two examples, from ProPublica and Facebook itself, demonstrate the scope of the power that Facebook has over our speech. The company now has 2 billion users, which it calls its “community”: an interesting choice of term, given that its rules and their enforcement actually serve to divide that community further by creating different standards for different people.
While we welcome Facebook’s ambition to be more transparent about its processes, it is clear the company still has a long way to go. The conversation the ProPublica article spurred over Facebook’s standards for judging hate speech (particularly its distinction between protected, non-protected, and quasi-protected categories) highlights the value of deliberating in public about the effects of well-intentioned but ineffective policies.
The company has already said it is “exploring a better process by which to log our reports and removals, for more meaningful and accurate data.” Currently, there is no way to assess the effects of content moderators’ decisions at scale, because takedowns linked to violations of the content guidelines are not included in Facebook’s transparency report. Publishing this data would be a useful first step, particularly if accompanied by information about how many content takedowns stemmed from posts reported by other users.
Just as helpful would be more detailed information about how content moderators are guided to make choices about enforcing content policies. The materials leaked to the Guardian and ProPublica have illuminated the value of this information in guiding a public discussion about content moderation and the challenges of enforcing content policies at scale. We believe that holding this discussion in public and in concrete detail is not only a means for companies to remain accountable to their users, important as that is; involving the public will ultimately make these policies work better.

Image: A leaked slide, published in ProPublica's article, in which "white men" are shown to be a protected category.