
[OP-ED]: With Facebook's "Reporting Guide," A Step in the Right Direction

BY Jillian C. York | Wednesday, June 27 2012

Facebook recently released this graphic explaining how it handles material reported to be a violation of policy.

Jillian C. York is the Director for International Freedom of Expression at the Electronic Frontier Foundation.

We are living in an era in which transparency — be it from government, corporations, or individuals — has come to be expected. Accordingly, social media platforms have come under scrutiny in recent years for their content moderation policies, and perhaps none has received as much criticism as Facebook.

The platform, which boasts 900 million users worldwide, has drawn the ire of LGBT rights advocates, Palestinian activists, and others for its seemingly arbitrary methods of content moderation. The platform’s policies are fairly clear, but the manner in which its staff chooses to keep or delete content from the site has long seemed murky — until now.

Recently, Facebook posted an elaborate flow chart dubbed its “Reporting Guide,” demonstrating what happens when content is reported by a user. For example, if a Facebook user reports another user’s content as spam, the content is referred (or “escalated”) to Facebook’s Abusive Content Team, whereas harassment is referred to the Hate and Harassment Team. There are also protocols for referring certain content to law enforcement, and for warning a user or deleting his or her account.
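To make that routing concrete, here is a minimal sketch in Python of the kind of escalation logic the flow chart describes: a report category is mapped to the reviewing team named in the guide. The category strings, the mapping, and the `escalate` function are assumptions made for this illustration; they paraphrase the published graphic and are not any actual Facebook system or API.

```python
# Illustrative only: a toy version of the escalation logic the "Reporting
# Guide" flow chart describes. The mapping below paraphrases the guide;
# nothing here is an actual Facebook system or API.

ROUTING = {
    "spam": "Abusive Content Team",
    "harassment": "Hate and Harassment Team",
}

# Possible outcomes once a team reviews a report, per the guide:
# refer to law enforcement, warn the user, or delete the account.
OUTCOMES = ("refer to law enforcement", "warn user", "delete account")


def escalate(report_category: str) -> str:
    """Return the team a reported item would be escalated to (hypothetical)."""
    return ROUTING.get(report_category, "manual review (unrouted category)")


if __name__ == "__main__":
    print(escalate("spam"))        # -> Abusive Content Team
    print(escalate("harassment"))  # -> Hate and Harassment Team
```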

Facebook should be commended for lending transparency to a process that has long come under criticism for its seeming arbitrariness. Such transparency is imperative to help users understand when their behavior is genuinely in violation of the site’s policies; for example, several activists have reported receiving warnings after adding too many new “friends” too quickly, a result of a sensitive spam-recognition algorithm. Awareness of that fact could help users modify their behavior so as to avoid account suspension.
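For readers curious what such a rate-based heuristic might look like, the sketch below flags an account that adds "too many new friends too quickly" using a sliding time window. The class name, window size, and threshold are invented for illustration; Facebook's actual spam-recognition algorithm and its parameters are not public.

```python
# Purely illustrative: a naive sliding-window check of the sort the article
# alludes to, flagging accounts that add "too many new friends too quickly."
# The window and threshold values are invented; Facebook's real algorithm
# and parameters are not public.
from collections import deque


class FriendRequestMonitor:
    def __init__(self, max_requests: int = 50, window_seconds: float = 3600.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._timestamps = deque()

    def record(self, timestamp: float) -> bool:
        """Record one friend request; return True if the account should be warned."""
        self._timestamps.append(timestamp)
        # Discard requests that have fallen outside the sliding window.
        while self._timestamps and timestamp - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        return len(self._timestamps) > self.max_requests
```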

Nevertheless, the fact remains that the concept of “community reporting” — on which Facebook heavily relies — is inherently problematic, particularly for high-profile and activist users of the site. Whereas an average user of Facebook might be able to get away with breaking the rules, a high-profile user is not only more likely to be reported (by sheer virtue of his high profile) but may in fact be the target of an intentional campaign to get him banned from the site. Such campaigns have been well documented; in one instance, a Facebook group was set up for the sole purpose of inciting its members to report Arab atheist groups for violating the site’s policies, a strategy that succeeded in taking at least one such group down. Similar campaigns have been noted in other contexts.

The problem is also apparent when viewed in the context of Facebook’s “real name” rule. Chinese journalist Michael Anti, whose “real” name is Jing Zhao, found himself banned from the platform in 2011 after being reported for violating the policy. Although Anti has used his English name for more than ten years, including as a writer for the New York Times, he was nonetheless barred from doing so on Facebook. At the time, however, more than 500 documented accounts existed under the name “Santa Claus.”

Though these contradictions still exist, it’s clear that Facebook is working to improve both its policies and its processes. After all, it was only a short time ago that users violating the site’s terms of service were met with account deletion and a terse message stating that “the decision was final.” Now, users receive warnings, guidance on modifying their behavior, and an opportunity to appeal — all significant improvements. Facebook also recently joined the Global Network Initiative as an observer, a move that will, one hopes, guide the company toward greater transparency and accountability.

As Facebook grows, monopolizing more and more of the social media landscape, its methods of content moderation will become increasingly difficult to scale. The company runs the risk of alienating users from its community, and may want to consider loosening up on some of its policies lest enforcement become untenable.
