Online content moderation gone awry
Facebook’s heavy-handed efforts at moderating content have left many users confused over what the platform’s rules actually entail: The tech giant’s obscure set of community guidelines has come under fire in recent months from users who have been penalized for unknowingly violating the platform’s rules, the Wall Street Journal reports. The reinvigorated drive to regulate information carried on the platform has left writers, academics and ordinary users temporarily (and sometimes permanently) banned for seemingly innocuous comments deemed inappropriate by the company’s algorithms. The effort, which came in response to years of public criticism over fake news and hate speech circulating unchecked on the platform, now raises questions about the scope of information regulation online.
It started with fake news, but now posts on Palestine are being censored, too: The issue has come to the fore in recent days, as many users on Facebook and its subsidiary Instagram who posted content on the Israeli campaign against Palestine complained that their posts were being censored. On Instagram, some users, including influencers and others with large followings, have reported their posts and stories getting fewer views and less engagement after repeatedly publishing content defending Palestine.
There have been pettier, but no less bizarre, grounds for penalties, too: Some users have reportedly been penalized for posting photos of breastfeeding mothers, or for using the word “crazy” in a good-faith exchange with a friend. Others have seen restrictions placed on their accounts for posting WWII-era photos of Nazi officials as part of a history discussion, and were handed even longer bans for attempting to appeal the decision.
In charge of detecting “questionable” content posted to Facebook is an AI algorithm that automatically flags posts, comments and images that might violate the company’s community standards. In recent years, those standards have expanded to include “violent and graphic content” and “false news,” but the specific internal guidelines and the penalties for violating them have never been explicitly spelled out by Facebook. Once content has been automatically flagged, it is up to one of the company’s 15k third-party moderators to make the final call and issue a penalty.
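To illustrate the mechanics, here is a minimal sketch of a flag-then-review pipeline of the kind described above; the threshold, scoring logic and policy labels are hypothetical assumptions, not Facebook’s actual system.

```python
from dataclasses import dataclass
from queue import Queue

# Illustrative sketch of a generic flag-then-review pipeline; the threshold,
# scoring logic and policy labels are made up, not Facebook's actual system.
FLAG_THRESHOLD = 0.8  # hypothetical confidence above which a post goes to human review


@dataclass
class Post:
    post_id: int
    text: str
    flag_score: float = 0.0   # model confidence that the post violates a policy
    flagged_policy: str = ""  # e.g. "violent and graphic content", "false news"


def classify(post: Post) -> Post:
    """Stand-in for the ML models that score every new post, comment or image."""
    post.flag_score = 0.9 if "slur" in post.text else 0.1
    post.flagged_policy = "hate speech" if post.flag_score > 0.5 else ""
    return post


def triage(post: Post, review_queue: "Queue[Post]") -> None:
    """Automatically flagged posts are routed to a human moderator for the final call."""
    if classify(post).flag_score >= FLAG_THRESHOLD:
        review_queue.put(post)  # one of the ~15k third-party moderators decides on a penalty


review_queue: "Queue[Post]" = Queue()
triage(Post(1, "an ordinary holiday photo"), review_queue)
triage(Post(2, "a comment containing a slur"), review_queue)
print(review_queue.qsize())  # -> 1: only the second post reaches human review
```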
The process is far from foolproof: Facebook has admitted that the process results in a wrong call in some 10% of cases, or around 300k posts per day. A New York University research paper released last year found the process “grossly inadequate” and recommended the company stop outsourcing moderation to third parties.
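Taking those figures at face value, 300k wrong calls at a 10% error rate imply roughly 3 million human moderation decisions every day:

```python
# Back-of-the-envelope check on the figures cited above.
wrong_calls_per_day = 300_000  # posts Facebook admits are moderated incorrectly each day
error_rate = 0.10              # ~10% of review decisions are wrong calls

total_decisions_per_day = wrong_calls_per_day / error_rate
print(f"{total_decisions_per_day:,.0f} moderation decisions per day")  # -> 3,000,000
```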
Even moderators don’t know what they’re doing: Sitting atop the company’s labyrinth of rules and guidelines is a 20-person oversight board of lawyers and experts who review appeals and major decisions by the company. But this appeal process, which already rarely vindicates users, has been significantly undermined by the pandemic, as there are “fewer people available to review content.” And whatever decision the board makes isn’t binding, either. The board has in the past called Facebook’s rules “difficult for users to understand,” and recommended the company give users more detailed explanations when they are hit with penalties. But withholding those explanations has been a deliberate tactic Facebook uses to avoid getting drawn into disputes with users, former employees told the WSJ.
This has driven many to get creative and cheat the system: Those commenting in Arabic on the events in Palestine have gone back to the language’s roots, writing in an old form of the script that drops the dots distinguishing many letters, leaving posts legible to humans but unrecognizable to keyword-matching algorithms. They are calling it the first “human protest against AI.”
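As a rough illustration of how the trick works, the sketch below maps dotted Arabic letters to undotted look-alikes, close to the early undotted form of the script (rasm), so that an exact keyword match no longer fires; the letter table is simplified and the blocklist entry is hypothetical.

```python
# Simplified sketch of the dot-stripping trick: replace dotted Arabic letters with
# undotted look-alikes so that exact keyword matching no longer recognises the words.
# The mapping is illustrative, not a complete or linguistically precise rasm converter.
DOTLESS = str.maketrans({
    "ب": "ٮ", "ت": "ٮ", "ث": "ٮ", "ن": "ٮ",  # letters sharing the same undotted base shape
    "ج": "ح", "خ": "ح",
    "ذ": "د", "ز": "ر",
    "ش": "س", "ض": "ص", "ظ": "ط", "غ": "ع",
    "ف": "ڡ", "ق": "ٯ",
    "ي": "ى", "ة": "ه",
})


def strip_dots(text: str) -> str:
    """Return the text with dotted letters swapped for undotted look-alikes."""
    return text.translate(DOTLESS)


flagged_keywords = {"فلسطين"}        # hypothetical blocklist entry ("Palestine") in a naive filter
original = "فلسطين"
disguised = strip_dots(original)     # still readable to a human from context

print(original in flagged_keywords)   # True  -> an exact-match filter would flag it
print(disguised in flagged_keywords)  # False -> the undotted form slips past
```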