Facebook unleashed an artificial intelligence program to scour users’ posts for “hate speech” and “misinformation.”
“Facebook on Monday released a new report detailing how it uses a combination of artificial intelligence and human fact-checkers and moderators to enforce its community standards,” The Verge reported. Facebook’s Community Standards Enforcement Report was followed by company blog posts explaining how artificial intelligence (AI) has developed to take on a greater role in moderating Facebook, including proactive measures against both so-called hate speech and misinformation.
In the Community Standards Enforcement Report’s Hate Speech tab, the report explained how thoroughly its “proactive detection technology” has been scouring for offensive rhetoric, noting that “Content actioned increased from 5.7 million pieces of content in Q4 2019 to 9.6 million in Q1 2020.”
The Community Standards Enforcement Report defined “hate speech” as “violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics, or slurs.” These protected characteristics include “race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disability or disease.”
The report also explained that “When the intent is clear, we may allow people to share someone else's hate speech content to raise awareness or discuss whether the speech is appropriate to use, to use slurs self-referentially in an effort to reclaim the term, or for other similar reasons.”
The Verge’s coverage observed that “AI-trained models have a harder time parsing a meme image or a video due to complexities like wordplay and language differences,” and that the software must also be trained to find duplicates, or even “marginally modified versions of that content as it spreads across Facebook.”
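Facebook has not published the details of these duplicate-finding systems, but one common technique for catching marginally modified copies of an image is perceptual hashing, where near-identical images produce hashes that differ in only a few bits. The sketch below uses a simple “average hash”; the function names and the 10-bit cutoff are illustrative assumptions, not Facebook’s actual method.

```python
# A minimal sketch of perceptual hashing for near-duplicate image
# detection. This illustrates the general technique only; production
# systems are far more sophisticated and not publicly documented.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size grayscale thumbnail, then set one bit
    per pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: a re-uploaded meme with a small crop or added
# watermark usually lands within a few bits of the original's hash.
# if hamming_distance(average_hash("original.png"),
#                     average_hash("reupload.png")) <= 10:
#     print("likely a marginally modified copy")
```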
Despite these challenges, Facebook reported that its AI-driven purges of the platform’s “hate speech” are well underway:
AI now proactively detects 88.8 percent of the hate speech content we remove, up from 80.2 percent the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate speech policies — an increase of 3.9 million.
Facebook explained how it has worked tirelessly to develop “proactive detection tools for hate speech, so we can remove this content before people report it to us — and in some cases before anyone even sees it.” These detection techniques include “text and image matching, which means we’re identifying images and strings of text that are identical to content that’s already been removed as hate speech.”
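The “identical match” half of that pipeline is conceptually simple: fingerprint each new post and look the fingerprint up in an index of content already removed. The sketch below, with a made-up blocklist and a crude normalization scheme, shows the general idea; it is an assumption about the technique, not Facebook’s implementation.

```python
# Minimal sketch of exact text matching against previously removed
# content. Real systems add image hashing, fuzzier normalization, and
# distributed indexes; everything here is illustrative.
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace so trivial edits
    still hash to the same value."""
    return re.sub(r"[\W_]+", " ", text.lower()).strip()

def fingerprint(text: str) -> str:
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

# Hypothetical index of fingerprints from posts already removed.
removed_fingerprints = {fingerprint("some previously removed post")}

def matches_removed_content(post: str) -> bool:
    return fingerprint(post) in removed_fingerprints

print(matches_removed_content("Some   previously removed POST!"))  # True
```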
Facebook made sure to emphasize just how problematic this technology can be when it makes embarrassing or politically suspect errors:
Mistakenly classifying content as hate speech can mean preventing people from expressing themselves and engaging with others. Counterspeech — a response to hate speech that may include the same offensive terms — is particularly challenging to classify correctly because it can look so similar to the hate speech itself.
The company made a similar point in its blog post about using AI to handle misinformation: “It’s extremely important that these similarity systems be as accurate as possible, because a mistake can mean taking action on content that doesn’t actually violate our policies.”
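Facebook’s blog does not describe how its “similarity systems” score near-duplicates, but the tradeoff it describes can be illustrated with a simple Jaccard similarity over word shingles: loosen the match threshold and more rewordings of a debunked claim are caught, but benign posts that merely quote or discuss the claim start to match too. The shingle size and threshold below are assumptions for illustration only.

```python
# Illustrative sketch of a similarity check between a new post and a
# known piece of debunked content. The shingling scheme and threshold
# are assumptions; Facebook has not published its actual scoring.

def shingles(text: str, k: int = 3) -> set:
    """Overlapping k-word windows; small edits change only a few."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

debunked = "drinking bleach cures the virus according to experts"
reworded = "drinking bleach cures the virus according to my uncle"
unrelated = "stay home and wash your hands to stay safe"

THRESHOLD = 0.5  # illustrative; raising it trades recall for precision
for post in (reworded, unrelated):
    score = jaccard(debunked, post)
    print(f"{score:.2f}", "flag" if score >= THRESHOLD else "ignore")
# Output: "0.62 flag" for the reworded claim, "0.00 ignore" otherwise.
```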
The scale of this effort is vast, and Facebook has managed to silence many posts. The company noted that during April “we put warning labels on about 50 million pieces of content related to COVID-19 on Facebook, based on around 7,500 articles by our independent fact-checking partners.” It also added that “Since March 1, we’ve removed more than 2.5 million pieces of content for the sale of masks, hand sanitizers, surface disinfecting wipes and COVID-19 test kits.”