Meta Designs New AI to Detect ‘Harmful Content,’ Despite Past AI Problems

December 10th, 2021 3:49 PM

Meta is doubling down on its use of artificial intelligence (AI) to detect alleged “harmful content,” even though AI algorithms have been shown to be biased or error-prone in censoring content.

What is billed as censorship and moderation AI is really “machine learning (ML) powered automation,” an approach that has repeatedly proven lacking, according to Reclaim The Net.

“Harmful content can evolve quickly, so we built new AI technology that can adapt more easily to take action on new or evolving types of harmful content faster,” Meta announced. “To tackle this, we’ve built and recently deployed Few-Shot Learner (FSL), an AI technology that can adapt to take action on new or evolving types of harmful content within weeks instead of months.”
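Meta has not published FSL’s implementation; in general, “few-shot” classification means a handful of labeled examples is enough to define a new category. The toy sketch below (all names and data hypothetical, using a simple nearest-centroid scheme over bag-of-words vectors rather than the large multilingual models Meta describes) only illustrates that idea: a few example posts per category become centroids, and new text is assigned to the closest one.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector. Meta's FSL uses
    # large multilingual transformer models; this stand-in is only
    # meant to illustrate the few-shot concept.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class FewShotClassifier:
    """Nearest-centroid classifier: a handful of labeled examples
    per category (the "few shots") defines each class."""

    def __init__(self):
        self.centroids = {}

    def fit(self, examples):
        # examples: dict mapping label -> list of example strings
        for label, texts in examples.items():
            centroid = Counter()
            for t in texts:
                centroid += embed(t)
            self.centroids[label] = centroid

    def predict(self, text):
        # Assign the label whose centroid is most similar.
        v = embed(text)
        return max(self.centroids, key=lambda l: cosine(self.centroids[l], v))

# Hypothetical usage: two labeled examples per category are the "shots."
clf = FewShotClassifier()
clf.fit({
    "benign": ["have a great day everyone", "lovely weather today"],
    "spam": ["click this link to win a free prize", "win money fast click now"],
})
print(clf.predict("click here for a free prize"))  # prints "spam"
```

Adding a new category here requires only a few more example strings, which is the property Meta claims lets FSL adapt “within weeks instead of months”; real systems of this kind swap the toy word counts for learned embeddings.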

The announcement specifically cited COVID-19 vaccination information as an example of the potentially “misleading or sensationalized information” targeted by the new AI system. The AI will censor text and images in more than 100 languages, including alleged “hate speech,” according to Meta.

The glaring issue is that Meta itself will define what counts as “harmful content,” and its platform Facebook has a track record of biased censorship. Facebook’s algorithms have also proven defective in the past. In October, Facebook users who shared an inspirational meme, an image of a daisy growing through a sidewalk with the sentence, “Stand up for what you believe in, even if it means standing alone,” saw the meme hit with a “sensitive content” restriction. Even Democrat Congresswoman Alexandria Ocasio-Cortez (NY) has asserted that algorithms are biased.

There are multiple similar instances of Facebook’s AI algorithms making astonishing alleged errors. The Facebook page of historical reenactment society The Wimborne Militia has been disabled twice (the second time in January) because an algorithm reportedly identified the group as a real militia. Facebook had already admitted error once before in The Wimborne Militia’s case.

Facebook also blocked user Rachel Enns from sharing a fundraiser for a wheelchair van for two little girls with a rare life-threatening condition. Facebook initially did not respond, but after journalists inquired, it admitted the block was a mistake. Facebook also reportedly censored a July discussion of gardening tools in the WNY Gardeners group, apparently mistaking the word “hoe” for a disparagement of women rather than the name of a garden tool.

Conservatives are under attack. Contact Facebook headquarters at 1-650-308-7300 and demand that Big Tech be held to account to mirror the First Amendment. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.