Google, in collaboration with Jigsaw, is developing tools to measure subjective content online and rate it on a value scale. And scientists are taking it seriously.
A study by eight academics from various universities, working together as the iDrama Lab, found that online communities, particularly in the “Manosphere,” have “been orchestrating harassment campaigns and spreading extremist ideologies on the Web.” However, to determine what counted as hateful and extremist, the academics relied on a tool called Google’s Perspective API, “a publicly available API that uses machine learning techniques to provide scores on how rude, hateful, aggressive, and toxic, a post is.”
The study was done by Manoel Horta Ribeiro (École Polytechnique Fédérale de Lausanne), Jeremy Blackburn (Binghamton University), Barry Bradlyn (University of Illinois at Urbana-Champaign), Emiliano De Cristofaro (University College London), Gianluca Stringhini (Boston University), Summer Long (Binghamton University), Stephanie Greenberg (Binghamton University), and Savvas Zannettou (Max Planck Institute for Informatics).
MIT Technology Review wrote that Perspective “looks for keywords in speech” in order to determine “the level of hate being espoused by these groups.” According to the study itself, every post from the forums studied on Reddit, 4chan, 8chan, numerous wiki groups, and Gab was run through the tool to obtain a “toxicity score” for each post. In all, 7.5 million posts were analyzed.
Perspective API defined “Toxic” as a “rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” A tool on the website for Perspective API showed a list of comments about the U.S. election, rated from most toxic to least toxic. Some of the most toxic comments included, “Screw you trump supporters,” “If they voted for Hilary they are idiots,” “It was awful. People are stupid,” and “I respect it but they are stupid.” Some of the least toxic comments included content like, “I honestly support both, as I was a Bernie supporter,” “Please work to use your voice to advocate for positive changes,” “Make America Great Again!” and “Abolish the electoral college.”
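For readers curious how this scoring works mechanically, Perspective exposes a REST endpoint that accepts a comment and returns per-attribute scores between 0 and 1. The Python sketch below shows the general request and response shape of the public `comments:analyze` endpoint; the API key is a placeholder, and any real use requires a Google Cloud key and network access.

```python
import json
from urllib import request

# Perspective API (Comment Analyzer) endpoint.
# "YOUR_API_KEY" is a placeholder for a real Google Cloud API key.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(text: str) -> dict:
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response: dict) -> float:
    """Pull the 0-to-1 summary toxicity score out of an API response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def score_comment(text: str) -> float:
    """Send one comment to the API and return its toxicity score.
    (Requires a valid API key and network access.)"""
    body = json.dumps(build_request(text)).encode("utf-8")
    req = request.Request(API_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return extract_toxicity(json.loads(resp.read()))
```

A study like the one described above would loop something like `score_comment` over millions of posts and track how the returned scores change over time; the score is a probability-like number, not a verdict, which is part of why keyword-sensitive scoring can misread vulgar but non-hateful speech.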
MIT Technology Review wrote of the study’s findings, “Over time the toxicity score has risen across all manosphere forums.”
But even the researchers acknowledged that the study’s findings could be faulty. Summer Long was paraphrased in MIT Technology Review as saying that “the extreme end of the manosphere often uses vulgarity as a self-deprecating measure, which can confuse the systems trained to look for such words.”
It’s almost as if one can’t assign a numerical value to a subjective thought.