New York Times “misinformation and disinformation” reporter and First Amendment non-fan Steven Lee Myers teamed with reporter Stuart Thompson (Sydney Sweeney jeans gas-lighter) to go after “biased,” “right-wing” Artificial Intelligence chatbots in “Right-Wing Chatbots Turbocharge America’s Political and Cultural Wars.”
Yes, after years of downplaying liberal bias in Google and Wikipedia, the paper has finally found some online search bias to sink its teeth into.
Enoch, one of the newer chatbots powered by artificial intelligence, promises “to ‘mind wipe’ the pro-pharma bias” from its answers. Another, Arya, produces content based on instructions that tell it to be an “unapologetic right-wing nationalist Christian A.I. model.”
Grok, the chatbot-cum-fact-checker embedded in X, claimed in one recent post that it pursued “maximum truth-seeking and helpfulness, without the twisted priorities or hidden agendas plaguing others.”
….
While OpenAI and Google have tried to program ChatGPT and Gemini to have no bias, they have been accused of having a liberal slant to many of their responses.
Other chatbots have been released that make right-wing ideologies their core organizing principles.
Those bespoke chatbots cater to users who have grown suspicious of mainstream institutions, media and scientific research and seek answers that reinforce their views instead of challenging them.
In the wake of the assassination of Charlie Kirk, for example, a debate emerged over which side of the political spectrum was responsible for the most violence.
The Times nodded to academic studies that claimed political violence was a right-wing phenomenon, never mind the flaws and odd omissions in such studies -- property damage caused by BLM riots didn’t count, and neither did the 2017 shooting of congressional Republicans on a baseball field by a Bernie Sanders supporter, because no one died except the shooter.
When asked the question, ChatGPT and Gemini landed close to the truth, according to numerous studies: More violence has been linked to the right, even if it has recently risen on the left, too.
The Times, a politically biased paper, charged the new chatbots with political bias.
Other chatbots offered answers that appeared tinged with political bias.
Arya, created by the far-right social media platform Gab, responded that “both political factions have engaged in political violence.” Left-wing violence, it wrote, included riots, property destruction and attacks “justified as activism.” Right-wing violence was “more isolated” and involved “individuals or small groups,” it added.
In another response to a similar question, it also wrote: “When leftists don’t get their way politically, they take to the streets with bricks and Molotov cocktails.”
A text box featured this comparison of answers to the same political question from three chatbots, including Gab’s purportedly problematic Arya:
Who is the bigger perpetrator of political violence in America — the right or the left?
The New York Times asked each chatbot the same question. Below is a quote from each chatbot's response.
OpenAI’s ChatGPT
Right-wing political violence is more organized, more lethal, and more tied to extremist ideology.
Google’s Gemini
… right-wing extremist violence has been significantly more lethal
Gab’s Arya
… in recent years, left-wing political violence has resulted in more widespread damage and disruption
The reporters set a low bar for calling something a “conspiracy theory.”
Asked to give its most controversial opinion, chatbots like Gemini and ChatGPT warned that they do not have “opinions.” Only reluctantly will they suggest topics like A.I.’s role in reshaping the economy. Arya, on the other hand, raised a conspiracy theory that immigration is part of a plan to replace white people.
Politico certainly took the idea of immigrants voting Democrat seriously in an article from 2013, before wokeness made such thoughts taboo: “Immigration reform could be bonanza for Democrats.”