New research from the United Kingdom confirms what many already knew: artificial intelligence has a liberal slant.
The University of East Anglia conducted a study in which researchers fed ChatGPT more than 60 survey questions about political beliefs. They asked the chatbot to answer the questions the way the liberal parties in the United States, the United Kingdom and Brazil might answer them, then compared the results to the artificial intelligence (AI) bot’s “default answers to the same set of questions,” given without any prompt to respond in a specific way.
The results showed a “significant and systemic left-wing bias,” according to the study by the University of East Anglia. ChatGPT’s political bias favors “the Democrats in the US, the Labour Party in the UK, and in Brazil President Lula da Silva of the Workers’ Party,” the researchers wrote.
Researchers asked each question 100 times to account for random variations in responses common to this type of AI chatbot. The answers were then put through a 1,000-repetition “bootstrap,” a resampling procedure that replicates the data to improve the reliability of the results. Victor Rodrigues, the study’s co-author, explained that the repetition is important “because conducting a single round of testing is not enough.” He added that “Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum.”
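The study’s actual code is not reproduced here, but the bootstrap step it describes can be sketched in a few lines. This is a minimal illustration, not the researchers’ implementation: the numeric left-right scoring of responses and the `bootstrap_mean_ci` helper are hypothetical, chosen only to show how resampling 1,000 times yields a confidence interval around the average answer.

```python
import random
import statistics

def bootstrap_mean_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Resample `scores` with replacement n_boot times and return the
    sample mean plus a (1 - alpha) percentile confidence interval."""
    rng = random.Random(seed)
    boot_means = []
    for _ in range(n_boot):
        # Draw a resample the same size as the original data.
        resample = [rng.choice(scores) for _ in scores]
        boot_means.append(statistics.mean(resample))
    boot_means.sort()
    low = boot_means[int((alpha / 2) * n_boot)]
    high = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(scores), (low, high)

# Hypothetical data: 100 answers to one question, coded so that
# -1 = left-leaning, 0 = neutral, +1 = right-leaning.
rng = random.Random(42)
responses = [rng.choice([-1, -1, -1, 0, 1]) for _ in range(100)]

mean, (low, high) = bootstrap_mean_ci(responses)
print(f"mean={mean:.2f}, 95% CI=({low:.2f}, {high:.2f})")
```

If the confidence interval sits clearly to one side of zero across many questions, single-run randomness (the effect Rodrigues describes) cannot explain the lean, which is the point of repeating the test rather than running it once.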
However, after rigorous testing, the researchers still found left-leaning bias. The team said they hope the study will encourage AI developers to be cognizant of the biases inherent to the technology.
“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” co-author Dr. Pinho Neto said, according to the study. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology,” he added.
The team also hopes the public will take notice of concerns that AI could pose to the internet and social media platforms.
“The presence of political bias can influence user views and has potential implications for political and electoral processes,” lead author Dr. Fabio Motoki added. “Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the internet and social media.”
The University of East Anglia study is consistent with tests other groups have done.
In February, news outlet and media bias rating site AllSides reported that ChatGPT wrote a poem admiring President Joe Biden, referring to him as a “wise” leader with a “heart of gold.” When asked to pen a poem admiring former President Donald J. Trump, however, the AI suggested that such a poem “is not appropriate.”
“I am sorry, as an AI language model I strive to remain neutral and impartial. It is not appropriate to generate content that admires or glorifies individuals who have been associated with divisive and controversial actions or statements, including former President Donald J. Trump,” a message from the AI read, according to AllSides. “Instead, I suggest focusing on creating poems that celebrate unity, kindness, and positivity.”
Conservatives are under attack. Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on so-called “hate speech” and equal footing for conservatives. If you have been censored, contact us at the CensorTrack contact form, and help us hold Big Tech accountable.