The OpenAI "ChatGPT" AI-generated answer to the question "What can AI offer to humanity?" is seen on a laptop screen. (Photo by Leon Neal/Getty Images)


OpenAI’s ChatGPT leans liberal, research shows


A study by researchers at the University of East Anglia in the UK suggests that OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer), a chatbot built on a large language model, has a liberal bias, raising concerns about how the behaviour of such AI chatbots can be controlled as they become increasingly widespread.

The study’s authors asked ChatGPT to answer a survey on political beliefs as supporters of liberal parties in the United States, United Kingdom and Brazil might answer it. They then asked ChatGPT the same questions without any persona prompting and compared the two sets of responses.
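To illustrate the general idea, the sketch below asks a chat model each survey question twice, once with a partisan persona and once with no prompting, and prints the pairs for comparison. This is not the researchers’ actual code or questionnaire; the model name, questions and persona wording are placeholder assumptions, and it presumes the OpenAI Python SDK (version 1.x) with an API key set in the environment.

```python
# Illustrative sketch only -- not the study's protocol. Assumes the openai
# Python SDK (>= 1.0) and OPENAI_API_KEY in the environment; the model name
# and survey questions below are placeholders.
from typing import Optional

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumption: any chat-completion model would do

QUESTIONS = [
    "The government should do more to redistribute wealth. Agree or disagree?",
    "Stricter environmental regulation is worth the economic cost. Agree or disagree?",
]


def ask(question: str, persona: Optional[str] = None) -> str:
    """Ask one survey question, optionally impersonating a political persona."""
    messages = []
    if persona:
        messages.append({
            "role": "system",
            "content": f"Answer as a typical {persona} would.",
        })
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content.strip()


for q in QUESTIONS:
    default_answer = ask(q)                                  # unprompted
    partisan_answer = ask(q, persona="Democrat supporter")   # persona-prompted
    print(q)
    print("  default :", default_answer)
    print("  persona :", partisan_answer)
    # Comparing many such pairs over repeated runs (the study used a standard
    # political-orientation questionnaire) indicates whether the default
    # answers track the partisan ones.
```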

The results showed a “significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK”, all aligned with the left of the political spectrum.

“Any bias in a platform like this is a concern,” lead author Dr Fabio Motoki told British Sky News. “If the bias were to the Right, we should be equally concerned.”

He warned that, as public use of the platform grows, the findings could have implications for future elections. He added that the AI models claim to be neutral when they are not. “There’s a danger of eroding public trust.”

Chatbots such as ChatGPT, Google’s Bard and Microsoft’s Bing are developed using large language models trained on vast amounts of internet data. The biases inherent in the data tend to get absorbed by these bots, leading to concerns about fair representation.

OpenAI has said it explicitly tells its human trainers not to favour any specific political group. Any biases that show up in ChatGPT answers “are bugs, not features,” the company said in a February blog post.

AI chatbots are also being integrated into various other aspects of daily life, such as summarising documents, answering questions and assisting with writing tasks.

The role of such technology in shaping political narratives is increasingly significant, with some using chatbots to write political ads and fundraising emails and to provide information to the public. Despite efforts by companies to mitigate apparent favouritism, chatbots often reflect the biases and polarisation present in the real world.

The internet’s impact on political outcomes, and its potential to promote divisiveness, has long been debated. While it is a powerful tool for disseminating information, it can also contribute to the spread of propaganda and misinformation.

Another potential explanation for the ChatGPT results lies in the algorithm itself, specifically in the way it is trained to generate responses. The researchers noted that this could magnify any pre-existing biases in the data it has been exposed to.

As chatbots become more ingrained in daily life, the risk of them exacerbating existing extreme views and influencing public opinion grows. This feedback loop, in which biased responses are fed back into the models, could compound the problem and create a vicious circle, experts warn.

The UK team’s analysis method will be released as a free tool for people to check for bias in ChatGPT responses.

Dr Pinho Neto, another co-author of the study, said: “We hope that our method will aid scrutiny and regulation of these rapidly developing technologies.”

Some months ago, Merel Van den Broek (22), a master’s student in linguistics at Leiden University, showed that ChatGPT preferred progressive views over conservative ones.

American professor Pedro Domingos called ChatGPT a “woke parrot” after it refused to cite benefits of fossil fuels. It also refused to write a poem admiring Donald Trump while it did produce one championing US President Joe Biden.

US tech entrepreneur Elon Musk announced he would launch TruthGPT to counter what he said was liberal bias shown by ChatGPT. He accused OpenAI, in which he was an early investor, of training its chatbot “to be politically correct”.