A chatbot that has some information about its opponent can debate better than most people, according to a new study in Nature Human Behaviour. The finding, scientists say, raises questions about how AI could be used to manipulate opinions online.
For the study, three hundred participants were paired with human opponents in an online debate, while another three hundred were paired with ChatGPT. Each pair was assigned a topic to debate.
Topics included the usefulness of school uniforms and a ban on fossil fuels in the United States. In half of the debates, both the human and AI debaters received some information about their opponent, such as gender, education level, and political preference.
After each debate, participants indicated to what extent their conversation partner had convinced them. Without information about the opponent, humans and AI performed about equally well. With that extra knowledge, however, the AI made more persuasive arguments: in 64 percent of cases, the chatbot shifted its opponent’s opinion more than the human debaters did.
AI, in other words, is particularly good at tailoring its arguments to the profile of its opponent. “It’s like being in a debate with someone who not only has good arguments, but also knows exactly which buttons to push,” one of the researchers involved told The Guardian.
Concerns About Online Lies and Misinformation
According to the researchers, the experiment shows that AI can be used to manipulate opinions online, especially if it has access to personal data. That carries risks. “Everyone needs to be aware of the microtargeting (tailoring messages to specific target groups, ed.) that is possible due to the enormous amount of personal data that we spread across the web,” one of the researchers told The Washington Post.
Sandra Wachter, a professor at the University of Oxford who was not involved in the research, called the finding “rather alarming” in the same newspaper. She is particularly concerned about the potential for spreading online lies and misinformation. “Language models do not distinguish between fact and fiction. They are not designed to tell the truth.”