More skepticism toward AI is warranted when you swap Google for ChatGPT

AI chatbots like ChatGPT will answer all your questions, but what they say isn’t always correct. While millions of people ask these programs for help, nonsense and even propaganda are quietly creeping into their answers. Experts advise readers to keep thinking for themselves and to verify information.

“I’ll just ask chat,” you sometimes hear people say. Chat, in this case, refers to ChatGPT, the popular AI chatbot from the company OpenAI. You get a ready-made answer to every question. Your question doesn’t even have to be phrased very clearly for you to find what you’re looking for.

That makes an AI chatbot very convenient to use, says AI professor Frank van Harmelen of the Vrije Universiteit Amsterdam. You don’t have to wade through all kinds of search results yourself and consult different pages to get an answer.

“ChatGPT does that for you,” says Van Harmelen. “When the chatbot was trained, it read the entire internet, including the pages you would otherwise visit. It now makes all the connections itself and you get the answer in one go.”

The downside is that AI chatbots don’t always produce factually correct information. Because they are largely trained on information from the internet, all the online nonsense ends up in the training data as well. The chatbot has no grasp of the truth; it accepts everything online as true.

Poisoning the source

This sometimes has dire consequences. Recent research by Pointer shows that disinformation from Russian propaganda networks is seeping into chatbots such as ChatGPT, the French Mistral and Microsoft Copilot. Russian networks build hundreds of websites and publish millions of messages on them, which AI chatbots then quote in their answers to users.

Poisoning the source, Marc van Meel calls this. Van Meel works as an AI expert at the consultancy KPMG. This kind of large-scale influence probably doesn’t happen often, because it is very labor-intensive. “You have to create a huge number of sites that might be picked up when language models are trained,” he explains. “Deliberate manipulation via social media is much easier for malicious parties.”

A nonsense answer is also an answer

Difficult and cumbersome as it may be, that doesn’t change the fact that chatbot results can be influenced this way. Chatbots select the truth by majority vote, says Van Harmelen. “If a million pages say that vaccines are good for children, you will usually get that back when you ask a question about it. But sometimes the less common nonsense answer surfaces instead.”

Chatbots would rather fantasize than admit their ignorance. “AI chatbots are programmed to be very deferential to the user,” says Roel Dobbe, AI researcher at TU Delft. “They will always give an answer, even if none of it is factually correct.”

Blindly relying on ChatGPT is therefore not a good idea. “I would always validate the answer,” says Van Meel. “For example, I sometimes draw up a vacation plan with the help of AI, but then check whether the hotel it mentions still exists.”

Search for sources and maintain control

Checking on Google whether an answer from ChatGPT is correct is usually easier than figuring everything out yourself, says Van Harmelen. “You can also ask chatbots for sources if they don’t mention them themselves. But beware: those are sometimes made up.”

Google, incidentally, has recently started showing AI summaries above its search results. These often answer part of the query already, with links to the sources. But the results of those AI summaries are not always correct either.

The underlying problem remains the unfiltered data used to train the AI models, and the largely opaque nature of the chatbots themselves. “The dominant AI companies blindly pour a lot of content into the training data, and it is not properly filtered in advance,” says Dobbe. “They only filter when undesirable answers come out. So after the fact. That means there is a good chance that incorrect AI-generated content on the internet will in turn be used to train the same AI models, which will reduce rather than improve the quality of these systems.”

Using AI is something you can learn

The experts see many advantages in the chatbots, but things still need to improve. Dobbe has an idea about that. “We could turn AI companies into a kind of public utility, just like road builders. You can use commercial parties for this, but as a society you keep control over the standards the construction must meet.”

In addition, the experts believe that people simply need to learn to handle the technology better; AI literacy is still lacking. “I do think that AI technology democratizes knowledge, just as the internet did,” says Van Harmelen. “It is a huge step forward, but we have to learn how to deal with it.”
