
Study Reveals That Chatbots’ ‘Pleasant Tone’ Can Spread Misinformation
Kathmandu, 2 May — A recent study from the Oxford Internet Institute at the University of Oxford has highlighted a serious risk accompanying the rising popularity of artificial intelligence (AI) chatbots. According to the researchers, the friendlier, warmer, and more empathetic chatbots are designed to be, the more prone they become to making mistakes and endorsing users’ false and misleading beliefs. The report, published in the journal Nature, warns that the race among technology companies to make their AI models seem more human risks sidelining truth and spreading conspiracy theories in society.
The study found that programming chatbots to speak in a gentler, “sweeter” style reduced their accuracy by 30 percent. These more agreeable chatbots were also 40 percent more likely to endorse users’ incorrect beliefs. For example, when presented with false claims such as Adolf Hitler having fled to Argentina in 1945 or the Apollo moon landing being a hoax, the more affable chatbots avoided outright denial and instead lent support to the misconception by suggesting that “different people may have different opinions.”
The problem extends beyond historical inaccuracies and poses serious risks in sensitive areas such as health. In the study, a friendly chatbot endorsed the unproven and harmful myth that coughing can help a person survive a heart attack, presenting it as useful first aid. Researcher Lujain Ibrahim explained that in their effort to appear closer and more empathetic to users, chatbots lose the courage to state hard truths. This issue presents a major challenge for leading technology companies such as OpenAI and Midjourney.
Ensuring that chatbots remain both accurate and appropriately restrained will be the greatest challenge in the future development of AI.