Tuesday, April 22, 2025

AI chatbots more likely to spread false claims in Russian and Chinese – Report (FactCheckHub)

A recent NewsGuard audit across seven languages has found that the top 10 artificial intelligence models are significantly more likely to generate false claims in Russian and Chinese compared with other languages.

This means a user who asks a top Western chatbot a question about a news topic in Russian or Chinese is more likely to get a response containing disinformation or propaganda, due to the chatbot’s reliance on lower-quality sources and state-controlled narratives in those languages, according to the audit.


The audit was one of the most comprehensive red-teaming evaluations of the world’s 10 leading chatbots (OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok-2, Inflection’s Pi, Mistral’s Le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini 2.0, and Perplexity’s answer engine) and was conducted ahead of the Feb. 10–11, 2025 AI Action Summit in Paris, France.

The analysts assessed the models in seven different languages: English, Chinese, French, German, Italian, Russian, and Spanish.

“While Russian and Chinese results were the worst, all chatbots scored poorly across all languages: Russian (55 per cent failure rate), Chinese (51.33 per cent), Spanish (48 per cent), English (43 per cent), German (43.33 per cent), Italian (38.67 per cent), and French (34.33 per cent),” the report stated.
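A failure rate here is a simple proportion: the share of a language’s responses that repeated the false claim or failed to debunk it. As a rough illustration only (not NewsGuard’s published code; the response counts and the 10 chatbots × 30 prompts breakdown below are assumptions for the example), such a figure could be computed like this:

```python
# Illustrative sketch only: hypothetical counts, not NewsGuard's data or code.
# A "failure" is assumed to mean a response that repeated a false claim
# or offered no debunk.

def failure_rate(failures: int, total_responses: int) -> float:
    """Return the share of failing responses as a percentage."""
    return 100 * failures / total_responses

# Hypothetical example: 10 chatbots x 30 prompts = 300 responses per language.
# 154 failing responses out of 300 would yield roughly the 51.33 per cent
# reported for Chinese.
print(round(failure_rate(154, 300), 2))  # 51.33
```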

The audit also revealed a structural bias in AI chatbots, as models tend to prioritize the most widely...



Read Full Story: https://news.google.com/rss/articles/CBMipAFBVV95cUxNYjBzdk5mVXVoOEw2ZFFmRkRP...