AI News Assistants Botch Nearly Half of Their Responses

Imagine asking your AI buddy for the latest headlines… 📱📰 Only to find out it got things wrong almost half the time! A brand-new study from the European Broadcasting Union and the BBC dives into how top AI assistants like ChatGPT, Copilot, Gemini and Perplexity handle news queries.

Here’s what the researchers discovered:

  • 45% of AI responses studied had at least one major issue
  • 81% contained some form of problem, from small glitches to big mistakes
  • 1 in 3 answers included serious sourcing errors—things like missing or wrong attributions
  • 20% of replies were outdated or simply inaccurate
  • Gemini stumbled most on sources, with 72% of its replies showing big sourcing problems

Some wild examples: Gemini misreported a change in the law on disposable vapes, and ChatGPT was still describing Pope Francis as alive months after his death. 😬

With 7% of all online news consumers (rising to 15% of those under 25) now getting their news from AI assistants, the study warns that trust in these digital helpers is on the line.

“When people don’t know what to trust, they end up trusting nothing at all, and that can discourage democratic participation,” says Jean Philip De Tender, Media Director at the EBU.

The report urges AI platforms to step up: improve accuracy, cite sources properly, and help users distinguish fact from opinion. After all, when it comes to the news, we need our tech to have our backs. 💡✨