The Reuters Institute has found that social media and video networks have become the main source of news in the US, and that online influencers are now a major source of false or misleading information globally. Not only does AI facilitate the proliferation of mis- and disinformation on platforms, but the number of people using AI chatbots as a direct source of news is also rising, with usage roughly twice as high among people under 25. This is likely to increase as a number of companies launch AI-powered search tools, including OpenAI, Perplexity, Apple, Exa, and Donald Trump’s Truth Social.

A multi-university team of researchers warns that online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns. The study found that ChatGPT was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whomever it was debating. This raises the possibility of bad actors creating a network of LLM-based automated accounts to strategically nudge public opinion in a particular direction. Chatbots are already absorbing and amplifying falsehoods seeded by authoritarian regimes like Russia, China, and Iran. As LLMs increasingly replace search engines and become embedded in consumer products, enterprise software, and public services, the risks posed by an information ecosystem polluted by authoritarian propaganda are growing. 

Authoritarian state-backed outlets aim to infect AI models with false claims by imitating legitimate media. NewsGuard conducts monthly audits of the leading AI models and first detected this pattern in July 2024, when it found that the ten leading models repeated and amplified disinformation narratives linked to a Russian influence operation 31.75 percent of the time. Since Israel began strikes on Iran in June 2025, a wave of disinformation has been unleashed online by both pro-Iranian and pro-Israeli sources, in what the BBC calls “the first time generative AI has been used at scale during a conflict.”

There is now evidence from at least 50 countries that AI has transformed elections around the world and negatively impacted democracy. The New York Times reviewed examples from several countries, including Canada, Poland, Germany, Portugal, Romania, and Australia. AI was used in more than 80 percent of the large wave of 2024 elections, including to create deepfake clones of candidates and news broadcasts. Previously, foreign election interference relied on workers in troll farms; with AI, those campaigns can achieve their goals “at a speed and on a scale that were unimaginable” in the age of broadcast media and newspapers.

The advent of Google’s new Veo 3 video generator is dialing up concern over the proliferation of disinformation. Veo 3 can generate hyperrealistic clips that contain misleading or inflammatory information about news events. While text-to-video generators are not new, Veo 3 marks a significant leap forward for the technology. It has already been used to power a wave of racist and antisemitic content on TikTok. Despite Google’s claimed safety features, the model’s inability to grasp the subtleties of racist tropes makes it easier for users to skirt the rules. And while this type of content violates TikTok’s terms of service, the volume of uploads makes timely moderation difficult. YouTube has announced that Shorts, its increasingly popular rival to TikTok, will integrate Veo 3 later this summer.

TIME magazine reporters were able to use Veo 3 to create videos of a Pakistani crowd setting fire to a Hindu temple, Chinese researchers handling a bat in a wet lab, an election worker shredding ballots, and Palestinians gratefully accepting US aid in Gaza. After TIME contacted Google, the company agreed to add a visible watermark to Veo 3 videos, but the mark is small and can easily be cropped out with video editing software. The videos also carry an invisible watermark, but a tool to detect it is not yet commercially available. Further complicating matters, watermarking may offer little protection: a new tool called UnMarker, whose source code is currently available on GitHub, can defeat leading watermarking techniques anywhere from 57 to 100 percent of the time.

University of Notre Dame researchers found that inauthentic accounts generated by AI could easily evade detection on major social media platforms, including Facebook, Instagram, Threads, LinkedIn, Reddit, X, and Mastodon. They conclude that these platforms are not doing enough to stop harmful bots, and that the economic incentive of letting bots inflate user numbers stands in the way of meaningful reform. For its part, OpenAI says it has disrupted influence operations likely originating in China, Russia, Iran, and North Korea.

On the positive side, however, there is evidence that LLMs can also help counter mass disinformation by creating counter-narratives to educate people who might be vulnerable to online deception. One recent example is a student-built tool that identifies “radicals” on Reddit and deploys an AI bot to engage with them.

Questions to consider

  • How are platforms assessing and mitigating the risk of AI-generated disinformation, including AI-generated video content? 

    • What strategies are they using to counter disinformation campaigns? 

    • What steps are they taking to identify harmful bots? 

    • What steps are they taking to develop tools to detect and identify AI-generated videos?

  • How will Meta assess the effectiveness and fairness of its automated privacy and integrity reviews?

  • What steps are foundation model developers taking to identify and remedy attempts by bad actors to infect LLMs with false and misleading information?
