On March 10, 2026, Open MIC hosted a webinar entitled “AI and Misinformation: How Investors Can Address the Threat.” Our panelists, Gordon Crovitz, Michael Khoo, and Shuwei Fang, discussed the ways in which AI is changing our information ecosystem, how investors can avoid the material risks associated with proliferating mis- and dis-information, and opportunities for investing in better alternatives. Below is a summary of the conversation, which is also accessible in full here.

Misinformation and disinformation are at an all-time high. In the US, 40 percent of the news sources rated by NewsGuard received a low reliability score. This has several adverse consequences: a decline in trust in news and other institutions; social polarization, political instability, and weakening democracy; the delegitimization of health, climate, and other science; and an overabundance of low-quality “slop” content.

The conditions for the spread of mis- and dis-information are directly tied to the business models underlying social media and search engines. Platforms like Google, YouTube, Instagram, X, and TikTok act as information intermediaries, using algorithms to recommend content to users. This creates a direct financial incentive to produce content favored by the algorithm: highly engaging, regardless of accuracy.

On top of this, programmatic advertising (the automated, real-time placement of ads using algorithms) allows the average ad campaign to appear on 40,000 different websites, many of which are created specifically to host advertising and display highly unreliable content. This means that companies are effectively funding the proliferation of mis- and dis-information, leaving them vulnerable to wasted ad spend and to reputational damage from having their brand associated with content that is dubious, offensive, or harmful.

And then comes generative AI, which has served as an accelerant for mis- and dis-information. Large language models (LLMs) make the creation of fake images, audio, video, and websites easier than ever and facilitate astroturfing and the harassment of journalists, scientists, and advocates. LLMs also repeat false information taken from unreliable sources and present it as factual. Bad actors like authoritarian governments deliberately “infect” LLMs by flooding the web with false content, which the models then repeat when prompted. Compounding the problem, AI search tools like Google’s AI Overviews and Perplexity repurpose online content into “snippets” in a way that is opaque and subject to manipulation.

But AI also offers a path away from the business model driving mis- and dis-information. Many signs point to chatbots, AI agents, AI search, and similar apps becoming the new intermediaries of our information economy, effectively restructuring the financial incentives around online content creation. Unlike platforms, these tools don’t simply present a user with links to different content; they extract its meaning, interpret it, and recompose it at the individual user interface. In this new paradigm, there is no value in creating content tailored to platform algorithms. When information is abundant because AI can consume and repurpose it instantly, value will accrue to verification, trust, and provenance – and the infrastructure that will need to be built in support of these goals. 

Because the spread of mis- and dis-information is primarily a problem of financial incentives, investors, as stewards of capital, can play an important role in its solution:

  • De-risk ad spend: Approximately $2.5 billion worth of advertising a year ends up on unreliable websites. When engaging with portfolio companies, particularly those in sectors that depend on high levels of consumer trust (e.g. healthcare, pharmaceuticals, financial services, infrastructure, automotive), investors can ask what steps these companies are taking to ensure their brand is not associated with potentially damaging content and whether they or their ad agencies are using exclusion lists like NewsGuard’s to screen out bad placements.

  • Strengthen trust and safety: In recent years, many social media platforms have weakened or abandoned the content moderation policies and robust trust and safety teams they once had. Investors can question the logic behind these choices and the notion that online mis- and dis-information is simply too hard a problem to solve. The tech sector prides itself on finding creative solutions to all kinds of problems – why not this one? If verification commands a premium in our new information economy, what are these companies doing to ensure they will not be left behind?

  • Guardrails for generative AI: Investors can engage with, and file shareholder proposals at, companies developing generative AI to encourage them to put better guardrails on their models. This should already be in their interest, since AI that is more accurate and credible is also more marketable. And, unlike platform companies, companies developing generative AI do not have the benefit of Section 230 of the Communications Decency Act; nothing shields them from liability for their models’ outputs. (Open MIC is supporting Vancity Investment Management in a shareholder proposal at Alphabet on this issue.)

  • Signal support for regulation: Regulators across jurisdictions are devising policies to mitigate AI’s adverse impacts on society. At the same time, several leading AI companies and the VCs backing them are actively lobbying for industry-friendly regulation with fewer guardrails in the US and Europe. Investors can request transparency from companies on their lobbying practices and signal their support for smart AI regulation that removes the incentives for mis- and dis-information. 

  • Shift to a build mindset: Our information economy is in transition, which means there is an opportunity for investors to fund the buildout of the digital infrastructure that will eventually command the most value: products that support information verification, trust, and provenance. 
