Generative AI has a numbers problem. Adding users also adds costs, and finding users willing to pay enough to offset those costs has proven difficult. Only 3% of commercial users are paying for generative AI, which Menlo Ventures calls a “strikingly low conversion rate.” Though ChatGPT’s conversion rate is slightly higher at 5%, OpenAI has been aggressively pursuing new revenue streams, including rushing to integrate ads. On the enterprise side, we see Microsoft lowering its AI sales quotas and fewer than 5% of Salesforce customers paying for Agentforce.
Analysts at Bain & Co argue that AI is unlikely to generate sufficient revenue to fund further growth. By their estimate, meeting anticipated demand for AI services by 2030 would require $2 trillion in annual revenue, and the industry is on track to fall $800 billion short of that figure globally. In a recent report, the Center for Democracy & Technology’s Miranda Bogen and Nathalie Maréchal issue a stark warning: pay attention not only to the financial viability of frontier AI companies, but also to the choices they make in pursuit of it.
Over the past week we’ve been watching these choices play out in real time, with Anthropic facing pressure from the US Defense Department to drop its safety restrictions – something the company resisted for fear of its AI being used to operate lethal autonomous weapons systems or to conduct mass surveillance on US citizens. Despite these concerns mapping directly onto existing legal frameworks, the impasse has resulted in the Pentagon terminating Anthropic’s $200 million contract.
But Anthropic is not the only AI lab working with the US military or the militaries of other countries. There has been a marked trend of companies developing generative AI moving into the defense space – some, like Google, Meta, and OpenAI, quietly removing pre-existing commitments not to use their AI for warfare. In the past year, Google, Microsoft, Nvidia, OpenAI, Salesforce, Meta, and xAI have all announced AI-related defense contracts – with some going so far as to embed executives within the ranks of the US military.
Unlike consumer and enterprise spending on AI, defense spending is steadily rising amid mounting global conflict and instability. Government contracts often come with high price tags and longer-term commitments, which is perhaps why we’re seeing companies that have sunk large amounts of capital into generative AI reassess their risk tolerance for selling this powerful nascent technology for military use.
And they’re not the only ones; PitchBook reports that VC funding for defense tech climbed so high in 2025 “that even the term ‘record-breaking’ feels inadequate.” Some publicly traded tech companies, like Alphabet, Salesforce, and Nvidia, are doubly exposed: in addition to holding their own defense contracts, they are among the largest venture investors in dual-use generative AI.
The private sector has always played a role in defense, but dual-use generative AI poses a unique challenge for limited partners and public equity investors.
The blurring of tech and defense renders traditional defense screens obsolete. Investors have very little transparency into the nature of government contracts and what, if any, guardrails exist for how a company’s generative AI will be used. There is even less transparency in private markets. This means active engagement with portfolio companies and general partners is essential to meaningfully assess risk – both financial and social.
The use of generative AI in military contexts is still largely untested. To quote Jack Shanahan, retired US Air Force General and the first Director of the Department of Defense’s Joint Artificial Intelligence Center: “Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe.”
Dual-use and military applications of AI carry many known risks. Generative AI has inherent error rates, which are amplified in unpredictable environments like a battlefield. The speed of AI decision-making can outpace precautions to prevent or minimize harm to civilians, and it can increase the likelihood of conflict escalation. Even with a human in the loop, evidence suggests people are more likely to defer to the judgment of machines in high-stress environments.
All of this translates into potential regulatory, legal, human capital, and reputational risks for companies and their investors – the latter of which we are also watching play out in real time. Since Anthropic upheld its military guardrails, new users have flocked to Claude. Conversely, as OpenAI signaled it will take up Anthropic’s former role with the Department of Defense, ChatGPT has lost consumer market share.
In response to the backlash, OpenAI announced it would add a prohibition on domestic surveillance to its contract terms, but said nothing about autonomous weapons systems. CEO Sam Altman promised that while there are “many areas we don’t yet understand the tradeoffs required for safety,” the company will “work through these, slowly, with the DoW, with technical safeguards and other methods.”
Effectively, OpenAI is asking investors and the general public to simply trust that it will be willing and able to prevent potentially catastrophic uses of its models – even without any explicit red lines. This should not be the model for dual-use generative AI contracts. Even Anthropic’s rejected guardrails, though better than nothing, were far from sufficient for the complexity and gravity of the context.
Not all generative AI will be fit for dual use – even if the revenue opportunities seem promising. Daron Acemoglu, Nobel laureate and a leading economic expert on AI, reminds us that the direction of AI development is not preordained and does not have to be dystopian. AI for military use can be “responsible-by-design,” with legality and accountability engineered into the earliest stages of system inception rather than retrofitted after deployment. Investors can support this vision of the future.
Dual-use AI is likely hidden in many portfolios. A crucial first step for investors is to assess the extent of their exposure.
Topics for engagement:
Disclosure: Private market investors with exposure to companies developing generative AI can request greater transparency from fund managers and portfolio companies regarding existing military contracts or plans for future dual-use applications. Investors in publicly traded hyperscalers can likewise seek greater transparency around government contracts and around how corporate venture arms are investing in dual-use generative AI.
The business case: Government and military contracts often carry lower profit margins, complex regulatory requirements, and bureaucratic processes that can increase costs and operational risk. Public and private market companies should be able to articulate the strategic value of taking on these risks, particularly if doing so may dampen consumer sentiment in commercial markets.
Governance: For both public and private companies, what governance structures are being put in place to assess and manage the complex risks associated with dual-use AI? Are there board members and senior leadership with relevant expertise? Does the company have a human rights policy or acceptable use policy? Are there clear articulations of permissible and impermissible uses of AI systems in military contexts? What oversight mechanisms exist?
Responsible design: Are generative AI companies approaching the design of models and applications for military contexts differently? What steps are they taking to embed legality, accountability, and ethics across the lifecycle of their AI systems?