Tech is now “driving the narrative of conflict,” and the venture capital sector sees dual-use and defense technology as a major area of opportunity. Total VC investment in defense tech in 2025 has already reached a new peak. The private sector is now the primary driver of rapid defense innovation: startups can deploy new tech on a shorter timeline than traditional government R&D. And a dual-use strategy can give startups the revenue they need to keep funding further development. The Veteran Fund, a VC firm run by GPs with military experience, is raising $50 million for a fund to invest in dual-use startups that generate revenue from both government contracts and the private sector.

Military tech whistleblowers are warning about AI in the “kill cloud”: the data infrastructure built by companies like Amazon, Microsoft, and telecom providers that militaries can use to identify and target people. LLMs draw on vast amounts of data scraped from the internet to identify targets, with no evidence of an audit trail. Even with a “human in the loop,” there is a built-in assumption that the AI must be right. And the lack of a framework for responsibility is troubling: who is accountable when a missile is fired? The person in the loop, the commander, the programmer who built the AI, or the company, such as Microsoft or Google, that provided the system?

Following intense scrutiny, Microsoft has terminated a set of services for the Israeli military after a Microsoft-led investigation uncovered evidence supporting journalists’ findings that Israel was using the company’s cloud computing tech for mass surveillance of Palestinians. Microsoft’s move comes as a United Nations commission of inquiry has concluded that Israel committed genocide in Gaza, a charge supported by many international law experts. It also comes in response to increased investor scrutiny and growing anxiety among tech employees about their companies’ enmeshment with militaries.

Questions to consider

  • How are GPs investing in dual-use startups assessing risk to human rights? What steps are startups taking to ensure their tech is not going to be used in ways that contravene the principles of international humanitarian law (IHL)? Have they assessed their potential liability under IHL?

  • How are tech and telecom companies assessing the human rights and legal risks of their devices and infrastructure being used for military targeting?

  • How are tech companies developing and licensing AI tools to militaries assessing the human rights and legal risks? How are they responding to employee feedback? Do they have adequate whistleblower protections?
