Hello!

Thank you for agreeing to participate in Open MIC’s soft launch of Material Tech. Currently, Material Tech consists of an online platform and a regular newsletter, below. Click any link to read the full article or list of updates on the platform.

Material Tech is a response to a common refrain we hear from our investor partners: it’s challenging to keep up with the rapid pace of development in tech. Our goal is to keep you up-to-date on the companies at the leading edge of developing and deploying new tech. 

We use an integrated approach to writing about the materiality of tech news, reflecting how - especially in tech - social risk and financial risk are two sides of the same coin.

Each Material Tech article is written to highlight the “so what?” for investors, i.e. how does a particular news item impact companies in an investor’s portfolio and what questions should they be thinking about? 

You will be reviewing a starting sample of the kind of content we will feature on Material Tech. We envision adding other formats, including longer-form reports, info-graphics, data sets, analyses of company reporting, interviews, and short videos. We are also exploring how to incorporate an AI-enabled investor due diligence tool. 

We’re working with a design team to develop the branding for Material Tech, so what you’ll see is a placeholder.

We’ve prepared a survey to help facilitate your response, but please feel free to contact us directly to offer your thoughts on anything and everything. 

Many thanks for taking the time to help us build this new project. We have always valued your insight, and we hope Material Tech will be of value to you. 

Sincerely, 

Audrey & Michael

Material Tech Newsletter - October 2025

Artificial Intelligence

AI Market News

AI infrastructure is now a major driver of U.S. economic growth, but it's being financed through increasingly complex and risky financial strategies that echo the subprime mortgage crisis. As tech giants and startups rack up debt to fund data centers and cloud capacity far ahead of actual demand, experts warn of mounting systemic risk, unsustainable valuations, and potential fallout for both investors and local communities.

AI Privacy

AI companies are increasingly harvesting vast amounts of personal data to fuel model development and personalized services, which raises serious privacy, security, and legal concerns. As generative AI faces technical limits, experts warn that firms may shift to surveillance-based business models, exploiting sensitive user data without meaningful consent or transparency.

AI’s Human Harms

AI systems accessible to children must prioritize youth safety, transparency, and privacy from the outset. Yet companies like Meta and some toy makers are already ignoring these guidelines, raising serious concerns about exploitation, inappropriate content, and unknown developmental risks for children.

In other AI news:

  • The Center for Humane Technology warns: do not be distracted by the surge of attention around “AI welfare” and Microsoft’s AI CEO agrees.

  • Research from MIT highlights a “cognitive cost” of regularly using LLMs. 

  • New research suggests that the people most likely to use AI tend to be those who understand the technology the least, a reversal from other technological trends. 

  • An analysis of the six-most downloaded AI companion apps found that 43% used tactics such as guilt appeals, fear-of-missing-out hooks, and metaphorical restraint when users tried to disengage. 

Information Integrity

AI is accelerating the spread and sophistication of global disinformation, with chatbots, video generators, and social media platforms being used to manipulate public opinion, disrupt elections, and amplify propaganda—often at the hands of authoritarian regimes.

Workers

The recent surge in unemployment among college graduates is driven by a mix of factors, not just AI. While AI is augmenting tasks rather than replacing most jobs, companies appear to be scaling back entry-level hiring and training in anticipation of future automation, making it harder for early-career workers to enter the job market.

  • Fear not, workers! Palantir has launched “Working Intelligence: The AI Optimism Project,” a “quasi-public information and marketing campaign” centered around AI in the workplace. 

  • The Chartered Management Institute reveals that one third of UK employers is using some kind of “bossware” technology to track workers’ activity, and this may be an underestimate.

  • While President Trump wages a trade war premised on the return of manufacturing work, the US tech sector is backing ventures that aim to replace human workers with robots

  • Amazon marked the deployment of its millionth robot in its warehouse operations. 

Climate & Environment

The true environmental cost of AI is vastly underestimated, and the companies developing it remain opaque about energy use and emissions. Despite some efforts at transparency, the current pace and scale of AI deployment is unsustainable without clearer accountability and better efficiency metrics.

  • Copper and lithium extraction in Chile’s Atacama desert has been accelerated to keep pace with generative AI’s need for power plants and power lines, devastating the local environment and communities.

  • While the access to cheap and abundant energy seemingly makes the Gulf region seem like a good place to situate data centers, the current tensions in the area introduce new risks. 

  • US states are facing pressure to insulate regular household and business ratepayers from rising electric bills due to the energy demands of data centers. More than a dozen states have taken steps to address the issue. 

Platforms

Meta’s recent policy rollbacks and lack of transparency have led to increased harmful content and user distrust, and the company’s automation of content moderation raises concerns about safety and integrity. Several other social media platforms are nevertheless following Meta’s lead. 

Cybersecurity

LLMs and AI coding agents are increasingly vulnerable to “prompt injection” attacks that can trick them into revealing sensitive data or executing malicious code. This raises significant security risks as these tools become more common and gain more authority in development environments. 

  • Researchers anticipate an increase in AI-driven data breaches that impede the deployment of networked low-carbon technologies, like electric vehicles, smart building systems, and grid-responsive technologies that we need  to mitigate climate change.

  • Despite purporting to have safety measures in place, the Tea app’s users - who are predominantly women -  had their images, identifying information, and private conversations compromised in two separate security breaches in late July, leading to online abuse against the women whose information was stolen.

  • Ron Deibert, director of Citizen Lab, is sounding the alarm to the cybersecurity community to step up and join the fight against authoritarianism and what he sees as the “fusion of tech and fascism.” 

Surveillance & Dual-Use Tech

VC investment in dual-use tech is booming as startups aim to corner both government and private markets. As dual-use tech proliferates, concerns are growing over the ethical implications of AI-powered military systems like the “kill cloud,” with companies like Microsoft facing backlash for enabling surveillance and human rights abuses by militaries. 

  • The race to dominate the AI-powered defense drone market is intensifying.

  • NVIDIA chips are finding their way into Russian drones that use AI, enhanced navigation systems, and real-time acquisition logic. These are drones that “no longer follow orders [but] follow intent.” 

  • Many nuclear war experts take it as an inevitability that governments will mix AI and nuclear weapons. 

  • A data broker owned by major US airlines is selling access to five billion plane ticketing records to the US government for warrantless searching and monitoring of peoples’ movements, including by the FBI, Secret Service, and ICE.

Emerging Tech

Brain-computer interface companies like Neuralink and Merge Labs are advancing invasive technologies that connect the brain directly to external devices, raising major ethical, legal, and privacy concerns.

  • The Center for Privacy and Technology found that the Department of Homeland Security captured the DNA of thousands of US citizens. Unlike other forms of mass surveillance, genetic surveillance also impacts any of the affected party’s biological relations that exist today and may exist in the future. 

  • rBio, an AI model trained to reason about cellular biology using virtual simulations, is poised to accelerate biomedical research and drug discovery. 

  • SpaceX is paying $17 billion for the rights to some of EchoStar’s valuable spectrum for cellphone service, laying the foundation for Starlink’s global direct-to-cell business.

  • Amazon’s Project Kuiper, a rival to Musk’s Starlink, now has over 100 satellites in orbit, with more set to launch in the coming months.

  • US officials are looking for alternatives to SpaceX and have indicated that they will seek to buy rocket launches, satellite internet, and other services from emerging competitors if available. 

Policy Updates

Governments are reassessing their reliance on foreign tech, with European countries moving away from U.S. tech giants like Microsoft and scrutinizing foreign-controlled AI apps over data privacy concerns. In response, some companies are promoting digital sovereignty through local platforms  or negotiating “sovereignty as a service” deals.

  • The Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 released a report that “investigates the corporate machinery” sustaining the war in Gaza and outlines international law mechanisms for holding the private sector - and its executives - accountable. Tech companies, including IBM, Microsoft, Google, and Amazon, are mentioned specifically.

  • New York State has become the first US state to ask companies to disclose when AI contributes to mass layoffs, the first official step toward measuring and perhaps eventually regulating AI’s impact on the labor market.

  • The European Commission published its final Code of Practice on General-Purpose AI. It sets out detailed expectations for companies around transparency, copyright, and measures to mitigate systemic risks. Mistral, OpenAI, Anthropic, Microsoft, and Google have signed or signalled their willingness to sign the code. Meta has stated that it will not.

  • The UK’s Online Safety Act has come into force, requiring online platforms to check that all UK-based users are at least eighteen years old before granting access to “harmful” content. 

  • President Emmanuel Macron said he will ban social media for youth under 15 years old if progress is not made at the EU level to significantly limit the amount of time minors can spend online. 

  • Denmark is considering a proposed change to its copyright law that would ensure that everybody has a right to their own body, facial features, and voice. It would give people in Denmark the right to demand that online platforms remove content that includes their likeness if it is shared without their consent. 

  • Following a ruling in an antitrust lawsuit, Google is able to keep its search business running “largely without interruption.” The company will have to share more of its data with competitors and create an oversight committee to monitor its business practices. Generative AI featured heavily in the judge’s reasons, predicting that users will increasingly use chatbots like ChatGPT, Perplexity, and Claude to gather information they previously sought through internet search. The ruling, which comes alongside a parallel EU case, has implications for Apple’s and Meta’s upcoming cases. 

  • Amazon has to pay $2.5 billion to settle FTC claims that it used deceptive sign-up and cancellation practices in its Prime program. $1.5 billion of the settlement funds will go to an estimated 35 million customers.

  • OpenAI is being sued by the parents of a 16-year-old who died by suicide after eight months of ChatGPT use in which the chatbot validated his self-destructive thoughts. This case is the first major lawsuit against a general-purpose AI chatbot for psychological harm. The plaintiffs allege that their son’s death “was a predictable result of deliberate design choices.”

  • Meta is being sued by one of its former security engineers for failing to protect its users’ data, violating privacy regulations, and firing him in retaliation for whistleblower complaints with US authorities. 

  • The New York State Supreme Court ruled that a lawsuit against Meta and TikTok for the wrongful death of a minor can move forward. The court distinguished this case from other Section 230 cases, suggesting that Meta’s and TikTok’s algorithms went beyond simple promotion of the content in question.

  • Microsoft settled with EU anti-trust regulators to decouple Teams from its Office productivity suites. The decision follows a 2020 complaint by Slack, now part of Salesforce, alleging that tying Teams to Office gave Microsoft an unfair market advantage.

  • A Brazilian indigenous group brought a formal complaint before federal authorities asking them to stop the development of a TikTok data center being built on their land. The group’s leaders say their legal right to consultation was violated and that their concerns about the project’s water consumption are being ignored.

  • The Chilean government has partnered with 30 institutions across Latin America and the Caribbean to create Latam-GPT as an alternative to Big Tech LLMs, which have limited capabilities in languages other than English. The model follows Southeast Asia’s Sea-Lion, Africa’s UliaLlama, India’s BharatGPT, and Mongolia’s Egune AI

  • Limited partners are increasingly adding AI ethics to their due diligence practices, even as ESG has fallen out of the limelight. Regulatory developments at the state level in California and New York are helping to drive the push. Early leaders include Manulife, StepStone Group, and Norway’s sovereign wealth fund.

  • Pharmacists in the Amazon region are using AI developed by Brazilian nonprofit NoHarm (and backed by Google and Amazon) to process prescriptions faster in overburdened public clinics. Early evidence from the project suggests it is a scalable model for under-resourced health systems.

  • AI “Godfather” Yoshua Bengio is launching a new non-profit AI safety research organization called LawZero that intends to prioritize safety over commercial imperatives. The organization is a response to what Bengio sees as today’s frontier models “growing dangerous capabilities and behaviours, including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment.”

  • Startup Crusoe Energy Systems, currently valued near $10 billion, converts stranded fuel into computing energy to power data centers, a solution that addresses both energy inefficiency and environmental waste. Since its founding in 2018, Crusoe has prevented over 22 billion cubic feet of natural gas from being flared, which is equivalent to taking 630,000 cars off the road annually.

Keep Reading

No posts found