Logo
By Open MIC
Search
Login
Sign Up

OpenAI

In Other News: Human Harms - October, 2025

Oct 6, 2025

Read More

Microsoft

OpenAI

Artificial Intelligence

Legal Updates: United States - October, 2025

Oct 2, 2025

Read More

Perplexity

Amazon

Anthropic

TikTok

Alphabet

Meta

OpenAI

Legal Updates

United States

Policy Updates: October, 2025

Oct 2, 2025

Read More

Microsoft

Multilateral

IBM

Amazon

Anthropic

Alphabet

MENA

OpenAI

Mistral

Policy Updates

United States

Europe

Home-Grown Tech Stacks

Governments are reassessing their reliance on foreign tech, with European countries moving away from U.S. tech giants like Microsoft and scrutinizing foreign-controlled AI apps over data privacy concerns. In response, some companies are promoting digital sovereignty through local platforms or negotiating “sovereignty as a service” deals.

Oct 2, 2025

Read More

Microsoft

Amazon

Oracle

Asia

Alphabet

MENA

OpenAI

DeepSeek

Gander

Canada

Policy Updates

Broadcom

United States

Europe

Who’s Liable for AI Harms?

Lawmakers are struggling to define liability for AI-related harms. The EU dropped its efforts to advance the AI Liability Directive. Federal efforts in the US increasingly shield tech companies from accountability, while states are pushing forward with stronger regulations.

Oct 2, 2025

Read More

Anthropic

Alphabet

OpenAI

Policy Updates

United States

Europe

Are We Ready for “the Merge?”

Brain-computer interface companies like Neuralink and Merge Labs are advancing invasive technologies that connect the brain directly to external devices, raising major ethical, legal, and privacy concerns.

Oct 2, 2025

Read More

Neuralink

OpenAI

Biotech

Emerging Tech

AlterEgo

Merge Labs

United States

Prompt Injection Needs Prompt Attention

LLMs and AI coding agents are increasingly vulnerable to “prompt injection” attacks that can trick them into revealing sensitive data or executing malicious code. This raises significant security risks as these tools become more common and gain more authority in development environments.

Oct 2, 2025

Read More

Perplexity

Cybersecurity

Cursor

OpenAI

United States

RunSybil

Assessing AI’s Energy Appetite

The true environmental cost of AI is vastly underestimated, and the companies developing it remain opaque about energy use and emissions. Despite some efforts at transparency, the current pace and scale of AI deployment is unsustainable without clearer accountability and better efficiency metrics.

Oct 2, 2025

Read More

Climate & Environment

Alphabet

OpenAI

United States

In Other News: Workers - October, 2025

Oct 1, 2025

Read More

Palantir

Amazon

Workers

OpenAI

NVIDIA

Authoritarian AI

AI is accelerating the spread and sophistication of global disinformation, with chatbots, video generators, and social media platforms being used to manipulate public opinion, disrupt elections, and amplify propaganda—often at the hands of authoritarian regimes.

Oct 1, 2025

Read More

Mastodon

Microsoft

Perplexity

Truth Social

TikTok

Alphabet

Meta

X

OpenAI

Reddit

Exa

Information Integrity

United States

Apple

The AI Kids Are Not Alright

AI systems accessible to children must prioritize youth safety, transparency, and privacy from the outset. Yet companies like Meta and some toy makers are already ignoring these guidelines, raising serious concerns about exploitation, inappropriate content, and unknown developmental risks for children.

Oct 1, 2025

Read More

Meta

OpenAI

Mattel

Human Harms

United States

AI is Listening, and it Can’t Keep a Secret

AI companies are increasingly harvesting vast amounts of personal data to fuel model development and personalized services, which raises serious privacy, security, and legal concerns. As generative AI faces technical limits, experts warn that firms may shift to surveillance-based business models, exploiting sensitive user data without meaningful consent or transparency.

Oct 1, 2025

Read More

Privacy

Amazon

BBVA

xAI

Meta

OpenAI

JP Morgan Chase

United States

Europe

A World Beyond LLMs?

A new Apple study confirms that LLMs fail at true reasoning, relying on pattern matching rather than logic. Experts warn that current evaluation methods overstate LLM capabilities, and some researchers argue that advancing AI will require moving beyond LLMs toward other approaches that better mimic human understanding.

Oct 1, 2025

Read More

Anthropic

Meta

Salesforce

World Labs

OpenAI

DeepSeek

Artificial Intelligence

United States

Apple

AI Bubble Watch

Despite massive VC investment, many AI startups are struggling to deliver returns, with valuations skyrocketing even as most projects fail to generate meaningful revenue and the costs of model training and inference rise. Experts are cautioning that the market may be more overvalued than in the dot-com era.

Oct 1, 2025

Read More

Thinking Machines Lab

Anthropic

xAI

OpenAI

Artificial Intelligence

Andreessen Horowitz

United States

All Eyes on Agents

AI agents are generating major interest, with both Big Tech and VC rapidly investing in the space. But many so-called "agents" are just rebranded chatbots or automation tools. Despite their promise, real-world performance remains inconsistent, and concerns over reliability, misuse, and integration challenges are leading some analysts to warn of high failure rates.

Oct 1, 2025

Read More

Microsoft

Anthropic

Y Combinator

Alphabet

Salesforce

OpenAI

Artificial Intelligence

United States

Material Tech

Material Tech equips investors with an integrated materiality analysis of the latest developments in digital and emerging technologies. Stay ahead of risks and opportunities with data-driven insights, deep dives, and expert perspectives.

The Material Tech Newsletter is a publication of Open Media and Information Companies Initiative (Open MIC), a non-profit working to foster greater corporate accountability in the deployment and use of digital technologies.

To learn more about Open MIC, visit us on the web.

© 2025 Open Media and Information Companies Initiative.

Report abuse

Privacy policy

Terms of use

Powered by beehiiv