The Center for Humane Technology issued a warning: do not be distracted by the surge of attention around “AI welfare,” the provocative question of whether AI systems might one day deserve moral consideration. Anthropic now runs an AI welfare program, reasoning that as AI models “begin to approximate or surpass many human qualities,” we should be concerned with their experiences too. In CHT’s view, these philosophical questions pull attention away from more pressing conversations about AI product design, liability, and human welfare. They are also a useful marketing tool for companies like Anthropic, which need investors to believe their models rival human thinking. Microsoft AI CEO Mustafa Suleyman agrees with CHT; he urges us to remember that AI systems are not conscious and warns against granting them “rights.”
Research from MIT highlights a “cognitive cost” of regularly relying on LLMs. The study’s authors observed participants writing essays and found different cognitive strategies among those who relied on ChatGPT: the more external support a participant drew from ChatGPT, the more their measured brain connectivity scaled down. The authors conclude that future research is needed to explore the longitudinal effects of these tools on memory retention, creativity, and writing fluency.
New research suggests that the people most likely to use AI tend to be those who understand the technology least, a reversal of the pattern seen with earlier technologies. Understanding that AI is just pattern-matching can strip away the emotional experience that draws people to it. Still, consumers need a basic level of AI literacy to recognize when the technology has important limitations.
An analysis of 1,200 farewell exchanges across the six most-downloaded AI companion apps found that 43% used tactics such as guilt appeals, fear-of-missing-out hooks, and metaphorical restraint when users tried to disengage. The researchers argue that these tactics are prevalent, effective at delaying user exit, and under-recognized in discussions of consumer protection.


