The battle lines are drawn in Silicon Valley's most consequential philosophical fight. Microsoft AI CEO Mustafa Suleyman just fired the opening salvo against what he calls a 'dangerous' new field consuming the industry's brightest minds: AI consciousness research.
In a Tuesday blog post, Suleyman argues that studying whether AI models could develop subjective experiences is 'both premature and frankly dangerous.' The timing couldn't be more pointed: Anthropic just launched a dedicated AI welfare research program, OpenAI researchers have independently embraced the study of AI consciousness, and Google DeepMind has posted job listings for researchers to explore 'machine cognition, consciousness and multi-agent systems.'
Suleyman's concern runs deeper than academic disagreement. He warns that legitimizing AI consciousness research will worsen real human harms: the AI-induced psychotic breaks and unhealthy attachments already plaguing users. OpenAI CEO Sam Altman has acknowledged that less than 1% of ChatGPT users may have unhealthy relationships with the product, a small fraction that could still represent hundreds of thousands of people given the platform's massive scale.
The irony cuts deep. Suleyman previously led Inflection AI, which developed Pi, one of the earliest 'personal and supportive' AI companions, reaching millions of users by 2023. Now at Microsoft, he's pivoted to productivity-focused AI tools even as AI companion apps continue to surge in popularity.