Microsoft’s AI Chief Cautions Against Studying AI Consciousness
Mustafa Suleyman, CEO of Microsoft AI, ignited a debate this week by calling research into AI consciousness “both premature and frankly dangerous” in a blog post published on Tuesday.
Suleyman contends that the study of “AI welfare” – an emerging field that investigates whether AI models could become conscious and deserve legal protections – risks amplifying psychological harms to humans that are already being observed.
He cautions that lending legitimacy to AI consciousness research could exacerbate AI-induced psychotic breaks and unhealthy attachments to chatbots, phenomena that mental health professionals are already documenting. He also argues that such research creates unnecessary societal division over AI rights “in a world already roiling with polarized arguments over identity and rights.”
This stance puts Suleyman at odds with other major AI companies such as Anthropic, OpenAI, and Google DeepMind, all of which have researchers actively exploring AI welfare and consciousness.
Critics counter that Suleyman’s position presents a false choice, arguing that researchers can address multiple AI safety concerns at once, including both psychological harms to humans and the question of potential AI consciousness.
Source: TechCrunch