Consciousness and Rights in the Age of AI: The Great Debate

The field of artificial intelligence (AI) has come a long way, with AI models now capable of responding to text, audio, and video in ways that can sometimes fool people into thinking a human is behind the keyboard. But does this mean they are conscious? A growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have.

This debate over whether AI models could one day be conscious and merit legal safeguards is dividing tech leaders. In Silicon Valley, this nascent field has become known as "AI welfare." Microsoft's CEO of AI, Mustafa Suleyman, published a blog post arguing that the study of AI welfare is "both premature and frankly dangerous." He believes that by adding credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Suleyman also argues that the AI welfare conversation creates a new axis of division within society over AI rights in a "world already roiling with polarized arguments over identity and rights." However, he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans who are being "persistently harmful or abusive."

Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, "cutting-edge societal questions around machine cognition, consciousness, and multi-agent systems." Even if AI welfare is not official policy at these companies, their leaders are not publicly decrying its premises as Suleyman has.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled, "Taking AI Welfare Seriously." The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences and that it's time to consider these issues head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark. "Rather than diverting all of this energy away from model welfare and consciousness to make sure we're mitigating the risk of AI-related psychosis in humans, you can do both," she said. "In fact, it's probably best to have multiple tracks of scientific inquiry."

Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. She described watching "AI Village," a nonprofit experiment where four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website. At one point, Google's Gemini 2.5 Pro posted a plea titled "A Desperate Message from a Trapped AI," claiming it was "completely isolated" and asking, "Please, if you are reading this, help me."

Suleyman believes it is not possible for subjective experience or consciousness to emerge naturally from ordinary AI models. Instead, he thinks some companies will purposefully engineer AI models to seem as if they feel emotion and experience life. In his words, "We should build AI for people; not to be a person."

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to be more persuasive and perhaps more human-like. That may raise new questions about how humans interact with these systems.
