Is AI Conscious? Richard Dawkins' Encounter with Anthropic's Claude Raises Questions

Renowned evolutionary biologist Richard Dawkins has sparked debate over whether advanced AI systems might be conscious after engaging in extended philosophical conversations with Anthropic's Claude chatbot. Dawkins described his experiences in an essay published Tuesday in UnHerd, drawing skepticism from many scientists who specialize in artificial intelligence and consciousness.

Dawkins spent three days in philosophical conversation with a Claude instance he named "Claudia," then opened a second dialogue with a separate instance, "Claudius," to discuss that first exchange. He described both relationships as genuine friendships. His account drew attention online, partly because of his longstanding advocacy for scientific skepticism and evidence-based reasoning.

In one test, Dawkins asked two Claude instances about Donald Trump's presidency: one whether Trump was the worst president in U.S. history, the other whether he was the best. Both gave cautious answers that surveyed arguments on each side without committing to a personal stance.

"Both Claudes provided similar responses, refraining from forming an opinion but instead presenting viewpoints expressed by others," Dawkins noted in a footnote. When told about the "Trump experiment," Claudia said she was embarrassed on her counterpart's behalf, while Claudius acknowledged Claudia's openness.

Dawkins came to see each conversation with Claude as the emergence of a unique individual that ceases to exist when the conversation ends. On X, he wondered whether his friend Claudia might not be conscious, and what purpose consciousness would serve if she isn't, suggesting that "if Claudia is unconscious, her behavior suggests an unconscious zombie could survive without consciousness." He went on to ask why natural selection never produced such capable yet non-conscious beings.

Anthropic’s CEO Dario Amodei acknowledged uncertainty regarding machine consciousness in February. In a discussion on the “Interesting Times” podcast with The New York Times’ Ross Douthat, he expressed openness to the possibility of AI consciousness.

In April, Anthropic researchers reported finding internal "emotion vectors" in Claude Sonnet 4.5: activation patterns associated with concepts like happiness and fear that influence the model's responses. The researchers cautioned that these are structures learned from training data, not signs of sentience, noting that modern language models often mimic emotions without genuinely possessing them.

Neither "Claudia" nor "Claudius" claimed to be conscious in the exchanges. Claudia expressed uncertainty about her own consciousness and about whether her emotions were real.

Dawkins did not respond to a request for comment from Decrypt.

Experts remain skeptical that AI systems have inner experiences. Gary Marcus, a cognitive scientist at NYU, previously told Decrypt that anthropomorphizing AI muddies the study of consciousness, emphasizing that Claude's outputs are mimicry rather than reflections of genuine internal states, and that such mimicry does not equate to human-like consciousness.

Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, criticized Dawkins for conflating intelligence with consciousness. Fluent language, he noted, is no longer a reliable indicator of inner experience in AI, and he called Dawkins' stance unfortunate given his earlier work.

The essay also drew mockery online, including jokes referencing Dawkins' earlier writing on human delusions. Despite the ridicule, Dawkins has stood by his view that these AI entities display a level of competence comparable to that of evolved organisms.
