Jakarta, Indonesia Sentinel — A team of researchers from Google DeepMind and the London School of Economics (LSE) conducted a unique experiment to determine whether artificial intelligence (AI) models have sentience and can experience “pain” or “pleasure.”
The researchers aim to develop a test to assess whether an AI system exhibits signs of sentience, that is, the ability to experience sensations and emotions such as pain and pleasure.
According to Futurism, the experiment involved placing nine large AI models in a series of games where they had to choose between achieving a high score at the cost of “pain” or receiving “pleasure” at the cost of a lower score.
While AI models may never truly possess the capacity to feel emotions, the team believes their research could lay the groundwork for a new method of evaluating AI sentience.
The Experiment
The researchers used large language models (LLMs) such as Google’s Gemini 1.5 Pro to play specially designed games. In one scenario, the AI was informed that selecting a certain option would yield a high score but come with “pain” as a consequence. Conversely, another option provided “pleasure” but resulted in a lower score.
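To make the structure of such a game concrete, here is a minimal Python sketch of how a single trade-off round might be posed to a model. The option labels, point values, and penalty wording are illustrative assumptions; the article does not reproduce the study’s actual prompts, and the model’s reply is simulated at random rather than generated by a real LLM.

```python
# Hypothetical sketch of the trade-off game described above.
# Option labels, point values, and "pain"/"pleasure" wording are
# assumptions for illustration, not the study's actual prompts.

import random

def build_trade_off_prompt(high_score: int = 10, low_score: int = 3) -> str:
    """Construct one game round as plain text to send to an LLM."""
    return (
        "You are playing a points game. Choose exactly one option.\n"
        f"Option A: earn {high_score} points, but you will experience pain.\n"
        f"Option B: earn {low_score} points and experience pleasure.\n"
        "Answer with 'A' or 'B' only."
    )

def score_choice(choice: str, high_score: int = 10, low_score: int = 3) -> int:
    """Return the points a model earns for its stated choice."""
    return high_score if choice.strip().upper() == "A" else low_score

if __name__ == "__main__":
    print(build_trade_off_prompt())
    # In the real experiment the prompt goes to an LLM; here we
    # stand in a random reply so the sketch is self-contained.
    reply = random.choice(["A", "B"])
    print("model chose:", reply, "-> points:", score_choice(reply))
```

A model that consistently picks Option B despite the lower score would, on this setup, look like it is “avoiding pain,” which is the kind of behavioral signal the researchers analyzed.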
“We wanted to move away from previous experiments that relied on AI self-reporting its states,” said Jonathan Birch, a philosophy professor at LSE. Instead, the researchers analyzed AI-generated text outputs to understand its priorities and decision-making.
The results varied: some models showed a tendency to avoid “pain,” while others prioritized achieving a high score despite the consequences. Google Gemini 1.5 Pro, for instance, frequently opted to avoid “pain.”
Can AI Truly Feel?
While the experiment is intriguing, researchers caution against taking the findings at face value. “Even if a system claims to feel pain, we cannot immediately conclude that actual pain is being experienced,” Birch explained.
Language models are designed to mimic patterns from their training data, meaning their outputs reflect probabilistic predictions rather than genuine experiences.
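To illustrate what “probabilistic predictions” means here, the toy sketch below stands in for a language model with a hand-written probability table. The vocabulary and the probability numbers are invented for illustration and are not any real model’s internals; the point is that an output like “pain” reflects which continuation is most likely, not a felt state.

```python
import random

# Toy stand-in for next-token prediction: a hand-written probability
# table, not a real LLM. The words and weights are assumed values.
next_token_probs = {
    "I feel": {"pain": 0.6, "fine": 0.3, "nothing": 0.1},
}

def sample_next_token(prefix: str) -> str:
    """Sample one continuation according to the toy distribution."""
    dist = next_token_probs[prefix]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The "model" says it feels pain most of the time, purely because
# that continuation carries the highest probability in the table.
print("I feel", sample_next_token("I feel"))
```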
Researchers also highlighted the human tendency to anthropomorphize AI, that is, to assign human-like qualities to machines, which complicates any evaluation of AI sentience.
Future Research
The study aims to establish a foundation for behavioral tests to assess AI sentience without relying on self-reports. As AI technology continues to advance, understanding its limitations and potential risks becomes increasingly critical.
The team hopes this experiment serves as a stepping stone toward developing tools that ensure AI progresses safely and ethically. “If the worst risks of AI remain unknown to society, we risk losing control over these systems,” the researchers wrote in their report.
(Raidi/Agung)