“Existential risk” – Why scientists are racing to define consciousness

As artificial intelligence continues to advance and ethical concerns grow alongside it, scientists say the need to understand consciousness has reached a critical point.

In a new review published in Frontiers in Science, researchers warn that progress in AI and neurotechnology is moving faster than scientific understanding of consciousness. This gap, they argue, could lead to serious ethical problems if it is not addressed.

The authors say explaining how consciousness emerges is now an urgent scientific and moral priority. A clearer understanding could eventually make it possible to develop scientific methods for detecting consciousness. That breakthrough would have far-reaching consequences for AI development, prenatal policy, animal welfare, medicine, mental health care, law, and emerging technologies such as brain-computer interfaces.

“Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society — and for understanding what it means to be human,” said lead author Prof Axel Cleeremans from Université Libre de Bruxelles. “Understanding consciousness is one of the most substantial challenges of 21st-century science — and it’s now urgent due to advances in AI and other technologies.

“If we become able to create consciousness — even accidentally — it would raise immense ethical challenges and even existential risk,” added Cleeremans, a European Research Council (ERC) grantee.

The Challenge of Defining Sentience

Consciousness, commonly described as awareness of both the world around us and ourselves, remains one of science’s most difficult puzzles. Despite decades of research, scientists still lack agreement on how subjective experience emerges from biological processes.

Researchers have identified brain regions and neural activity linked to conscious experience, but major disagreements remain. Scientists continue to debate which brain systems are truly necessary for consciousness and how they interact to produce awareness. Some researchers even question whether this approach captures the problem correctly.

The new review examines the current state of consciousness science, future directions for the field, and the possible consequences if humans succeed in fully explaining or even creating consciousness. This includes the possibility of consciousness emerging in machines or in lab-grown brain-like systems known as “brain organoids.”

Testing for Consciousness

The authors argue that developing evidence-based tests for consciousness could transform how awareness is identified across many contexts. Such tools could help detect consciousness in patients with brain injuries or dementia and determine when awareness arises in fetuses, animals, brain organoids, or even AI systems.

While this would represent a major scientific advance, the researchers caution that it would also create difficult ethical and legal questions. Determining that a system is conscious would force society to reconsider how that system should be treated.

“Progress in consciousness science will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world,” said co-author Prof Anil Seth from the University of Sussex, an ERC grantee. “The question of consciousness is ancient — but it’s never been more urgent than now.”

Medical, Ethical, and Legal Implications

A deeper understanding of consciousness could have wide-ranging effects across society.

In medicine, it could improve care for patients who are unresponsive and assumed to be unconscious. Measurements inspired by integrated information theory and global workspace theory[1] have already detected signs of awareness in some individuals diagnosed with unresponsive wakefulness syndrome. Further progress could refine these tools to better assess consciousness in coma, advanced dementia, and anesthesia, and influence treatment decisions and end-of-life care.

Mental health treatment could also benefit. Understanding the biological basis of subjective experience may help researchers develop better therapies for conditions such as depression, anxiety, and schizophrenia by narrowing the gap between animal studies and human emotional experience.

Greater insight into consciousness could reshape how humans think about their moral responsibilities toward animals. Identifying which animals and systems are sentient could influence animal research practices, farming, food consumption, and conservation strategies. “Understanding the nature of consciousness in particular animals would transform how we treat them and emerging biological systems that are being synthetically generated by scientists,” said co-author Prof Liad Mudrik from Tel Aviv University, an ERC grantee.

Rethinking Responsibility and Technology

Consciousness research could also affect how the legal system understands responsibility. New findings may challenge traditional legal concepts such as mens rea, the “guilty mind” required to establish intent. As neuroscience reveals how much behavior arises from unconscious processes, courts may need to reconsider where responsibility begins and ends.

At the same time, advances in AI, brain organoids, and brain-computer interfaces raise the possibility of creating or altering awareness beyond natural biological limits. While some researchers believe consciousness might arise through computation alone, others argue that biological factors play an essential role. “Even if ‘conscious AI’ is impossible using standard digital computers, AI that gives the impression of being conscious raises many societal and ethical challenges,” said Seth.

A Call for Coordinated Research

The authors emphasize the need for a coordinated, evidence-based approach to studying consciousness. One proposed strategy involves adversarial collaborations, in which competing theories are tested against each other through experiments designed jointly by their supporters. “We need more team science to break theoretical silos and overcome existing biases and assumptions,” said Mudrik. “This step has the potential to move the field forward.”

The researchers also argue that scientific work should place greater emphasis on phenomenology (what consciousness feels like) alongside studies of function (what consciousness does).

“Cooperative efforts are essential to make progress — and to ensure society is prepared for the ethical, medical, and technological consequences of understanding, and perhaps creating, consciousness,” said Cleeremans.

Notes

  1. Global workspace theory suggests that consciousness arises when information is made available and shared across the brain via a specialized global workspace, for use by different functions — like action and memory.
    Higher-order theories suggest that a thought or feeling represented in some brain states only becomes conscious when there is another brain state that “points at it,” signaling that “this is what I am conscious of now.” They align with the intuition that being conscious of something means being aware of one’s own mental state.
    Integrated information theory argues that a system is conscious if its parts are highly connected and integrated in very specific ways defined by the theory, in line with the idea that every conscious experience is both unified and highly informative.
    Predictive processing theory suggests that what we experience is the brain’s best guess about the world, based on predictions of what something will look or feel like, checked against sensory signals.
