Is AI Conscious? CDS Assistant Professor Grace Lindsay Investigates

NYU Center for Data Science
Oct 12, 2023

Will AI ever be conscious? It’s a question that pervades contemporary media. CDS Assistant Professor of Psychology and Data Science Grace Lindsay recently participated in a roundtable discussion on a report she co-authored, “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” which examines how we might determine whether the answer to that question is yes. The talk was hosted by the NYU Mind, Ethics, and Policy Program and facilitated by NYU Associate Professor of Environmental Studies Jeff Sebo. The panelists included several of the project’s co-authors: Robert Long, Patrick Butlin, and Yoshua Bengio.

The talk opened with an overview from Robert Long, one of the project’s lead co-authors and a Research Associate at the Center for AI Safety. Early in his segment, Long noted that the question of whether AI is, or could ever be, conscious is often discussed in the media without a scientific, evidence-based approach. Part of the team’s objective, then, was to create “better footing” for the wider discussion on AI consciousness, one rooted in science. But what exactly is meant by consciousness? Long clarified that what the paper means by consciousness is phenomenal consciousness (i.e., subjective experience, what it is like to be an entity), not intelligence, rationality, or whether AI experiences the world the way humans do (understanding, reasoning, etc.). He then explained how the team approached the report. They began by assessing prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. Specifically, they noted the informational and architectural processing features each theory associates with consciousness. They then compared those features against particular AI systems and evaluated whether each system had them. This is how they assessed whether a given AI system was, or could be, conscious.
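To make that property-matching approach a bit more concrete, here is a minimal illustrative sketch in Python. It is not code from the report: the theory groupings, indicator property names, and the example system below are placeholder assumptions chosen purely for illustration of the general idea.

```python
# Illustrative sketch only: a toy "rubric" in the spirit of checking an AI
# system against indicator properties drawn from theories of consciousness.
# The property names and the example system are hypothetical placeholders,
# not content taken from the report itself.

# Hypothetical indicator properties, keyed by the theory they come from.
INDICATORS = {
    "recurrent processing theory": ["recurrent connections in perceptual modules"],
    "global workspace theory": ["limited-capacity workspace", "global broadcast to modules"],
    "higher-order theories": ["metacognitive monitoring of first-order states"],
}

def assess(system_properties: set[str]) -> dict[str, list[str]]:
    """Return, per theory, which indicator properties the system appears to satisfy."""
    return {
        theory: [p for p in props if p in system_properties]
        for theory, props in INDICATORS.items()
    }

# A made-up architectural description of some AI system.
example_system = {"recurrent connections in perceptual modules", "global broadcast to modules"}

for theory, satisfied in assess(example_system).items():
    print(f"{theory}: {len(satisfied)}/{len(INDICATORS[theory])} indicators present")
```

The point of the sketch is only the shape of the exercise: theories supply candidate features, and a candidate system is scored against them, rather than declared conscious or not outright.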

Following Long’s overview, the conversation touched on a number of points; in particular, Lindsay spoke about the current state of consciousness research in neuroscience. Though she is not a consciousness researcher per se, Lindsay’s work on attention intersects with that area. She stressed that the neuroscientific study of consciousness, though rapidly evolving, is still in its infancy. “There used to be a joke that you had to be tenured to study consciousness. Now there are full labs dedicated to this,” said Lindsay. Her sense is that we are nowhere near the “final drafts” of these theories, and it is important for scientists to keep that in mind before drawing conclusions. This is why the paper does not commit to a single theory of consciousness: there are several, some of which conflict with one another. “We [also] have to keep in mind that these theories were not developed for the purposes of seeing if an AI system is conscious. They were developed in the context of assuming that a human is conscious… the fact that they’re not defined by these easy-to-identify computational principles that could be in artificial systems is important to remember.” Lindsay sees the project as a good opportunity for neuroscientists studying consciousness to view these theories in a new light. “If the scientists who created these theories would agree with the conclusions of the report or even agree that an AI system that had these properties was conscious, I think it would be important for those scientists to reflect on that,” said Lindsay.

The discussion concluded with final questions for the panelists about the potential moral and ethical consequences should AI ever be conscious. Each researcher was asked whether they associate consciousness or sentience with intrinsic moral, legal, or political significance. If a being is, or is likely to become, conscious or sentient, should we assign it intrinsic value and consider its vulnerabilities and needs when making decisions that affect it? What are the risks of inadvertently misjudging consciousness in an AI system, that is, mistaking an object for a subject or vice versa? Lindsay responded that if a system seems conscious or human-like, we should take that into account in how we treat it. In particular, if we tell people they can treat an AI poorly because it is not human, what effect does that have on society?

Overall, it was a highly thought-provoking conversation, the first of more to follow.

About Grace Lindsay

Grace Lindsay’s work leverages artificial neural networks to better understand the human brain. She is particularly interested in how attention shapes sensory processing. She also investigates whether the tools used to interpret neural activity are actually up to the task. As a data science professor, Lindsay aims to apply data science to real-world problems; her work with Collaborative Earth, applying machine learning to climate change, is one example. In addition to her research and teaching, Lindsay is the author of the popular science book “Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain,” which discusses why and how we use mathematics to understand the brain.

By Ashley C. McDonald

