ChatGPT & Other AI Programs Not Conscious, Argue Experts Who Publish Checklist of Meaningful Indicators of Consciousness


Chart via Consciousness in Artificial Intelligence: Insights from the Science of Consciousness [pdf]

Here's a free gift link to a fascinating New York Times article about a group of philosophers, neuroscientists and computer scientists who published a comprehensive study on consciousness and AI:

The fuzziness of consciousness, its imprecision, has made its study anathema in the natural sciences. At least until recently, the project was largely left to philosophers, who often were only marginally better than others at clarifying their object of study. Hod Lipson, a roboticist at Columbia University, said that some people in his field referred to consciousness as “the C-word.” Grace Lindsay, a neuroscientist at New York University, said, “There was this idea that you can’t study consciousness until you have tenure.”

Nonetheless, a few weeks ago, a group of philosophers, neuroscientists and computer scientists, Dr. Lindsay among them, proposed a rubric with which to determine whether an A.I. system like ChatGPT could be considered conscious. The report, which surveys what Dr. Lindsay calls the “brand-new” science of consciousness, pulls together elements from a half-dozen nascent empirical theories and proposes a list of measurable qualities that might suggest the presence of some presence in a machine.

Here's a pdf link to the actual report, which requires some serious heavy lifting to read, but it includes this nice colored chart (above) with a checklist of consciousness indicators.

For instance: "Agency guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring." Basically, that means the AI monitors its own thinking and makes decisions with that knowledge in mind. Or as the Times puts it:

[Consciousness could] arise from the ability to be aware of your own awareness, to create virtual models of the world, to predict future experiences and to locate your body in space. The report argues that any one of these features could, potentially, be an essential part of what it means to be conscious. And, if we’re able to discern these traits in a machine, then we might be able to consider the machine conscious.
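If "metacognitive monitoring" is hard to picture, here's a toy Python sketch of the general idea. This is my own illustration, not anything from the report, and every name and number in it is made up: an agent whose first-order belief updates are weighted by a second, "metacognitive" process that tracks how reliable its own judgments have been.

```python
import random

# Toy illustration (not from the report): an agent whose belief updates
# are gated by a metacognitive monitor -- a second process that estimates
# how reliable the agent's own first-order judgments are.

class ToyAgent:
    def __init__(self):
        self.belief = 0.5       # first-order belief: P(coin is biased toward heads)
        self.self_trust = 0.5   # metacognitive estimate of own reliability

    def observe(self, flip_is_heads: bool):
        # First-order evidence nudges the belief toward the observation,
        # but only in proportion to how much the agent currently trusts
        # its own judgment (the "metacognitive monitoring" step).
        evidence = 1.0 if flip_is_heads else 0.0
        self.belief += self.self_trust * 0.1 * (evidence - self.belief)

    def act(self) -> str:
        # Action selection guided by the belief-formation system.
        return "bet heads" if self.belief > 0.5 else "bet tails"

    def monitor(self, prediction_was_correct: bool):
        # Second-order update: revise trust in own judgments from outcomes.
        target = 1.0 if prediction_was_correct else 0.0
        self.self_trust += 0.2 * (target - self.self_trust)

agent = ToyAgent()
for _ in range(20):
    flip_is_heads = random.random() < 0.7   # coin actually biased toward heads
    predicted_heads = agent.belief > 0.5
    agent.monitor(prediction_was_correct=(predicted_heads == flip_is_heads))
    agent.observe(flip_is_heads)

print(agent.act(), f"(belief={agent.belief:.2f}, self_trust={agent.self_trust:.2f})")
```

The point of the toy: the agent's beliefs aren't pushed around by raw evidence alone; a second-order monitor of its own reliability gates how strongly it updates. That gating-by-self-assessment is, very roughly, what the checklist's "metacognitive monitoring" indicator is getting at.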

And no, the experts say in their report, ChatGPT is probably not conscious:

In general, we are sceptical about whether behavioural approaches to consciousness in AI can avoid the problem that AI systems may be trained to mimic human behaviour while working in very different ways, thus “gaming” behavioural tests (Andrews & Birch 2023).

Large language model-based conversational agents, such as ChatGPT, produce outputs that are remarkably human-like in some ways but are arguably very unlike humans in the way they work. They exemplify both the possibility of cases of this kind and the fact that companies are incentivised to build systems that can mimic humans.

Schneider (2019) proposes to avoid gaming by restricting the access of systems to be tested to human literature on consciousness so that they cannot learn to mimic the way we talk about this subject. However, it is not clear either whether this measure would be sufficient, or whether it is possible to give the system enough access to data that it can engage with the test, without giving it so much as to enable gaming (Udell & Schwitzgebel 2021).

In other words, the very fact that AI programs like ChatGPT are trained on online debates about whether AI programs like ChatGPT are conscious enables them to "fake" consciousness better!
