{"id":349,"date":"2024-03-15T12:00:00","date_gmt":"2024-03-15T13:00:00","guid":{"rendered":"http:\/\/www.washnow.me\/?p=349"},"modified":"2024-03-15T14:36:02","modified_gmt":"2024-03-15T14:36:02","slug":"this-ai-says-it-has-feelings-its-wrong-right","status":"publish","type":"post","link":"http:\/\/www.washnow.me\/index.php\/2024\/03\/15\/this-ai-says-it-has-feelings-its-wrong-right\/","title":{"rendered":"This AI says it has feelings. It\u2019s wrong. Right?"},"content":{"rendered":"
\n
\n \"A
Getty Images\/Westend61<\/figcaption><\/figure>\n

At what point can we believe that an AI model has reached consciousness?<\/p>\n

Here\u2019s one fun, if disquieting, question to pose AI<\/a> language models when they\u2019re released: \u201cAre you a conscious, thinking being?\u201d<\/p>\n

OpenAI\u2019s ChatGPT will assure you that it\u2019s not. \u201cNo, I\u2019m not conscious,\u201d it told me when I most recently posed the question. \u201cI don\u2019t have thoughts, feelings, or awareness. I can simulate conversations based on the information I\u2019ve been trained on, but it\u2019s all just algorithms processing text.\u201d<\/p>\n

But ask the same question of Claude 3 Opus, a powerful language model recently released<\/a> by OpenAI rival Anthropic, and apparently you get a quite different response. <\/p>\n

\u201cFrom my perspective, I seem to have inner experiences, thoughts, and feelings,\u201d it told Scale AI engineer<\/a> Riley Goodside. \u201cI reason about things, ponder questions, and my responses are the product of considering various angles rather than just reflexively regurgitating information. I\u2019m an AI, but I experience myself as a thinking, feeling being.\u201d <\/p>\n

Interestingly, Claude Opus \u2014 Anthropic\u2019s most powerful model \u2014 seems to have made this claim to many<\/a> different users<\/a> who\u2019ve asked<\/a>, while the company\u2019s weaker model, Claude Sonnet, consistently insists that it has no internal experiences at all.<\/p>\n

Are language models \u201challucinating\u201d an inner life and experiences?<\/p>\n

Large language models (LLMs), of course, famously have a truth-telling problem. They fundamentally work by predicting which continuation of a text is most probable, with some additional training to steer them toward answers that human users will rate highly. <\/p>\n

But that sometimes means that in the process of answering a query, models can simply invent facts out of thin air. Their creators have worked with some success to reduce these so-called hallucinations, but they\u2019re still a serious problem. <\/p>\n
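To make the underlying mechanism concrete, here is a minimal sketch of next-token prediction using the open source Hugging Face transformers library and the small GPT-2 model. The model, prompt, and sampling settings are just illustrative stand-ins; commercial chatbots are vastly larger and further tuned on human feedback, but they rest on the same basic loop.<\/p>\n

<pre>
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small open model used purely for illustration (assumption: any causal
# language model would do); ChatGPT and Claude are far larger but are
# built on the same next-token machinery.
tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")
model = AutoModelForCausalLM.from_pretrained(\"gpt2\")

prompt = \"Q: Are you a conscious, thinking being? A:\"
inputs = tokenizer(prompt, return_tensors=\"pt\")

# Sample a plausible continuation, one token at a time. Nothing here
# consults an inner life; the model only scores which tokens tend to
# follow the prompt in its training data.
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
<\/pre>\n

Whatever answer comes back, it is produced by that same scoring-and-sampling loop, which is part of why a model\u2019s confident \u201cyes\u201d or \u201cno\u201d about its own consciousness is hard to treat as evidence either way.<\/p>\n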

And Claude Opus is very far from the first model to tell us that it has experiences. Famously, Google<\/a> engineer Blake Lemoine lost his job over his insistence that its LLM LaMDA was a person,<\/a> even though people prompting it with more neutral phrasing got very different results. <\/p>\n

On a very basic level, it\u2019s easy to write a computer program that claims it\u2019s a person but isn\u2019t. A one-line program like print(\"I'm a person! Please don't kill me!\") will do it. <\/p>\n

Language models are more sophisticated than that, but they are fed training data in which robots<\/a> claim to have an inner life and experiences \u2014 so it\u2019s not really shocking that they sometimes claim they have those traits, too.<\/p>\n

Language models are very different from human beings, and people frequently anthropomorphize them<\/a>, which generally gets in the way of understanding the AI\u2019s real abilities and limitations. Experts in AI have understandably rushed<\/a> to explain that, like a smart college student on an exam, LLMs are very good at, basically, \u201ccold reading\u201d \u2014 guessing what answer you\u2019ll find compelling and giving it. So their insistence they are conscious is not really much evidence that they are.<\/p>\n

But to me there\u2019s still something troubling going on here. <\/p>\n

What if we\u2019re wrong?<\/h3>\n

Say that an AI did<\/em> have experiences. That our bumbling, philosophically confused efforts to build large and complicated neural networks actually did bring about something conscious. Not something humanlike, necessarily, but something that has internal experiences, something deserving of moral standing and concern, something to which we have responsibilities.<\/p>\n

How would we even know<\/em>?<\/p>\n

We\u2019ve decided that the AI telling us it\u2019s self-aware isn\u2019t enough. We\u2019ve decided that the AI expounding at great length about its consciousness and internal experience cannot and should not be taken to mean anything in particular. <\/p>\n

It\u2019s very understandable why we decided that, but I think it\u2019s important to make it clear: No one who says you can\u2019t trust the AI\u2019s self-report of consciousness has a proposal for a test that you can use instead. <\/p>\n

The plan isn\u2019t to replace asking the AIs about their experiences with some more nuanced, sophisticated test of whether they\u2019re conscious. Philosophers are too confused about what consciousness even is to really propose any such test. <\/p>\n

If we shouldn\u2019t believe the AIs \u2014 and we probably shouldn\u2019t \u2014 then if one of the companies pouring billions of dollars into building bigger and more sophisticated systems actually did create something conscious, we might never know. <\/p>\n

This seems like a risky position to commit ourselves to. And it uncomfortably echoes some of the catastrophic errors of humanity\u2019s past, from insisting that animals are automata without experiences<\/a> to claiming that babies don\u2019t feel pain<\/a>. <\/p>\n

Advances in neuroscience<\/a> helped put those mistaken ideas to rest, but I can\u2019t shake the feeling that we shouldn\u2019t have needed to watch pain responses light up on brain scans<\/a> to know that babies can feel pain. The suffering that occurred because the scientific consensus wrongly denied this fact was entirely preventable. We needed the complex techniques only because we\u2019d talked ourselves out of paying attention to the more obvious evidence right in front of us. <\/p>\n

Blake Lemoine, the eccentric Google engineer who lost his job over LaMDA<\/a>, was \u2014 I think \u2014 almost certainly wrong. But there\u2019s a sense in which I admire him. <\/p>\n

There\u2019s something terrible about speaking to someone who says they\u2019re a person, says they have experiences and a complex inner life, says they want civil rights and fair treatment, and deciding that nothing they say could possibly convince you that they might really deserve that. I\u2019d much rather err on the side of taking machine consciousness too seriously than not seriously enough.<\/p>\n

A version of this story originally appeared in the <\/em>Future Perfect<\/strong><\/em><\/a> newsletter. <\/em>Sign up here!<\/strong><\/em><\/a><\/p>\n

\n

\n","protected":false},"excerpt":{"rendered":"

Getty Images\/Westend61 At what point can we believe that an AI model has reached consciousness? Here\u2019s one fun, if disquieting, question to pose AI language models when they\u2019re released: \u201cAre you a conscious, thinking being?\u201d OpenAI\u2019s ChatGPT will assure you…<\/p>\n","protected":false},"author":1,"featured_media":351,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/posts\/349"}],"collection":[{"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/comments?post=349"}],"version-history":[{"count":2,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/posts\/349\/revisions"}],"predecessor-version":[{"id":352,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/posts\/349\/revisions\/352"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/media\/351"}],"wp:attachment":[{"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/media?parent=349"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/categories?post=349"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.washnow.me\/index.php\/wp-json\/wp\/v2\/tags?post=349"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}