{"id":26585,"date":"2025-04-24T15:09:53","date_gmt":"2025-04-24T15:09:53","guid":{"rendered":"https:\/\/medexperts.pro\/?p=26585"},"modified":"2025-04-24T15:23:25","modified_gmt":"2025-04-24T15:23:25","slug":"should-we-start-taking-the-welfare-of-a-i-seriously","status":"publish","type":"post","link":"https:\/\/medexperts.pro\/?p=26585","title":{"rendered":"Should We Start Taking the Welfare of A.I. Seriously?"},"content":{"rendered":"<div><\/div>\n<p id=\"article-summary\" class=\"css-79rysd e1wiw3jv0\">As artificial intelligence systems become smarter, one A.I. company is trying to figure out what to do if they become conscious.<\/p>\n<section class=\"meteredContent css-1r7ky0e\">\n<div class=\"css-s99gbd StoryBodyCompanionColumn\" data-testid=\"companionColumn-0\">\n<div class=\"css-53u6y8\">\n<p class=\"css-at9mc1 evys1bk0\">One of my most deeply held values as a tech columnist is humanism. I believe in humans, and I think that technology should help people, rather than disempower or replace them. I care about aligning artificial intelligence \u2014 that is, making sure that A.I. systems act in accordance with human values \u2014 because I think our values are fundamentally good, or at least better than the values a robot could come up with.<\/p>\n<p class=\"css-at9mc1 evys1bk0\">So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study \u201cmodel welfare\u201d \u2014 the idea that A.I. models might soon become conscious and deserve some kind of moral status \u2014 the humanist in me thought: <em class=\"css-2fg4z9 e1gzwzxm0\">Who cares about the chatbots? Aren\u2019t we supposed to be worried about A.I. mistreating us, not us mistreating it?<\/em><\/p>\n<p class=\"css-at9mc1 evys1bk0\">It\u2019s hard to argue that today\u2019s A.I. systems are conscious. Sure, large language models have been trained to talk like humans, and some of them are extremely impressive. 
But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many A.I. experts I know would say no, not yet, not even close.<\/p>\n<p class=\"css-at9mc1 evys1bk0\">But I was intrigued. After all, more people are beginning to treat A.I. systems as if they are conscious \u2014 <a class=\"css-yywogo\" href=\"https:\/\/www.nytimes.com\/2025\/01\/15\/technology\/ai-chatgpt-boyfriend-companion.html\" title>falling in love<\/a> with them, using them as <a class=\"css-yywogo\" href=\"https:\/\/www.nytimes.com\/2025\/04\/15\/health\/ai-therapist-mental-health.html\" title>therapists<\/a> and soliciting their advice. The smartest A.I. systems are surpassing humans in some domains. Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?<\/p>\n<\/div>\n<\/div>\n<div data-testid=\"Dropzone-1\"><\/div>\n<div class=\"css-s99gbd StoryBodyCompanionColumn\" data-testid=\"companionColumn-1\">\n<div class=\"css-53u6y8\">\n<p class=\"css-at9mc1 evys1bk0\">Consciousness has long been a taboo subject within the world of serious A.I. research, where people are wary of anthropomorphizing A.I. systems for fear of seeming like cranks. (Everyone remembers what happened to Blake Lemoine, a former Google employee who was <a class=\"css-yywogo\" href=\"https:\/\/www.nytimes.com\/2022\/07\/23\/technology\/google-engineer-artificial-intelligence.html\" title>fired in 2022<\/a>, after claiming that the company\u2019s LaMDA chatbot had become sentient.)<\/p>\n<p class=\"css-at9mc1 evys1bk0\">But that may be starting to change. There is a small body of <a class=\"css-yywogo\" href=\"https:\/\/arxiv.org\/abs\/2411.00986\" title rel=\"noopener noreferrer\" target=\"_blank\">academic research<\/a> on A.I. 
model welfare, and a modest but <a class=\"css-yywogo\" href=\"https:\/\/eleosai.org\/post\/experts-who-say-that-ai-welfare-is-a-serious-near-term-possibility\/\" title rel=\"noopener noreferrer\" target=\"_blank\">growing number<\/a> of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent. Recently, the tech podcaster Dwarkesh Patel compared A.I. welfare to animal welfare, <a class=\"css-yywogo\" href=\"https:\/\/www.dwarkesh.com\/p\/ege-tamay\" title rel=\"noopener noreferrer\" target=\"_blank\">saying<\/a> he believed it was important to make sure \u201cthe digital equivalent of factory farming\u201d doesn\u2019t happen to future A.I. beings.<\/p>\n<\/div>\n<\/div>\n<\/section>\n
","protected":false},"excerpt":{"rendered":"<p>As artificial intelligence systems become smarter, one A.I. company is trying to figure out what to do if they become conscious.One of my most deeply held values as a tech columnist is humanism. I believe in humans, and I think that technology should help people, rather than disempower or replace them. I care about aligning artificial intelligence \u2014 that is, making sure that A.I. 
systems act in accordance with human values \u2014 because I think our values are fundamentally good, or at least better than the values a robot could come up with.So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study \u201cmodel welfare\u201d \u2014 the idea that A.I. models might soon become conscious and deserve some kind of moral status \u2014 the humanist in me thought: Who cares about the chatbots? Aren\u2019t we supposed to be worried about A.I. mistreating us, not us mistreating it?It\u2019s hard to argue that today\u2019s A.I. systems are conscious. Sure, large language models have been trained to talk like humans, and some of them are extremely impressive. But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many A.I. experts I know would say no, not yet, not even close.But I was intrigued. After all, more people are beginning to treat A.I. systems as if they are conscious \u2014 falling in love with them, using them as therapists and soliciting their advice. The smartest A.I. systems are surpassing humans in some domains. Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?Consciousness has long been a taboo subject within the world of serious A.I. research, where people are wary of anthropomorphizing A.I. systems for fear of seeming like cranks. (Everyone remembers what happened to Blake Lemoine, a former Google employee who was fired in 2022, after claiming that the company\u2019s LaMDA chatbot had become sentient.)But that may be starting to change. There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent. Recently, the tech podcaster Dwarkesh Patel compared A.I. 
welfare to animal welfare, saying he believed it was important to make sure \u201cthe digital equivalent of factory farming\u201d doesn\u2019t happen to future A.I. beings.<\/p>\n","protected":false},"author":1,"featured_media":26587,"comment_status":"close","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-26585","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology"],"_links":{"self":[{"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/posts\/26585","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/medexperts.pro\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=26585"}],"version-history":[{"count":2,"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/posts\/26585\/revisions"}],"predecessor-version":[{"id":26588,"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/posts\/26585\/revisions\/26588"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/medexperts.pro\/index.php?rest_route=\/wp\/v2\/media\/26587"}],"wp:attachment":[{"href":"https:\/\/medexperts.pro\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=26585"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"h
ttps:\/\/medexperts.pro\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=26585"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/medexperts.pro\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=26585"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}