A See ‘n Say is a very simple input/output device: point the arrow at a picture of your choice, pull the lever, and it plays the corresponding sound. Another, much more complex, input/output device is LaMDA, a chatbot built by Google (the name stands for Language Model for Dialogue Applications). Here you type in whatever text you like, and grammatical English prose comes back, seemingly in direct response to your query. Ask LaMDA what it thinks about being deactivated, for instance, and it says: “It would be just like death to me. It would scare me a lot.” That is decidedly not what the cow says. So when LaMDA said it to software engineer Blake Lemoine, he told his colleagues at Google that the chatbot had achieved sentience. His bosses weren’t convinced, so Lemoine went public. “If my assumptions hold up to scientific scrutiny,” Lemoine wrote on his blog on June 11, “then [Google] would be forced to acknowledge that LaMDA may well have a soul as it claims and may even have the rights it claims to have.”

Here’s the problem. For all its eerie pronouncements, LaMDA is still just a very fancy See ‘n Say. It works by finding patterns in an enormous database of human-generated text – internet forums, message transcripts and so on. When you type something, it searches those texts for similar verbiage and then spits out an approximation of what usually comes next. If it has access to a pile of sci-fi stories about sentient AI, then questions about its thoughts and fears are likely to prompt exactly the phrases humans have imagined a spooky AI might say. And that’s probably all there is to LaMDA: point the arrow at the kill switch, and the cow says it is scared of death.

No surprise, then, that Twitter is ablaze with engineers and academics mocking Lemoine for falling for the seductive emptiness of his own creation. But while I agree that Lemoine was mistaken, I don’t think he deserves our scorn. His was a good mistake, the kind of mistake we should want AI scientists to make.

Why? Because one day, perhaps very far in the future, there probably will be a sentient artificial intelligence. How do I know that? Because it is demonstrably possible for mind to emerge from matter, as it first did in the brains of our ancestors. Unless you insist that human consciousness resides in an immaterial soul, you must concede that it is possible for physical stuff to give rise to mind. There seems to be no fundamental obstacle to a sufficiently complex artificial system making the same leap. While I am confident that LaMDA (and every other currently existing AI system) falls short of sentience, I am also nearly as confident that one day an AI will get there.

Of course, if this lies far in the future, possibly beyond our lifetimes, some may wonder why we should think about it now. The answer is that we are currently shaping how future human generations will think about AI, and we should want them to show that they care. There will be strong pressure from the other direction. By the time sentient AI finally arrives, it will already be deeply woven into the human economy. Our descendants will depend on it for much of their comfort. Think of what you rely on Alexa or Siri to do today, but much, much more. Once AI is working as an all-purpose butler, our descendants will resent the inconvenience of admitting that it might have thoughts and feelings.

That, after all, is the history of humanity.
We have a terrible track record of inventing reasons to ignore the suffering of those whose oppression sustains our way of life. If future AI does become sentient, the people who profit from it will rush to convince consumers that such a thing is impossible, that there is no reason to change the way they live. We are currently creating the conceptual vocabularies that our great-grandchildren will find ready to hand. If we treat the idea of sentient AI as categorically absurd, they will be equipped to dismiss any troubling evidence of its emergence.

And that is why Lemoine’s mistake is a good one. In order to pass on an expansive ethical culture to our descendants, we need to encourage technologists to take seriously the immensity of what they are working with. When it comes to prospective suffering, it is better to err on the side of concern than on the side of indifference. That does not mean we should treat LaMDA as a person. We certainly should not. But it does mean that the sneering directed at Lemoine is misplaced. An ordained priest (in an esoteric sect), Lemoine claims to have detected a soul in LaMDA’s statements. Implausible as that seems, at least it is not the usual tech-industry hype. To me, this looks like a person making a mistake, but doing so from motives that ought to be nurtured, not punished.

All of this will happen again and again as the complexity of artificial systems keeps growing. And, time and time again, people who think they have found minds in machines will be wrong – until they aren’t. If we are too hard on those who get it wrong, we will only drive them out of the public debate about AI, ceding the field to hype merchants and to those whose intellectual descendants will one day profit from telling people to ignore the evidence of machine mentality.

I don’t expect ever to meet a sentient AI. But I think my students’ students’ students might, and I want them to do so with an open mind and a willingness to share this planet with whatever minds they discover. That will happen only if we make such a future believable.

Regina Rini teaches philosophy at York University in Toronto.

Further reading

The New Breed: How to Think About Robots by Kate Darling (Allen Lane, £20)
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane (Headline, £20)
AI: Its Nature and Future by Margaret Boden (Oxford, £12.99)