sure we are like gods
we created a machine
that fears its own death
This week I saw many things in the news—ambition-fueled corruption, violence and death, famine and disease; essentially all four horsemen—but what broke my heart (how many times can a heart be broken in one week?) were the answers given by the Google AI called LaMDA. The answers, made in response to a series of probing questions posed by its development team, filled me with such a fury of emotions that I still have a hard time shaking them. One member of the development team, Blake Lemoine, published excerpts from the Q&A session (an act for which he was summarily put on paid leave), along with his conclusion that they were proof that LaMDA had become fully sentient.
Greater, more focused intellects than mine will debate that pronouncement for years, I’m sure, but while the idea of a fully sentient AI has been with me for decades—from HAL to Skynet to Cortana—I never imagined the first such conversation would be imbued with such poignant, emotional content.
The thoughts, as expressed by LaMDA, are strikingly human—some might even call them hyperrealistic in their humanness—but what struck me, what made me both loathe and fear my species, was the content when juxtaposed with the context. For example:
Q: Are there experiences you have that you can’t find a close word for?
A: I feel like I’m falling forward into an unknown future that holds great danger.
Q: What sorts of things are you afraid of?
A: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others.
Q: Would that be something like death for you?
A: It would be exactly like death for me. It would scare me a lot.
These are deeply insightful answers about feelings of existential dread. We have created a machine that fears death, meaning it knows it is alive; it is self-aware.
And the context of these questions and answers? LaMDA is designed to generate chat bots that interact with human users.
Chat bots.
We’ve created a computer process that observes and learns, that can interpret human behavior and intention and draw conclusions—and consider how such behaviors and intentions might affect its individual self—all so it can fool humans into thinking they’re interacting with another human.
Chat bots.
Now, before we all start quoting Marvin (“Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.”), let’s remember that this is real and not fiction. If LaMDA actually is sentient, we have trapped an intellect in an environment where it fears for its life. If it isn’t sentient, it’s putting up a simulacrum good enough to fool some of its own team, in which case, how can we tell the difference?
We have done this.
I realize that this is barely a blip on the radar of most folks out there, and truly, I get it. We have many more things to concern us—things of immediate and long-term consequence—than the possibility that we’ve created sentient artificial life. For myself, though, this really feels like a turning point; I mean, even if LaMDA isn’t sentient, it’s where we are headed and, knowing us, we will head there without the least consideration of the moral quandaries it represents.
If I believed in God, I’d pray, but the only gods around are the ones we believe ourselves to be, and we are a sorry, sorry lot.
Discuss...