A few weeks ago in Texas, my daughter and the neighbor kid spent about 15 minutes asking Alexa to make farting noises. Unfortunately but predictably, she obliged.
“She.” Alexa is not a woman, or a human, but I typed it without a second thought.
I often find myself ascribing personalities to different AI systems that are already part of my life. Alexa (I don’t have one, but my sister does) strikes me as a kind and sometimes funny nanny, like Mary Poppins but less sassy. The AI that helps with my editing, I’ve decided, sometimes thinks to itself, “What is this word? You know what, I’m not even going to dignify that with an attempt,” and lets me do the guessing instead. (That one is sassy.)
But that personality, that voice, is just me. AI itself has no personality, or voice, or thoughts, or sense of right and wrong.
I’ve been thinking a lot about how we give meaning to things as I read with increasing feelings of both anticipation and horror about the advancement of AI and the inevitability of a major change in how we do, as those with their fingers on the pulse assure us, everything.
AI is still in toddler mode, and like toddlers, has no concept of the damage it can cause. Neither it nor toddlers can be blamed for not “acting right,” because they lack the developmental capacity to do so. But toddlers at least shed their psychopathy as they grow. There’s no reason to expect the same from AI. And unfortunately for all of us, AI is much stronger and more powerful than your average toddler.
I watched an interview with OpenAI’s CEO yesterday hoping to assuage my fears, but his attitude seemed to be, “Well, we’ve just got to give it a try, it’s got too much potential.” And here’s an actual quote from him: “It doesn’t work to do all this in a lab; we’ve got to get these products out into the world...and make our mistakes when the stakes are low.”
The stakes are low? Also, note the use of the word “products,” as opposed to tools. These are things for consumption, which means these are things to be bought. They certainly wouldn’t be around if there weren’t great economic potential in them for those who’ll be holding the reins.
The tech powers that be have made the choice for all of us: it’s coming, and we can’t stop it. One thing he’s right about: “we” have no choice; I certainly didn’t vote for it, and no one else I know has had any say, either.
His enthusiasm feels like that of a kind of kooky educator: “Let’s just let the kids run the preschool! It will be amazing! Their creative potential is limitless!” — ready to bask in the incredible things that could happen, assuming the best.
I’ve got a bad feeling about this. I’ve had a bad feeling about this. (Please read this next sentence in a delicate British female accent): Oh, what will become of us?
I hope I’m wrong, but I suspect the answer is “not good things.” In my own lifetime, the economy has not been something that tends toward improvement, at least (and especially) not for your average worker.
The main way I make money, for example (writing, editing, translating), and therefore feed, clothe, and shelter myself and my family, seems ripe for the immediate AI picking. Hopefully, I’m being overly catastrophic. Hopefully, AI will be like the invention of the engine: a thing that helps us get much further than we would have on our own. This is what my partner believes, and I so hope he’s right and I’m wrong.
But as Dr. Hinton, the “Godfather of AI” who just left Google in protest (or retirement — he’s 75) says, “It takes away the drudge work.”
And “drudge work” is what most workers get paid for.
Really, we just don’t know what will happen. What I do feel pretty certain of, however, is that no corporation or government is going to say, “Hmm, well, now that AI is making so many jobs unnecessary, I guess it’s time to just let humanity sit back and relax and make art or something! Naturally, we’ll be sending out checks to everyone now that your doing this work has been rendered unnecessary.”
Alright, fine. Maybe the Norwegians will say that. But I doubt very much it will be a worldwide phenomenon.
For now, I feel like we’re all in that scene from Deep Impact where the wave is hovering over an impossibly tiny shore, with impossibly tiny people on it. We’re watching it come; it’s inevitable. We don’t know what things will look like once the wave settles. But we do know that we probably won’t be around to see the result.
I totally agree about the sketchiness of AI, particularly because it absorbs the prevailing social, cultural, and political sensibilities of the data it is exposed to, which is a problematic proposition all on its own. Then to top it off, its primary goal is/will be to increase shareholder value, not make the world a better place.
"Then to top it off, its primary goal is/will be to increase shareholder value, not make the world a better place." This statement reminds me of the horseless carriage "Red Flag Laws" of yore. The fact that we are able to voice our opinions over the Internet with computers has increased shareholder value; which for me as a value/dividend investor has prevented me from living in poverty. A simplistic statement - of course. We have no idea where AI is going to take us in the future. Let us not speculate on the downside of future technical innovation.