Dell founder says calm down about SHODAN-style sentient AI because ‘you remember the ozone layer and all’ and we fixed that

We live in an era of AI hype and everyone has a take. But while most of us are a bit concerned about what the rise of ultra-predictive-text means for human creativity and criticism, a few Silicon Valley types are worrying themselves about Artificial General Intelligence, or AGI, which is basically a serious-sounding term for self-teaching AI with sentience and, potentially, an unslakeable lust for human blood. Or something of the sort.

But Dell founder and CEO Michael Dell says not to worry. In a recent virtual fireside chat with wealth management firm Bernstein (spotted by The Register), Dell said that he worried about the advent of AGI “a little bit, but not too much.” Why? Because “For as long as there’s been technology, humans have worried about bad things that could happen with it and we’ve told ourselves stories… about horrible things that could happen.”

That worrying, continues Dell, lets humanity “create counter actions” to head off those apocalyptic scenarios before they play out. “You remember the ozone layer and all,” said Dell to Bernstein’s Tony Sacconaghi, “there are all sorts of things that were going to happen. They didn’t happen because humans took countermeasures.”

Dell (the man) went on to say that business was booming for Dell (the company) on the AI front. “Customer demand nearly doubled quarter-on-quarter for us and the AI optimized backlog roughly doubled to about $1.6 billion at the end of our third quarter,” beamed Dell (the man again). I write this as someone for whom ‘literally GLaDOS’ ranks low on the list of fear priorities, but that does seem like the kind of thing a tech CEO would say in the prologue to a film about AI killing everyone.

Regardless, Dell reckons you shouldn’t be worried about the robot uprising any time soon, because humans are just that good at recognising and heading off problems before they occur. Except for that climate change thing and the nanoplastics in our blood, I guess. Oh, and the fact that we didn’t “fix” the ozone layer until there was already a gaping hole in it (a hole that won’t fully close until 2040, or 2066 if you happen to live in the Antarctic). If you’ll permit me a bit of editorialising, which I guess I’ve already been doing, that feels like reaching the right conclusion for the wrong reasons.

For my money, you shouldn’t worry about AGI because it’s a spooky story well-off tech types dreamt up to hype the capabilities of their actual AI tech, and because it’s a much neater, easier tale to cope with than the things that are genuinely scary about AI: the potential decimation of entire creative industries and their replacement with homogeneous robotic sludge. Plus, the possibility that the internet—for all its problems, a genuinely useful repository of human knowledge—becomes a great library of auto-completed and utterly incorrect nonsense of no use to anyone.

After all, I’ve already reached the point where I append “Reddit” to most of my Google searches to make sure I’m actually getting human input on whatever problem I’m facing. And that’s a much trickier problem, with far more profit-threatening solutions, than the bogeyman of HAL 9000.
