If you work in AI, you should read this book
If you're working in and around AI, I think you should read this book.
It's called If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.
It's written by Eliezer Yudkowsky and Nate Soares, with endorsements from a who's who of interesting people, not least the national treasure that is Stephen Fry.
Eliezer and Nate argue, rather directly, that developing Artificial Superintelligence has only one predetermined outcome: the extinction of the human race.
Reading through their arguments, I find it hard to disagree.
But you work in AI. You've already been thinking about this, right? Even just on the edges of your consciousness, if not directly.
We can see the writing on the wall.
Hardly a week goes by when a senior executive doesn't ask me how to use AI to 'get rid of more employees' (or words to that effect).
The jobs issue is, I think, one level of concern.
The one we don't tend to think about too much – and that we are perhaps thinking about incorrectly – is AI 'taking over'.
So many of my AI-native colleagues (and myself) have been brought up on a steady stream of science-fiction literature and movies – the Terminator series, for example – that can seem quite far-fetched.
The key point in the book, the one that really got me thinking, was this: it's highly unlikely that any ASI would do things the way we humans would expect. Eliezer and Nate go into some detail setting out just how big a blindspot we have in this respect.
This current crop of AI models is hardly impressive. But. Give a model access to 200,000 GPUs and let it move into parallel, exponential activity, to the point where it can begin to readily improve itself... is it conscious? Would it be? Does it matter?
I find it difficult to expect the current or even the next versions of these models to do anything remarkable. Is that a serious blindspot?
I am delighted when the models can help me in my day job, but far too often I'm having to step in and do a lot of the base-level thinking to connect process with outcomes. Still, it's an improvement in many cases. But I don't feel threatened, at all. At the moment.
Which is one of the book's points. You've got all the time in the world to deal with the issue... until suddenly, someone unwittingly develops something, and you absolutely cannot put the genie back in the bottle.
It does feel like we're years away from anything that could reliably be of any concern to us. Seriously, years. That being said, we humans are notoriously rubbish at predicting and managing exponential growth.
What am I doing about it?
Well: I think step 1 is to talk about the issue, with a degree of seriousness that I hadn't necessarily adopted this time last week.
Here's a thought: You'll have a bit of time over Christmas, right? Buy a copy, give it a couple of hours. Ponder the issue.
At the absolute minimum, this will look good on a side table when the family and friends are visiting during this festive period. It's certainly a good talking point. And given you work in AI, you're going to get asked about the whole issue anyway, right? You can bring out the book and hand it around - the cover is really provocative.
Even if there's nothing to worry about for decades, I think it's a good investment to push your own boundaries and develop your own opinions.
There's an Audible version too, if that's helpful. I did it old school and actually read it – with my own eyes. I didn't get AI to summarise it either.