Why I’m far more scared of other humans than I am of Skynet

[Image: AI robot playing piano]

This week, the House of Lords AI Select Committee brought out their report on artificial intelligence. I’m still wading through it (in my defence, it’s been hot and sunny here in the UK, which is a pretty unusual state of affairs), but the tl;dr includes a code of ethics that the Committee proposes the UK should adopt for AI.

One item struck me particularly – “The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.” It’s that word, autonomous, that makes it clear what the Committee was thinking about here: Skynet. They’re envisaging a world in which AI enslaves us all, Terminator style, and in that respect this part of the code feels like the beginnings of the Three Laws of Robotics that Asimov came up with all those years ago in I, Robot.

But what the word autonomous implies is that we can use AI to hurt, destroy or deceive human beings, as long as there’s another human at the controls, as it were. This feels worse, somehow, and reminds me of that tired old NRA trope – ‘guns don’t kill people, people do’. Well, yes, but as we see in the USA, having guns around makes it far easier to kill people. Beating someone to death takes really quite a lot of effort, compared to pulling a trigger.

I’m not for a minute advocating that we should treat AI like guns – guns have very few inherently positive uses while AI has huge numbers of potential benefits – and most current uses of AI are completely innocuous. But wouldn’t it be great if our code of ethics could stipulate that we won’t use such potentially powerful technology to hurt people? This feels slightly more urgent: AI isn’t currently at a stage where it could take over the world and enslave us – but human beings could definitely do some damage with it at some point not too far in the future.

To be fair, the code also includes items around data privacy and human rights, and starts with a nice catch-all that AI should be used for ‘the common good and benefit of humanity’. But it doesn’t stipulate who should decide what that common good might be, and how, so it all feels a bit like Google’s ‘Don’t be evil’ mantra. And what the current Facebook furore underlines emphatically is that you can’t always trust people to make the right decisions, particularly where technology and profit are involved. Just because you can do something doesn’t mean you should.

In our society, people with power have always bent the world to their own ends – not all of them, not all of the time, of course, but an enormous amount of the world’s ills can be viewed as a function of unequal power: sexual harassment, modern slavery, data theft, war, poverty – I could go on. Human beings are flawed, fallible, and vulnerable to manipulation – as Margaret Heffernan pointed out in one of my recent interviews, “you can take a whole bunch of perfectly sane, normal people and make them do something absolutely vile, repeatedly and reliably. We don’t know yet how to build systems where people will come to work every day and be creative, original and honest.”

As our technology increases in power, it has the potential to magnify these power inequalities exponentially – something that Richard K. Morgan foresaw in his dystopian novel Altered Carbon. There’s a great post on this very topic, published today on the Singularity Hub.

Stifling an emerging industry with regulation is never a good idea – so I see this code of ethics as laying the foundations for future regulation without preventing or slowing down the wild experimentation that has to happen in order for the technology to develop. But technology moves very fast, so while I welcome the Select Committee’s suggested code of ethics as a great starting point, we should be ready to come up with something more robust sooner rather than later – and I’d like us to start by stopping human beings from using AI nefariously, and worry about Skynet later.

(Photo by Franck Veschi on Unsplash)