Earlier this year the European Parliament’s Legal Affairs Committee called for a new regulatory framework to ensure the safe and ethical application of robots and artificial intelligence (AI).
It cited the three laws devised by science-fiction writer Isaac Asimov: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given by human beings (except where such orders would conflict with the first law); and a robot must protect its own existence as long as such protection does not conflict with the first or second laws.
But robots have broken all of these rules. There are already robots that harm people: police in Dallas used a robot and a pound of C-4 explosive to kill a sniper last year, and the military is already using drones.
In my opinion, people make far too much of these rules. They come from science fiction, and the idea that they could be universally programmed into robots and adhered to is unrealistic. It shows just how confused, and how slow to catch up, much current thinking about robots and automation is.
Also telling was the report’s exploration of whether robots should be given legal status as ‘electronic persons’. It could be argued that this point – concerning allocating liability for damage and harm rather than protecting ‘robot rights’ – refers specifically to industrial robotics.
But it still misses the bigger question: what exactly is a robot? Many of the advances concern software, not robots. It’s not like the movies, where humanoid robots run around everywhere. So what exactly would you single out for legal status?
It’s true that there are surprisingly few regulations around the use of robots and AI, particularly in the workplace. But part of the problem is that technology moves so fast that any regulations will probably be outdated by the time they are written. So in general I’m not a proponent of trying to regulate workplace robots. (There’s of course a new administration in the US that’s very anti-regulation, so I doubt this would be realistic there anyway.)
More encouraging was the MEP report’s recognition of the profound implications of AI for job eradication, and the potential need for a general basic income. To those who would argue that humans have always worried needlessly about being rendered obsolete, I’d urge them to consider that this time really could be different.
I read an article recently on how the world’s richest people are building bunkers and buying estates in New Zealand because they’re worried we’re reaching the end game: the collapse of civilisation. That’s the extreme. But I wouldn’t underestimate the challenge.
Potentially this is of a different order from the other things organisations worry about. It could be a huge challenge to society if people don’t have jobs or access to income. For the most part it’s an issue for governments, but in the meantime organisations must become much more aware, not of existing and incoming regulations around robots, but of this much larger issue.
I find very often it’s the elites in business who are in a bubble and not really dealing with this. The main thing is to provide opportunities for retraining and make sure people are aware of the fact that more routine jobs will disappear.
Each individual company has to face up to what its policy is going to be. Are they going to be a relatively enlightened organisation that tries hard to make adjustments? Or are they going to be hard-nosed and kick people out on the streets, which is more what you’ll see in the US, I’d imagine.
AI and automation are a relentless force that will, over time, be very hard to resist. It is this rapid erosion of jobs that requires our most urgent attention.
Martin Ford is a futurist and author of The Rise of the Robots: Technology and the Threat of Mass Unemployment