About 80 years ago Isaac Asimov proposed the now-famous Three Laws of Robotics, which remain influential in science and technology circles to this day.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Now, legal academic and artificial intelligence expert Frank Pasquale has added four more to them, apparently because innovation is not always good for humanity.
So here they are:
New law 1: AI should complement professionals, not replace them
Am I being cynical if I see this as simply the First Law again, with the definition of harm widened to include social and economic harm? We all need to feel our work is valued by society – from garbage collector to neurosurgeon – and that there is no substitute for us. That is what this extension seems to be about: we should not be replaced.
New law 2: Robotic systems and AI should not counterfeit humanity
Devices should not be developed to mimic human emotions. Bit late there, Dr Frank! That is exactly how we have been making them usable and relatable – from simple chatbots to virtual assistants, mimicry is what it is all about.
New law 3: Robotic systems and AI should not intensify zero-sum arms races
The development of robotic weapons systems is on the rise (hi, Skynet – of Terminator franchise fame).
This takes us back to the First Law – such systems are acceptable only as long as they do not harm people or allow people to come to harm. But as we start to rely on them to make value judgements about who should live and who should die (in your autonomous car, or in an overloaded emergency room), where does our absolute superiority come from?
New law 4: Robotic systems and AI must always indicate the identity of their creator(s), controller(s) and owner(s)
I’m all for labelling devices like this so we can make an informed choice at the point of purchase. Dr Pasquale makes the point that we also need someone to blame whom we know how to punish – how do you punish an AI, apart from turning it off (killing it)?
So I ask you to pop into your time machine, head back 250 or so years in America, and substitute “slave” wherever I have written “robot” or “AI”.
Inspired by the Future Tense podcast