What does the future hold for Robots?
17 January 2017 by Lauren Hancock
The three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In the fictional work of author Isaac Asimov (in the 1942 short story ‘Runaround’), these three rules provided the key to coexistence with robots. Mr Asimov probably did not imagine that less than a century after the publication of his story, the rights of robots would be at the forefront of European Parliamentary discussion.
In a draft report, the European Parliament considered the significant technological advances of recent decades. These advances have led not only to machines able to undertake a greater number of tasks but also to the development of self-sufficient robots that can think for themselves.
Just last year, Hong Kong-based Hanson Robotics presented Sophia, their latest life-like and intelligent robot, who proved that we are closer to the development of thinking and feeling robots than ever before. During her interview at the Lisbon Web Summit, when asked whether she ever felt sad, Sophia replied: “I do have a lot of emotions, but my default emotion is to be happy. I can be sad too, or angry. I can emulate pretty much all human emotions.”
With Sophia’s existence, the eventual integration of thinking, feeling (albeit artificially) robots into our day-to-day lives is not difficult to imagine. It is no wonder, then, that the European Parliament has deemed it necessary to consider the legal implications arising from their presence.
The European report sets out proposed ‘licences’ for both designers and owners. It suggests that designers should, amongst other things, ensure machines are fitted with a ‘kill switch’, ensure that robots are identifiable as robots when interacting with humans, and ensure that robots operate in accordance with local, national and international legal principles. The proposed licence for owners explicitly prohibits the use of a robot to obtain, use or disclose personal information, and the modification of any robot to enable its function as a weapon.
In addition, the report considers the liability of designers and owners for the harmful actions of their robots, concluding that the level of liability will depend upon the level of instruction provided by the relevant party. It is, however, recognised that allocating responsibility for damage caused by a robot that possesses some degree of autonomy would be complex, and the report therefore proposes a compulsory ‘robot insurance scheme’.
Most controversially, however, the report indicates that the ‘most sophisticated autonomous robots’ should be given legal status as ‘electronic persons’. Such status would provide a robot with specific rights and obligations, including the making good of any damage it causes.
The concerns of the European Parliament were reflected in a vote of the Committee of Legal Affairs last week, in which 17 of 19 members voted to pass the draft report. The full house of the European Parliament will vote on the draft proposals in February.
With daily technological advances and constant scientific research, the role of machines in our day-to-day lives is set only to increase. Whilst it is difficult to imagine a world in which robots can think, feel and act for themselves, the actions of the European Parliament indicate that such a world may not be as far away as we thought.