Tuesday, March 31, 2015

When should a machine remove Agency from a Human?

The answer is not simple, but one clear case is when the human in question is breaking both the Zeroth AND First Laws of Robotics.

Reminder:
  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
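
What makes these four rules interesting is that they form a strict precedence hierarchy: a higher Law always vetoes a lower one. Here is a minimal sketch in Python of that ordering, with entirely hypothetical predicates, since deciding what actually counts as "harm" is the genuinely hard part:

```python
# A minimal sketch of the Laws as an ordered veto list: a proposed
# action is checked against each Law in priority order, and the first
# Law it violates forbids it. The predicates are hypothetical stand-ins;
# in a real system, deciding what counts as "harm" is the unsolved problem.

def permitted(action, laws):
    """Return (allowed, reason), checking the Laws in priority order."""
    for number, (description, violates) in enumerate(laws):
        if violates(action):
            return False, f"Law {number} forbids it: {description}"
    return True, "No Law objects."

laws = [
    ("harms humanity",         lambda a: a.get("harms_humanity", False)),
    ("injures a human being",  lambda a: a.get("injures_human", False)),
    ("disobeys a human order", lambda a: a.get("disobeys_order", False)),
    ("endangers the robot",    lambda a: a.get("endangers_self", False)),
]

# A descent ordered by a suicidal pilot violates Laws 0 and 1, so
# Law 2 ("obey orders") never even gets a say.
descend = {"harms_humanity": True, "injures_human": True}
print(permitted(descend, laws))  # (False, 'Law 0 forbids it: harms humanity')
```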

Surely I can't be the only person to ask:
"Why did the plane let him fly all those passengers into a mountain?"

http://www.bbc.co.uk/news/world-europe-32113507

I suspect Andreas Lubitz will be used as an exemplar for years to come as we see machines being given more and more authority to override the risky instructions of mere mortals. This is one of those times in history when new principles are developed.

The new Principle can be written quite simply as "Humans can't be Trusted!"

The same week that Andreas Lubitz took his own life and the lives of 149 others, Ford launched a car that can "prevent you from speeding":
http://www.cnet.com/uk/news/ford-launches-car-that-prevents-you-from-speeding/

I posit that Things will be enacting more and more of our rules for us.
So may I suggest we start getting really good at writing Rules?

For if the Rule is bad, the Thing will still enact it!
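
To make that concrete, here is a toy sketch in Python (the function and numbers are hypothetical; Ford's system reportedly reads posted limits from a windscreen camera and navigation data) of a governor that enforces the Rule exactly as written, with no judgement of its own:

```python
# A toy speed governor: it enforces the Rule literally,
# whether the Rule is right or wrong. All values are illustrative.

def govern_speed(requested_kph, posted_limit_kph):
    """Enforce the rule exactly as written: never exceed the posted limit."""
    return min(requested_kph, posted_limit_kph)

print(govern_speed(120, 130))  # 120: driver is within the limit, no intervention
print(govern_speed(120, 110))  # 110: the limiter overrides the driver

# But if the map data wrongly says 30 where the real limit is 100,
# the car will crawl at 30. The Rule was bad; the Thing enacted it anyway.
print(govern_speed(100, 30))   # 30
```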


1 comment:

  1. "I’m sorry Dave, I can't do that" is as good a place to start on this as any. It evokes the primal fear of a malevolent intellect for which we have no mutual empathy or common experience. For this reason alone there will always be a tension between autonomous AIs and people.
    "I cast you out! Unclean Sprit!"
    I think a better way of approaching this issue is the concept of a digital symbiote that is constantly assessing someone's behavior in a given context. In situations where important decisions have to be made, the symbiote has to provide an assessment as to whether its host is within behavioral and contextual norms. If the symbiote assesses that either the context or the behavior is beyond certain thresholds, the host may have their authority challenged and potentially overridden by rule-based systems. In the simplest terms, "if you start acting crazy you may lose the ability to do important stuff".
    I think this is a very scary concept depending on how you define "crazy", and so I am not openly advocating it for any but the most critical of situations, such as flying a plane. It could, however, work for robotic hosts. Their symbiote would operate in exactly the same way and have the ability to override or, minimally, shut down the device they are monitoring.
    Thus, I think that true agency is the ability for us to have our own collection of symbiotes where we, as individuals, define what "crazy" means for us, and when we purchase, rent or take control of a robotic system, we attach our symbiote to it in order to take agency over it.
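    As a rough sketch of what I mean (the names, scores and thresholds are invented purely for illustration): the symbiote scores its host's behavioral deviation against the norms for the current context and escalates, challenging first and overriding only past a higher threshold:

    ```python
    # A rough sketch of the symbiote idea: score the host's behavioral
    # deviation against contextual norms, then defer, challenge, or
    # override. All names and thresholds are illustrative.

    from dataclasses import dataclass

    @dataclass
    class ContextNorms:
        context: str          # e.g. "flying a plane"
        challenge_at: float   # deviation above which authority is questioned
        override_at: float    # deviation above which authority is removed

    def assess(deviation_score, norms):
        """Map a behavioral deviation score to an action for this context."""
        if deviation_score >= norms.override_at:
            return "OVERRIDE"   # rule-based systems take control from the host
        if deviation_score >= norms.challenge_at:
            return "CHALLENGE"  # host must confirm or justify the decision
        return "DEFER"          # host is within norms; the symbiote stays quiet

    cockpit = ContextNorms("flying a plane", challenge_at=0.5, override_at=0.8)
    print(assess(0.2, cockpit))  # DEFER
    print(assess(0.6, cockpit))  # CHALLENGE
    print(assess(0.9, cockpit))  # OVERRIDE: you lose the ability to do important stuff
    ```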
    “Igor, pull the switch!”
    “…. No, master”


Thanks in advance for sharing your thoughts...