A week is a long time in politics, especially when weighing whether robots should have the freedom to kill people on the streets of San Francisco.
In late November, the city’s Board of Supervisors granted local police the right to kill a suspect using a remote-controlled robot if they believed that failing to act would endanger members of the public or the police. The justification for the so-called killer robots plan was that it could prevent atrocities like the 2017 Mandalay Bay shooting in Las Vegas, which killed 60 people and injured more than 860, from happening in San Francisco.
But less than a week later, the same supervisors reversed course, sending the plan back to committee for further review.
The reversal came in part because of the huge public outcry and lobbying that followed the original authorization. Concerns arose that removing humans from decisions of life and death was a step too far. On December 5, protests were held outside San Francisco City Hall, and one supervisor who had approved the decision later said he regretted his vote.
“Despite my own deep concerns with the policy, I voted for it after additional guardrails were added,” Gordon Mar, supervisor for San Francisco’s fourth district, tweeted. “I regret it. I’ve grown increasingly uncomfortable with our vote & the precedent it sets for other cities without as strong a commitment to police accountability. I do not think making state violence more remote, distanced, & less human is a step forward.”
The question the San Francisco authorities were really grappling with is the value of a life, says Jonathan Aitken, senior lecturer in robotics at the University of Sheffield in the UK. “The processes surrounding the use of lethal force are always deeply considered, in both policing and military situations,” he says. Those deciding whether to take lethal action need key contextual information about a situation in order to weigh that decision carefully, context that remote operation cannot fully provide. “Small details and elements are crucially important, and the spatial separation removes these,” says Aitken. “Not because the operator may not consider them, but because they may not be contained within the data presented to the operator. This can lead to mistakes.” And mistakes, when it comes to lethal force, can literally mean the difference between life and death.
“There are very few positives to arming robots,” says Peter Asaro, an assistant professor at The New School in New York who researches the automation of policing. He believes the decision is part of a broader movement toward the militarization of the police. “You can conjure up a scenario where it might be useful, like a hostage situation, but there are all kinds of ways for mission creep to set in,” he says. “That’s detrimental to the public, and particularly to communities of color and poor communities.”
Asaro also discounts the argument that guns on the robots could be swapped for bombs, saying that the use of bombs in a civilian context could never be justified. (Some police forces in the United States currently use bomb-disposal robots to intervene; in 2016, Dallas police used one such robot, armed with explosives, to kill a suspect in what experts called an “unprecedented” moment.)