Wednesday, September 27, 2023

Boston Dynamics calls for an end to arming robots. What's going on?

Boston Dynamics, along with other companies in the robotics industry, has signed an open letter announcing that they will not weaponize their robots and will not support other companies in doing so. What has happened to bring us to this point?

Julián Estévez Sanz , University of the Basque Country / Euskal Herriko Unibertsitatea

Robots armed with explosives in the US

At the end of November 2022, the news broke that the San Francisco police could use robots capable of killing. According to the department, “this type of robot will only be used in extremely dangerous situations, to protect innocent lives.” It added that, exceptionally, “the machines may be equipped with explosives.”

But armed robots had been used before. This is precisely what happened in Dallas in 2016. A sniper had killed five police officers and wounded seven others. In those circumstances, the police claimed they were forced to send a robot, a Packbot, towards the criminal.

The scene: the robot approaches, dragging an electrical buzz from its gears. Bewilderment for the sniper. Is it bringing the demands he has negotiated with the police? The robot keeps getting closer. A little bit closer. The criminal hesitates over whether to shoot it; he just yells nervously at the police officers. He doesn’t understand what is happening.


Bomb disposal robot iRobot PackBot 510 of the Spanish Army. wikimedia commons

The robot gets a little closer, and boom! Its C-4 explosive charge kills the sniper, Micah Johnson. Johnson was black, a US Army Reserve veteran of the Afghanistan War who was reportedly angered by police shootings of black men and stated that he wanted to kill white people, especially white police officers. It is believed to be the first time a US police department used a robot to kill a suspect. In 2018, the police officers involved were acquitted at trial.

Digidog, the New York Police robot dog

In 2020 and 2021 it was the New York police that began using the Digidog robot, created by Boston Dynamics. They used it to inspect suspicious homes, including an apartment in the Bronx after a kidnapping. It has also been used to negotiate with the kidnapper of a mother and her baby, and even to bring food to some robbers.

In all these cases, the explanations given by the security forces for deploying the robot were rather scant. And that is really the ethical problem with these machines. Can the use of this type of weapon guarantee respect for citizens’ rights and avoid unnecessary violence against a criminal? Can each of the robot’s actions be justified in detail?

Following the apartment search in the Bronx, Congresswoman Alexandria Ocasio-Cortez raised the alarm that Digidogs were being deployed only in lower-income neighborhoods.

At this point in the text, there are plenty of reasons to understand why Boston Dynamics, along with other companies in the robotics industry –Agility Robotics, ANYbotics, Clearpath, Open Robotics and Unitree Robotics– have signed an open letter announcing that they “will not weaponize their advanced-mobility general-purpose robots or the software they develop that enables advanced robotics, and will not support other companies in doing so.”

Careful, though: the devil is in the details. In the letter they do not clarify (deliberately?) what they mean by general-purpose robots. In addition, they leave the door open for these robots to be used in surveillance or person-recognition missions, as they already are.

The war in Ukraine, and earlier conflicts, have also highlighted the use of robots, especially drones, on the battlefield.

Both police and military robots fall under the ethical debate on autonomous weapons. In fact, their regulation is currently being debated at the United Nations.

China will not sign any agreement on the use of autonomous weapons

A century ago the philosopher and political theorist Antonio Gramsci urged pessimism of the intellect and optimism of the will. However, there does not seem to be any will to reach an agreement. To begin with, China has already announced that it does not plan to sign one, and the United States and other major military powers are investing heavily in this type of armed robotic system. The conflict in Ukraine has only accelerated the trend.

This ethical debate rests on some very slippery foundations. Can we consider an automatic missile guidance system, like those that have existed for decades, to be an autonomous weapon? Or an anti-personnel mine, which does not distinguish between allies and enemies?

The scientific community does not yet even have a clear definition of what an autonomous weapon is.

China claims that its interest in developing AI weaponry is not about the automated killing of people, but rather about predictive maintenance, battlefield analysis, autonomous navigation and target recognition. Perhaps in the military world it makes no sense to field an autonomous weapon that behaves unpredictably and kills without human control.

Drones in the spotlight

In 2021, numerous media outlets reported that, for the first time in history, a drone had killed a victim completely autonomously in the Libyan conflict. However, that drone never existed, according to the manufacturer itself. But the correction was barely reported.

Resistance to change has always channeled human impulses and anxieties. The feat of the Wright brothers and their flying machine was still very recent when a great popular controversy arose about the possible use of these devices in warfare. H.G. Wells’s 1908 novel The War in the Air is evidence of that concern.

Given all this commotion, on December 7 the city of San Francisco reversed course and will not allow, for the moment, its police to deploy robots capable of killing.

David Collingridge, a professor at Aston University in the United Kingdom, published The Social Control of Technology in 1980, setting out the dilemma that bears his name, Collingridge’s dilemma: when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming.

Surprisingly, this paradox is highly topical today.

Julián Estévez Sanz , Professor and researcher in Robotics and Artificial Intelligence, University of the Basque Country / Euskal Herriko Unibertsitatea

This article was originally published on The Conversation . Read the original .
