Robotics: Military intelligence, of course | Opinion


A United States Air Force drone at Kandahar Airport, Afghanistan. Josh Smith / Reuters

The three laws of robotics that novelist Isaac Asimov invented in the mid-20th century, when not even the kitchen robot existed, have for a few years been pushing to jump onto the nonfiction shelf. They are simple and elegant: a robot will never harm a human; it will obey humans as long as that does not violate the first law; and it will protect itself as long as that does not violate either of the above. Robotics experts have recognized for decades that the third law is useful, because any autonomous system must protect itself if it is not to disintegrate or perish. That is good engineering. What is under discussion now is the first law, that a robot must not harm a human, and that is not so much a question of engineering as of ethics and international politics. And it is a very important question.
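
To make the laws' elegance concrete: they form a strict priority ordering, in which each rule counts only when no higher rule has already settled the matter. A minimal sketch in Python, purely illustrative; every name and attribute below is a hypothetical invention, not anything taken from Asimov or from a real robotics system.

from dataclasses import dataclass

# Hypothetical sketch: the three laws read as an ordered filter over
# candidate actions. All names here are invented for illustration.

@dataclass
class Action:
    name: str
    harms_human: bool     # would the action injure a human being?
    obeys_order: bool     # does it carry out a human order?
    preserves_self: bool  # does it keep the robot intact?

def choose(candidates: list[Action]) -> Action | None:
    # First law: discard anything that harms a human, with no exceptions.
    safe = [a for a in candidates if not a.harms_human]
    # Second law: among safe actions, prefer obedient ones; any order
    # that would require harming a human is already gone.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third law: among what remains, prefer self-preservation.
    surviving = [a for a in obedient if a.preserves_self] or obedient
    return surviving[0] if surviving else None

# Example: ordered to advance into a crowd, the robot refuses (the first
# law outranks the second) and falls back to a safe action.
options = [
    Action("advance into crowd", harms_human=True, obeys_order=True, preserves_self=True),
    Action("hold position", harms_human=False, obeys_order=False, preserves_self=True),
]
print(choose(options).name)  # -> "hold position"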

Computer scientists themselves have been clamoring for international regulation of artificial intelligence since 2009, when the elite of the field held a notable conference in Asilomar, on Monterey Bay, California. The venue itself was a telling sign of their intent, because Asilomar was precisely where molecular biologists had met 35 years earlier to recommend international safety standards for the genetic modification of viruses and bacteria. In 2009, the main concern of their heirs, the computer scientists, was the military use of artificial intelligence. It still is, with growing intensity and political pressure.

The United States Congress created a National Security Commission on Artificial Intelligence (NSCAI) two years ago, and it presented its report in March. Anyone looking there for a call for international regulation of robotic risk is in for the disappointment of their life, because what the NSCAI recommends is accelerating artificial intelligence technologies to preserve national security and avoid losing the opening set against China and Russia. This recalls the key idea behind the creation of the Manhattan Project: if the atomic bomb was possible, the United States had to build it before Hitler did. The regulations would come later. It’s military logic, isn’t it? The NSCAI now advocates “the integration of artificial intelligence technologies in all facets of warfare.” Full steam ahead!

The diplomatic counterweight is coming from Europe, as usual. In January, the European Parliament issued guidelines proposing that artificial intelligence for military use should in no way replace human decision-making. Another law in Asimov’s style, although I suspect it can be derived from the original three (I leave that as homework for the reader). In April, the European Commission published the first international legal framework intended to ensure that artificial intelligence is “safe and ethical”. Computer scientists lean more toward the European side: 4,500 of them have spoken out against letting a robot make the decision to kill a person. The problem is those who cannot speak, because they work for the Pentagon.


elpais.com
