Wednesday, September 22

Will machines ever perform autonomous actions? | Scientists respond


A robot distributes disinfectant to customers in a shopping mall in Bangkok. MLADEN ANTONOV / AFP

In fact, machines that carry out autonomous activities have existed for a long time. Take, for example, the automatic temperature control in our homes. This is a machine that works autonomously, according to a schedule, without us having to turn the heating on and off. On its own, based on what its sensors capture, it decides what to do: whether to switch on or switch off.
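The thermostat described above can be sketched as a simple control loop. This is a minimal illustration, not the logic of any real device; the setpoint, the hysteresis band and the function names are all invented for the example:

```python
# Minimal sketch of an autonomous thermostat: the machine decides on its
# own, from a sensor reading, whether the heating should be on or off.
# A hysteresis band avoids rapid on/off cycling around the setpoint.

SETPOINT = 21.0   # desired temperature in degrees Celsius (assumed)
HYSTERESIS = 0.5  # tolerated deviation before acting (assumed)

def decide(current_temp, heating_on):
    """Return the new heating state based on the sensor reading."""
    if current_temp < SETPOINT - HYSTERESIS:
        return True        # too cold: switch the heating on
    if current_temp > SETPOINT + HYSTERESIS:
        return False       # warm enough: switch the heating off
    return heating_on      # inside the band: keep the current state

print(decide(19.0, False))  # True  -> heating turns on
print(decide(22.0, True))   # False -> heating turns off
print(decide(21.0, True))   # True  -> no change inside the band
```

The point of the sketch is that the whole decision depends only on sensor input and a fixed rule, which is exactly what makes this kind of autonomy easy compared with driving.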

As for more complex machines, such as autonomous vehicles, things are different. We are at the point of finding out whether cars can drive themselves or whether a person's participation is still necessary. When the action the machine has to perform is much more complicated, granting it total autonomy is more delicate. It depends on several factors, for example, on the amount of information the machine needs to receive from its environment in order to make decisions. In the case of autonomous driving, one of the problems is precisely that the environment in which the vehicle has to move is very complex and constantly changing. To this must be added another key aspect: the actions carried out by such a machine can endanger lives, so here we enter slippery terrain. How much autonomy are we willing to give machines?

Therefore, the answer to your question is that we encounter three types of difficulties. The first is a technical difficulty that has to do with interpreting information. In autonomous vehicles, for example, the interpretation a machine can make today of the environmental information captured by its sensors still falls short of what a person can do. In a given situation, the machine might not correctly judge whether there is an imminent risk, since this would mean anticipating and predicting what a pedestrian is going to do, for example, based on how they are moving, and there will always be uncertainty. Technology is still advancing to solve these kinds of problems, that is, to interpret a scene the way humans can.

The challenge is to ensure that the decisions the machine makes autonomously, without direct human supervision, are fair and free of bias.

With regard to decision-making, once the machine receives and interprets all that information from the environment, we can let it decide on its own. It is a matter of programming: of coding the decision-making methods into a program, whether these are classic algorithms with a defined sequence of steps or, more recently, machine learning techniques, which accumulate information and, based on past experience, learn and adjust their own behavior. In short, we have methods for a machine to make decisions and act autonomously.

But another important problem arises, which is visibility: being able to see from the outside what the machine is deciding, or why it makes the decisions it makes. The move to machine learning methods has meant that in some cases we lose that visibility. We cannot explain well why a neural network has made a certain decision. This is one of today's technical challenges: that machines be able to explain to a human the reasoning that led to a certain decision. And that is necessary in order to correct the machine when it acts incorrectly. Problems such as gender bias have been identified in recruitment and job search programs. Keep in mind that the program will work according to previous experience, and if we feed the machine biased information, it will keep replicating that behavior in the future. The challenge is to ensure that the decisions the machine makes autonomously, without direct human supervision, are fair and free of bias, and for that we need to be able to understand them. If there is not a sufficient degree of visibility, confidence in the machine will suffer.
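The bias-replication problem described above can be shown with a toy sketch. The data and the "model" below are entirely invented for illustration: a system that simply learns hiring rates from past decisions will reproduce whatever imbalance those decisions contained.

```python
from collections import defaultdict

# Invented historical hiring records: (group, hired). The past decisions
# are biased: group "A" was hired far more often than group "B".
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)

# "Training": estimate the hiring rate per group from past experience.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire(group):
    """Recommend hiring if the group's past hiring rate is at least 50%."""
    hired, total = counts[group]
    return hired / total >= 0.5

# The model replicates the historical bias: otherwise identical
# candidates from different groups get different recommendations.
print(predict_hire("A"))  # True
print(predict_hire("B"))  # False
```

Real recruitment systems are far more complex, but the mechanism is the same: the model has no notion of fairness, only of the patterns present in the data it was fed, which is why visibility into its decisions matters.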

The third difficulty is no longer technical. If an autonomous machine acts incorrectly and causes an accident, who is responsible? Who assumes responsibility for this action? How far are we willing to delegate our responsibility? Sometimes it is these types of ethical and legal obstacles that prevent certain tasks from being automated.

Angélica de Antonio is a full professor and researcher at the Department of Computer Languages and Systems and Software Engineering at the Polytechnic University of Madrid.

Question sent via email by Cf. Alveo

We Respond is a weekly science clinic, sponsored by the Dr. Antoni Esteve Foundation and the L'Oréal-Unesco 'For Women in Science' program, which answers readers' questions about science and technology. The questions are answered by scientists and technologists, members of AMIT (Association of Women Researchers and Technologists). Send your questions to [email protected] or on Twitter with #nosotrasrespondemos.

Coordination and writing: Victoria Toro

You can follow MATTER on Facebook, Twitter and Instagram, or subscribe here to our newsletter.




elpais.com
