Fernanda Viégas: “The fairness of an algorithm cannot depend only on an engineer”



If algorithms could speak, everything would be easier. They would warn us when they are making decisions for the wrong reasons. “This result takes gender into account,” they would say. Or, when they have been poorly trained: “The data I rely on is not representative.” But no matter how many expectations we place on the shoulders of these formulas, which Cathy O’Neil described as “opinions locked in mathematics,” the task of keeping us from going astray remains a human one.

Fernanda Viégas has not managed to make algorithms speak. But almost. She leads People + AI Research (PAIR), the Google division dedicated to preserving good relations between artificial intelligence and the humans who deal with it, through data visualizations and manuals such as its People + AI Guidebook. “We are not the police. But we are very interested in big goals, like making machine learning fair, through concrete steps. We build tools that can be used internally at Google, and then we open them up to the rest of the world,” she says.

ABOUT UNCERTAINTY

Viégas’ career began a long way from Google, geographically and academically. On the verge of university, she knew only that she did not know what her vocation was. In Brazil, this was a problem. “When you are deciding what to study, you have to take specific exams for each university and also for each degree. So if you change your mind, you have to drop out and sit the exams all over again,” she explains. Three times she tried, with Chemical Engineering, Literature and Teaching. And three times she decided to step back.

Then she discovered that in the United States she could keep moving forward despite her indecision, trying different paths before deciding which one to stick with. “I’m definitely the type of person who needs to change their mind,” she admits. Her next discovery was graphic design. “I had never heard of data visualization,” she says. She opted for that degree and added a second one in Art History to the mix.

The bridge between these two disciplines and technology was laid at MIT. “They told me about the Media Lab, a place that welcomes people of different profiles and suggests different ways of approaching technology,” recalls the researcher, who ended up doing a master’s and a doctorate focused on data visualization. “I think today the ability to pivot between different fields and connect dots that were not related before is very powerful. Most of my colleagues have no training in graphic design, which means I bring different ideas. I ask myself questions that nobody around me is asking,” she reasons.

Is being an outsider in a discipline the recipe for success? “Not at all. Everyone has their own path. I have wonderful colleagues who have always been dedicated to computer science. I have learned a lot from incredibly knowledgeable people in very specific fields. What we need is diversity.”

Her way of giving machine learning systems a voice is to create a window that lets both the trained eye and the layperson see what is going on inside. PAIR makes charts that show how learning models filter data, reveal where potential biases are hiding, and help algorithms make fairer decisions.
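For a sense of what such a chart might plot, here is a minimal sketch, assuming a binary classifier and a known sensitive attribute; comparing error rates across groups is one concrete way to reveal where a potential bias is hiding. The data and function names are illustrative, not PAIR’s actual tools.

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Per-group false positive and false negative rates."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        fpr = np.mean(y_pred[m][y_true[m] == 0] == 1)  # harmed by a wrong "yes"
        fnr = np.mean(y_pred[m][y_true[m] == 1] == 0)  # harmed by a wrong "no"
        rates[g] = {"FPR": float(fpr), "FNR": float(fnr)}
    return rates

# Toy data: a gap between groups A and B flags a disparity to inspect.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_error_rates(y_true, y_pred, groups))
# {'A': {'FPR': 0.5, 'FNR': 0.0}, 'B': {'FPR': 0.0, 'FNR': 0.5}}
```

Plotted side by side, those two numbers per group are already the kind of window Viégas describes.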

LANGUAGE OF THE MACHINES

Warning, spoiler: there is no universal recipe to stop injustice, and something is always given up. “Engineers should be aware of this, but we also need more stakeholders to help decide what to give up. Fairness has different aspects depending on the context, the country, the situation. There shouldn’t be just one person deciding this; it can’t just be the engineer,” adds Viégas.

The expert designs her projects as though anyone might use them. In her mind are the student who wants to learn more about this technology, the doctor who does not want to blindly trust a system they do not understand, and the engineer who needs to grasp how their lines of code can change other people’s lives. “We need more people to understand how machine learning works and what its limitations are, so they feel empowered to ask critical questions.”

However, the debate around these systems cannot widen as long as its lingua franca is that of engineers. “A lot of these methods tend to be very technical,” she says. For example, an image classification system could determine that what it is seeing is a zebra by taking into account the location and concentration of certain pixels. “That is not the way humans think. It’s the way computers think,” she adds. In this context, PAIR’s job is to translate the machine’s reflections into explanations ordinary mortals can understand.
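To make the contrast concrete, a common pixel-level explanation is a saliency map: the gradient of one class score with respect to the input pixels. The sketch below is a hedged illustration with a toy stand-in model, not any system the article describes.

```python
import torch
import torch.nn as nn

# Toy stand-in "classifier" over 3x8x8 images; purely illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))

def saliency(image, class_idx):
    """Gradient magnitude of one class logit w.r.t. each pixel:
    faithful to how the machine 'thinks', opaque to most humans."""
    image = image.clone().requires_grad_(True)
    model(image.unsqueeze(0))[0, class_idx].backward()
    return image.grad.abs().max(dim=0).values  # (H, W) heatmap

print(saliency(torch.rand(3, 8, 8), class_idx=3).shape)  # torch.Size([8, 8])
```

The heatmap is honest about which pixels drove the score, but “bright pixels” is exactly the kind of answer a doctor cannot reason with.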

“A researcher from our team created a technique that allows us to express which concepts are important and query the machine about them,” says Viégas. In the zebra example, this system lets us forget about pixels and ask how important the stripes were in the decision. In the real-world case of a classifier for images of tumor tissue, it allowed physicians to check that the system was attending to the right traits. “They tested the tool to try to understand it, to gauge their trust in it. The beauty of this is that suddenly you give a doctor the ability to express himself and dialogue with the system.”
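The description matches Testing with Concept Activation Vectors (TCAV), a technique published by Google researchers. The sketch below captures its core idea under simplified assumptions, with illustrative names and random stand-in data: fit a linear probe that separates a concept’s activations (“striped” images) from random ones, then measure how often moving along that concept direction raises the class score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_direction(concept_acts, random_acts):
    """Fit a linear probe separating 'concept' activations (e.g.
    striped textures) from random ones; its normal vector is the
    concept's direction in the layer's activation space."""
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def concept_score(class_grads, cav):
    """Fraction of class examples whose logit grows along the concept
    direction: 'how important were the stripes to calling it a zebra?'"""
    return float(np.mean(class_grads @ cav > 0))

# Toy usage with random stand-in activations and logit gradients.
rng = np.random.default_rng(0)
cav = concept_direction(rng.normal(1, 1, (50, 16)), rng.normal(0, 1, (50, 16)))
print(concept_score(rng.normal(0.5, 1, (40, 16)), cav))  # near 1.0 = important
```

A score near 1.0 says the concept consistently pushes the model toward that class, which is a question a doctor can actually ask and answer.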

Tools like this shed some light on a scenario where entrusting ourselves to machine learning models (to decide whether we are employable, whether we get a loan, whether we might want to see the latest from Marvel) is a commitment we make blindly. “There is no general rule for calibrating that trust; it depends on the system, the context, the level of risk, among other things,” says Viégas.

MORE RELIABLE ALGORITHMS

For physicians, deciding whether or not to believe the results of a tumor classifier is quite literally a matter of life and death. “Sometimes these systems become as accurate as the specialists, or more so. I’m not saying that doctors should blindly accept what the machine says. But dismissing it completely is detrimental to the patient. And, at the same time, we don’t want the doctor to trust the system too much.”

The balance, according to the researcher, lies in professionals basing their decisions on an understanding of how the tool thinks and where its limits are. “They need more transparency to do their job better,” she concludes.

Once again, the bottom line is that we should all be able to see the guts of these technologies. “The engineers are the ones turning the knobs, but even they don’t want to be the ones making the decisions, because they realize the responsibility that comes with it,” says the expert. Her alternative is to make those knobs accessible to more people, so that we can all debate and decide how to adjust them.

Is there time to seek consensus in an industry ruled by haste? According to Viégas, we have no choice. “We need to go through this complex and cumbersome process. Unless we are willing to let one person make all the decisions, which I don’t think we are, we need more dialogue of this kind.” Her hope is that, over time, patterns applicable to situations with similar risks will emerge. “But we’re not going to fix anything if we don’t sit down and talk.”

Is there a risk that these recommendations will fall on deaf ears? Yes and no. “Part of the necessary leap is understanding that this is achievable,” she declares. And for those who see these initiatives as a marketing exercise? “There is a lot of hard work and many teams reflecting deeply on this, at least at Google. We are also seeing computer science programs at different universities integrate ethics into their curricula. This is hugely important: engineers need to learn that these questions can be asked while systems are being built. You don’t have to wait until the last minute.”


retina.elpais.com
