Friday, February 3

How we will ensure that artificial intelligence does not get out of hand


AI does not need to become a Skynet-esque supervillain for this debate to be worth having. The potential of artificial intelligence is enormous, and we have only scratched the surface. How do we make its impact beneficial to everyone? That question came up again and again at the Global AI Summit 2022.

We took advantage of our presence there to speak with various experts in regulation and the ethical use of algorithms. These are their reflections on how we will ensure that AI does not get out of hand.

It affects us all. Artificial intelligence is a tool. Sometimes it seems to “work on its own”, but nothing could be further from the truth: behind it there is an algorithm and a database it relies on. Although most agree that we will increasingly let AI take on more of the work, the experts also stress that humans must remain at the center.

Constanza Gomez Mont, CEO of C Minds, a UNESCO partner organization for ethical AI, sums it up this way: “AI is a transversal technology. There is no way that it does not touch all sectors. That is why a critical look is needed. Maintain a public discourse where the risks are taken into account.”

Constanza Gómez Mont

We have seen AIs in the art world, in healthcare and even in restaurants, each more surprising than the last. “The approach is different in each sector, but you have to find where the intersection is. Each from their own trench,” says Constanza. Although AI is a very generic concept, we must try to find common ways of dealing with it.

You have to go to the root: the data. It is the most repeated mantra: data, data and more data. AI, in the end, is pattern detection in data. Unfortunately, the databases on which many artificial intelligences work are not always the most accurate or appropriate.

“If there is no diversity in the data, the result will inevitably be biased.” Constanza explains that better policies must be applied when reviewing the data and preparing it properly. It is not just a matter of the data being well organized, but of it being diverse enough. If, for example, there are only white men in the database, it is impossible for the AI to produce diverse results. Biases in the source data are replicated, and AI must be prevented from helping to perpetuate those asymmetries.
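The bias-replication mechanism Constanza describes can be sketched in a few lines. This is a deliberately trivial toy, not a real training pipeline: the groups, outcomes and the `train_majority` helper are invented for the illustration. A model that only memorizes outcomes per group cannot correct the skew in its history, and cannot say anything at all about a group it has never seen.

```python
# Toy illustration (all names invented): a "model" that memorises, per group,
# the most frequent historical outcome. It can only reproduce its data.
from collections import Counter

def train_majority(rows):
    """Record, for each group seen in the data, its most common outcome."""
    by_group = {}
    for group, outcome in rows:
        by_group.setdefault(group, []).append(outcome)
    return {g: Counter(o).most_common(1)[0][0] for g, o in by_group.items()}

# Historical data that only ever contains one demographic group.
history = [("group_a", "hired")] * 80 + [("group_a", "rejected")] * 20

model = train_majority(history)
print(model.get("group_a"))  # "hired" -- the historical skew is replicated
print(model.get("group_b"))  # None   -- an unseen group is invisible to the model
```

The point is Constanza's: if a group is absent from the data, no amount of modeling can make the result plural; the fix has to happen at the data (and team) level.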

The solution? Organizations such as the one Constanza chairs explain that team diversity matters: “The data is a reflection of the teams.” If large companies want a more plural AI, then in addition to looking at their technology, they must look at the very people who build it. Once again, machines are a faithful reflection of their makers.


Regulate not the AI, but the problems AI can cause. “Europe is ahead on regulation.” Constanza notes the importance of the GDPR and the upcoming legislation on AI, but also looks at Latin America, where, despite the lack of a general regulation, countries such as Mexico, Uruguay and Brazil have AI protocols. “The conversation goes further now. When I started, it stopped there, but today the debate is much more mature. For example, people are already discussing AI’s energy sources and its environmental impact.”


Constanza’s view is that countries should not try to regulate the technology itself, since we don’t even know its impact yet. Instead, we should think about the problems that may come with it and regulate those. This ranges from putting limits on energy consumption to requiring that aspects such as diversity or accessibility for people with disabilities be taken into account. “None of these aspects are at odds with economic viability.”

“I don’t know if the public apparatus is up to speed on such fundamental issues. With such an accelerated pace, I don’t know if the debate is up to the task, but the important thing is that it at least exists and encompasses all sectors.”

The result of an AI is not the same if we put humans at the center. Seth Dobrin, founder of Qantm AI and former Chief Data Officer of IBM, has gone from managing a large multinational to running a small consulting firm focused on reviewing AI processes. For Seth, you need an “approach that puts the human at the center.” He explains it this way: “a bank’s AI will not offer the same result if it follows purely economic criteria as it will if it values the person as such when assessing whether they can pay a mortgage.”

Seth Dobrin

If we want AI not to get out of hand, its design has to consider the possible consequences from the beginning. “It’s very easy to focus on the technical side and forget about strategy,” he sums up. Right now all the effort is going into producing more powerful and precise AIs, but experts believe more debate is needed on whether those results, besides being precise, are also diverse enough and beneficial for people.

“Making AI more humanistic is not at odds with a good business model. In fact, it adds more value. In the end, your customers as a company are human,” says Seth. If engineers create AIs with potential biases and outcomes in mind, then it will be easier to review them in the future.

“More data isn’t always better. Sometimes it’s better to have a smaller but well-reviewed data set. You can always then train the AI for a specific use case,” Seth replies when asked whether there is, today, an insatiable thirst for data from every source.
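Seth's point can be illustrated with another deliberately trivial toy (the spam/ham data and the `majority_label` "model" are invented for the sketch): when a large scraped dataset is polluted by labelling errors, even the simplest model learns the noise, while a small hand-reviewed set recovers the right answer.

```python
# Toy illustration (all data invented): a "model" that just predicts the
# most common label, trained on noisy-big vs. clean-small datasets.
from collections import Counter

def majority_label(dataset):
    """The simplest possible model: predict the most frequent label."""
    return Counter(label for _, label in dataset).most_common(1)[0][0]

# Large scraped dataset: 100 examples of the same kind of message,
# but 60 of them were labelled wrongly during collection.
large_noisy = [(f"invoice #{i}", "ham") for i in range(60)] + \
              [(f"invoice #{i}", "spam") for i in range(60, 100)]

# Small reviewed dataset: 10 examples, every label checked by hand.
small_clean = [(f"invoice #{i}", "spam") for i in range(10)]

print(majority_label(large_noisy))  # "ham"  -- the labelling noise wins
print(majority_label(small_clean))  # "spam" -- matches the reviewed ground truth
```

The contrast is exaggerated on purpose, but the mechanism is the one Seth describes: volume cannot compensate for unreviewed labels.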


Algorithmic transparency. The results of AI fascinate us, but our understanding ends there. We see a beautiful DALL-E image, but we don’t understand what it is based on. We get a translation from Google, but questions remain about how our conversations are stored. “You have to bring the person whose data you’re using into the conversation,” explains Seth.

“There is a real need for algorithmic transparency,” point out the different experts consulted. “All participants must be made to understand the importance of analyzing the impact. If we do not make responsible use of AI, we will end up breaking the exponential growth trend and going backwards.”

“Fortunately, there are open source language models, energy efficiency is improving, and there is more and more talk about the importance of responsible AI. These are green shoots,” says Seth, who admits that much of this is marketing, but also sees real progress that is slowly gaining strength.

“We need to talk.” “Regulation is a necessity, especially in the field of health. I like the approach of the AI Act in Europe, where the outcomes are regulated and not the technology itself. It makes sense, because in the end you cannot regulate a programming language. Tomorrow it could be a totally different one,” says Seth.

In addition to Europe, other countries such as the United States have also understood the need to regulate AI. The category-and-risk approach has quite a few advocates. Regarding facial recognition, for example, they point out that its use could be acceptable in specific settings such as airport access control, but that mass video surveillance is more dangerous.
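The category-and-risk approach can be sketched as a simple lookup where obligations attach to the use, not to the technology. The tier names below follow the EU AI Act draft (unacceptable, high, limited, minimal), but the specific use-to-tier mappings and the `obligations` helper are illustrative assumptions, not the legal text.

```python
# Hedged sketch of risk-based regulation (mappings are illustrative, not
# the AI Act's actual annexes): the same technology, e.g. facial
# recognition, lands in different tiers depending on how it is used.
RISK_BY_USE = {
    ("facial_recognition", "airport_access"):     "high",          # allowed, strict rules
    ("facial_recognition", "mass_surveillance"):  "unacceptable",  # prohibited
    ("chatbot", "customer_service"):              "limited",       # transparency duties
    ("spam_filter", "email"):                     "minimal",       # no special duties
}

def obligations(tech, use):
    """Map a (technology, use) pair to its regulatory consequence."""
    tier = RISK_BY_USE.get((tech, use))
    return {
        "unacceptable": "prohibited",
        "high":         "conformity assessment + human oversight",
        "limited":      "transparency obligations",
        "minimal":      "no additional obligations",
    }.get(tier, "requires risk assessment")

print(obligations("facial_recognition", "mass_surveillance"))  # prohibited
print(obligations("facial_recognition", "airport_access"))     # conformity assessment + human oversight
```

This is why Seth prefers the approach: the table regulates outcomes and uses, and stays valid even if tomorrow the underlying technology is “a totally different one.”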

“We have to talk about AI and we have to do it now,” Seth concludes. “We have to look at how to regulate and how to educate future generations. If we don’t act now, the next generation will have a problem in 10 years.”


Define who is responsible. Mariagrazia Squicciarini, director of technology and innovation at the Organisation for Economic Co-operation and Development (OECD), acknowledges that we are far from a solution. “It is not easy to coordinate efforts to regulate artificial intelligence. It affects every sector and from many angles: health, employment, privacy…”

Mariagrazia Squicciarini

One advantage is that “most of the problems with AI are the same in most countries.” In other words, the challenges Europe faces with AI may be similar to those of the US or Latin America. But this similarity is also a challenge: if a company wants to create an AI and faces many obstacles in one place, it will simply move to the next country. How to keep AI evolving rapidly while addressing these problems is one of the questions facing the OECD.


“We are at a critical moment. We need to reach consensus. We have to define things well, such as who is responsible for an AI. The company? A corporate ethics chief? Whoever it is, we have to know whom we can turn to for information and accountability.”

And also define what each AI’s function is. Hassan Sawaf is CEO of aiXplain and former director of artificial intelligence at Facebook, Amazon Web Services and eBay. With more than 25 years of experience in the innovation departments of large technology companies, Hassan is quite clear about what is needed: “to understand AI and be able to adapt quickly.”

“What does it mean for an AI to be better? How do we properly define the benchmarks that establish that one AI is more powerful than another?” asks Hassan. The answer is not clear, but it is not simply the AI trained on the most parameters.

Hassan Sawaf

“We need to define the problems we want to solve. Change the benchmarks to get closer to the practical case,” he explains. Take GPT-3: what do we actually want the AI to tell us? It is not just about stringing clever phrases together, pretending to understand the question, or imitating a consciousness. If we want to improve models like OpenAI’s, we will not only need to train more, but also start defining what we are going to ask of them. We are at a point of almost brute force, and the experts ask that we stop marveling at the results and start being more critical of these tools.
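Hassan's call to “change the benchmarks to get closer to the practical case” can be sketched as follows. Everything here is invented for the illustration (the benchmark cases and both toy “models”): the only point is that a task-specific benchmark can rank a small, task-tuned model above a larger generic one, which parameter counts alone would never reveal.

```python
# Toy illustration (all names invented): score models on a concrete task
# instead of comparing parameter counts.
def evaluate(model_fn, benchmark):
    """Fraction of benchmark cases the model answers correctly."""
    hits = sum(1 for prompt, expected in benchmark if model_fn(prompt) == expected)
    return hits / len(benchmark)

# A practical-case benchmark: extract the invoice total from a line of text.
benchmark = [
    ("Invoice total: 120 EUR", "120"),
    ("Total due: 99 EUR", "99"),
]

# Two hypothetical models: a huge generic one and a tiny task-tuned one.
big_generic = lambda prompt: "unknown"           # many parameters, wrong task
small_tuned = lambda prompt: prompt.split()[-2]  # trivial, fitted to the task

print(evaluate(big_generic, benchmark))  # 0.0
print(evaluate(small_tuned, benchmark))  # 1.0
```

Defining the benchmark first, as Hassan asks, is what makes “better” mean something: the score measures the problem we chose, not raw model size.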

Understand the machines. “You have to understand how machines think. What they need and where they struggle most.” One example is language. “English is not the easiest language for an AI. It is not about the grammar, which is trivial for an AI, nor about how well structured the language is. Languages like English or Chinese are difficult because they use many metaphors; many abstract, almost colorful concepts that depend on context. That complexity is the hardest thing for a machine to understand.”

Keeping AI from getting out of hand is also a matter of not taking these metaphors out of context. The subtleties of human interactions will result in all kinds of inconsistencies and errors. Some of these can have serious repercussions. AI advances fast, but if we define each part of the process well, we will be able to define its impact much better.

Image | GR Stocks
