Friday, March 29

Diversity and inclusion are crucial for an AI project




Artificial intelligence (AI) is much more present than we perceive: more and more applications take advantage of it, and many of them enable social progress, increase organizations' chances of success and improve society's quality of life.

But as with all innovation, the benefits and opportunities come with new challenges that we must pay attention to if we want to avoid headaches or negative impacts: we have already seen cases where AI algorithms discriminated against Black people in facial recognition systems or relegated women in a job candidate selection process.

A huge risk, considering that AI intervenes in activities that range from authorizing a loan to diagnosing a disease and determining its treatment, and from detecting fraud in a financial transaction to identifying a suspect for the security forces.

A reflection of the training data

Why does this happen? The reasons are multiple. One of the most common is bias in the data set the solution learns from. These data are usually a reflection of society: a hiring system, for example, might be trained on records that are mostly of men, which would condition the system's future decisions in terms of gender equality. It may also happen that the tool behaves incorrectly in situations not foreseen during the construction phase.
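As a minimal, purely hypothetical sketch of that first point, the snippet below trains a simple classifier on synthetic "historical" hiring records in which past decisions favoured one gender; the learned model then recommends the favoured group at a higher rate even for identical qualifications. All column names, encodings and numbers are invented for illustration.

```python
# Hypothetical sketch: a model trained on historically biased hiring
# records reproduces that bias. Data and encodings are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic past records: candidates are equally skilled on average,
# but past decisions leaned towards gender == 1.
gender = rng.integers(0, 2, n)        # 0 / 1, hypothetical encoding
skill = rng.normal(0, 1, n)           # same distribution in both groups
hired = (skill + 0.8 * gender + rng.normal(0, 1, n) > 0.5).astype(int)

X = pd.DataFrame({"gender": gender, "skill": skill})
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill get different hire probabilities.
candidates = pd.DataFrame({"gender": [0, 1], "skill": [0.0, 0.0]})
print(model.predict_proba(candidates)[:, 1])  # probability of "hire" per group
```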


AI and machine learning solutions do not arise in a vacuum: they reflect both the data used to create them and the point of view of the engineering team in charge of their development, which makes the background, characteristics and experiences of each team member relevant.

There is a widespread belief that AI algorithms should correct the human biases present in the training data, which would in turn remove those biases from the solution. The reality is much more complex: machine learning models have the ability to generalize, making decisions on data different from that used in their training, which makes it difficult to foresee or prevent discriminatory or erroneous behaviour in the future.

The responsibility of the AI developer

The creators of AI and machine learning solutions therefore have an enormous responsibility: to guarantee that implementation and monitoring methods mitigate any negative impact the solutions could cause, and that they comply with the FAT premise (the English acronym for fair, accountable and transparent): that they are fair, can be held accountable for their operation and are transparent. Experience is accumulating, and today we have methodologies, good practices and techniques that can be applied before, during and after the implementation of the solution. In many geographies there are even specific regulatory frameworks on the subject.
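One example of the kind of check that can be run after implementation, sketched below under the assumption that decisions and group labels are available for auditing, is comparing a model's positive-decision rates across groups (a demographic-parity style measure). The data, group labels and threshold interpretation are hypothetical, not a prescribed methodology.

```python
# Minimal sketch of a post-implementation fairness check:
# compare positive-decision rates across groups.
import pandas as pd

def selection_rates(decisions: pd.Series, groups: pd.Series) -> pd.Series:
    """Share of positive decisions per group."""
    return decisions.groupby(groups).mean()

def disparate_impact(decisions: pd.Series, groups: pd.Series) -> float:
    """Ratio between the lowest and highest group selection rate."""
    rates = selection_rates(decisions, groups)
    return rates.min() / rates.max()

# Audit on hypothetical model output.
audit = pd.DataFrame({
    "decision": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
})
ratio = disparate_impact(audit["decision"], audit["group"])
print(f"disparate impact ratio: {ratio:.2f}")  # values well below 1 warrant review
```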

Diversity and inclusion are two drivers that must be present from the design of the solution: it is essential that the views of people with different lifestyles, backgrounds and points of view are considered at all times during the project. But bringing together diverse AI teams is no easy task: according to a 2020 World Economic Forum report, only 26% of data and AI professionals are women. It is true that organizations such as AI4diversity exist to break these paradigms, and that companies such as NTT DATA are putting more and more effort into hiring and training professionals with a diverse approach, but there is still a long way to go.


AI has the potential to improve people's quality of life in numerous ways, some already known and others waiting to be revealed in the near future. The only way to make that potential a reality is to ensure that solutions are developed without prejudice and include human beings in all their diversity.

By Lluis Quiles Ardila, Artificial Intelligence Director at NTT DATA Europe & LATAM






diarioti.com
