
Why can artificial intelligence be racist and sexist?




November 8, 2018 | Updated November 8, 2018, 18:36

Peruvian researcher Omar Florez is preparing for a "very, very close" future in which the streets will be full of surveillance cameras that recognize our faces and collect information about us as we move through the city.

He explains that they will do so without our permission, because these are public spaces and most of us do not usually cover our faces when we leave the house.

Our face will become our password: when we enter a store, it will recognize us and check whether we are new or repeat customers, or which places we visited before walking through its doors. The treatment that company gives us will depend on all the information it collects.

Florez wants to prevent aspects such as our sex or skin color from becoming part of the criteria these companies evaluate when deciding whether we deserve a discount or some other special attention. Something that can happen without the companies themselves even noticing.

Artificial intelligence is not perfect: even if it is not programmed to do so, the software can learn on its own to discriminate.

Florez is working on an algorithm that enables facial recognition while hiding sensitive data such as race and gender. | OMAR FLOREZ

This engineer was born in Arequipa 34 years ago, holds a PhD in computer science from Utah State University (USA) and currently works as a researcher at the bank Capital One.

He is one of the few Latin Americans studying the ethical aspects of machine learning, a process he defines as "the ability to predict the future with past data using computers."

It is a technology based on algorithms that is used to develop driverless cars or to detect diseases such as skin cancer, among other applications.

Florez is working on an algorithm that lets computers recognize faces but not decipher a person's gender or ethnic origin. His dream is that, when that future arrives, companies will include his algorithm in their computer systems to avoid making racist or sexist decisions, even without knowing it.

We always say that we cannot be objective precisely because we are human. We tried to entrust that task to machines so they would be, but it seems they cannot be either…

Because they are programmed by a human being. In fact, we recently realized that the algorithm itself is an opinion. I can solve a problem with algorithms in different ways, and each way, in some sense, embeds my own view of the world. In fact, choosing the right way to evaluate an algorithm is already an observation, an opinion about the algorithm itself.

Let's say I want to predict the likelihood that someone will commit a crime. So I collect photos of people who have committed crimes, where they live, their race, their age, and so on. Then I use this information to maximize the algorithm's accuracy and predict who might commit a crime later, or even where the next crime might happen. These predictions can lead the police to focus more on areas where there happen to be more people of African descent, because a greater number of crimes are recorded in that area, or to start stopping Latinos on the assumption that their documents may not be in order.

So, for someone who is in the country legally, or who is of African descent and lives in that area but has committed no crime, it will be twice as hard to shake off the stigma the algorithm assigns. Because you belong to a family, a distribution, for the algorithm, it is harder for you to leave that family or distribution. In a way, the algorithm ends up negatively shaping the very reality that surrounds you. Basically, we have encoded the stereotypes that we as human beings already hold.
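A toy simulation helps make that loop concrete. The sketch below uses entirely synthetic numbers (not from any study Florez cites): extra patrols are sent wherever recorded crime is higher, and because more patrols produce more records, an area with an identical real crime rate ends up looking far worse after a few iterations.

```python
# Purely synthetic sketch of the feedback loop described above.
import numpy as np

true_crime_rate = np.array([0.10, 0.10])   # two areas with the *same* real crime rate
recorded = np.array([12.0, 10.0])          # historical records: area 0 was patrolled slightly more

for year in range(5):
    target = int(np.argmax(recorded))                 # "algorithm": send extra patrols where records are higher
    recorded += 20 * true_crime_rate                  # baseline reporting happens in both areas
    recorded[target] += 80 * true_crime_rate[target]  # extra detections where the patrols went
    print(f"year {year}: extra patrols to area {target}, records = {recorded.round(1)}")
```

After a few rounds, area 0 accumulates several times more recorded crime than area 1, even though nothing about the underlying reality distinguishes them.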

The Peruvian researcher's hope is that companies using facial recognition programs in the future will adopt his algorithm. | OMAR FLOREZ

So this subjective element is in the criteria you select when programming the algorithm?

Exactly. There is a chain of processes involved in building a machine learning algorithm: collecting the data, selecting which features matter, choosing the algorithm itself, then testing it to see how it works and to reduce errors, and finally releasing it to the public. We have realized that prejudice can creep into each of these stages.

In 2016, a ProPublica investigation found that the judicial system of several US states used software to determine which defendants were most likely to reoffend. ProPublica discovered that the algorithms favored whites and penalized blacks, even though the questionnaire used to collect the data contained no questions about skin tone… In a way, the machine guessed it and used it as an assessment criterion even though it was not designed to do so, right?

What happens is that there is data that already encodes race and you do not even realize it. For example, in the United States we have postal codes. There are areas where only, or mostly, African-American people live. In southern California, for example, mostly Latino people live. So, if you use the postal code as a feature in a machine learning algorithm, you are also encoding an ethnic group without realizing it.
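A small sketch can illustrate the proxy effect he describes. In the hypothetical example below (synthetic data, invented group shares per postal code), a classifier that is never given a race column recovers group membership from the postal code alone, well above chance.

```python
# Synthetic sketch: a model with no "race" column can still recover group
# membership from the zip code, because residential segregation makes the
# zip code a proxy for the sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 5 zip codes, each with a made-up group mix.
zip_codes = rng.integers(0, 5, size=5000)
p_group_a = np.array([0.9, 0.8, 0.5, 0.2, 0.1])   # share of group A per zip code
group = rng.random(5000) < p_group_a[zip_codes]    # True = group A

# Train a classifier that only sees the zip code (one-hot encoded).
X = np.eye(5)[zip_codes]
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

print("accuracy recovering the group from zip code alone:",
      round(clf.score(X_te, y_te), 2))   # well above the 50% a blind guess would get
```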

Is there a way to avoid this?

Obviously, at the end of the day, the responsibility falls on the human being who programs the algorithm and on how ethical that person is. That is, if I know my algorithm will have 10% more error if I stop using something that might be sensitive for characterizing an individual, then I simply take it out and accept the consequences, perhaps economic ones, that my company may face. So there is an ethical barrier around deciding what goes into the algorithm and what does not, and it often falls on the programmer.

It is assumed that algorithms just process large amounts of information and save us time. Is there no way to make them infallible?

No, certainly not. They are always an approximation of reality, so it is normal for them to have a certain degree of error. However, there is currently very interesting research that explicitly penalizes the presence of sensitive data. A human being basically chooses which data may be sensitive, and the algorithm stops using it, or uses it in a way that shows no correlation. Frankly, though, to the computer everything is just numbers: a 0, a 1, or some value in between; it carries no meaning. Although there is a lot of interesting work that helps us try to avoid prejudice, there is an ethical part that always falls on a human being.
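One common way to "explicitly penalize the presence of sensitive data," roughly in the spirit Florez describes, is to add a penalty on the correlation between the model's predictions and the sensitive attribute. The sketch below is a minimal illustration with synthetic data and an assumed squared-covariance penalty; it is not his actual method.

```python
# Minimal sketch: linear regression trained by gradient descent, with an added
# penalty (squared covariance) that pushes predictions to be uncorrelated with
# a sensitive attribute. Data and penalty form are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
sensitive = rng.integers(0, 2, size=n).astype(float)   # e.g. a protected group flag
x_proxy = sensitive + rng.normal(0, 0.5, size=n)        # feature correlated with it
x_other = rng.normal(0, 1, size=n)                      # legitimate feature
X = np.column_stack([x_proxy, x_other, np.ones(n)])
y = 2.0 * x_other + 0.5 * sensitive + rng.normal(0, 0.1, n)

def fit(lam, steps=3000, lr=0.05):
    w = np.zeros(X.shape[1])
    s_centered = sensitive - sensitive.mean()
    for _ in range(steps):
        pred = X @ w
        err = pred - y
        cov = pred @ s_centered / n                     # covariance with the sensitive attribute
        grad = X.T @ err / n + lam * 2 * cov * (X.T @ s_centered) / n
        w -= lr * grad
    pred = X @ w
    return np.corrcoef(pred, sensitive)[0, 1], np.mean((pred - y) ** 2)

for lam in (0.0, 10.0):
    corr, mse = fit(lam)
    print(f"penalty={lam:>4}: correlation with sensitive attr = {corr:+.2f}, task error = {mse:.3f}")
```

Raising the penalty weight drives the correlation toward zero at the cost of some task error, which is exactly the trade-off between accuracy and fairness he mentions.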

Are there areas that you, as an expert, believe should not be left to artificial intelligence?

I think that at this point we should be prepared to use the computer to assist, but not to automate. The computer could tell you: these are the cases you should process first in the justice system. But it should also tell you why. This is called interpretability, or transparency, and machines should be able to explain the reasons that led them to make a given decision.
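A very simple form of the transparency he asks for is available with linear models, where each decision can be decomposed into per-feature contributions. The sketch below uses made-up feature names and synthetic data purely to show the idea; it is not tied to any particular system.

```python
# Sketch of "the machine should tell you why": with a linear model, each
# prediction decomposes into per-feature contributions (coefficient x value).
# Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_offenses", "age", "months_employed"]   # hypothetical features
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -0.5, -1.0]) + rng.normal(0, 0.5, 500)) > 0

clf = LogisticRegression().fit(X, y)

case = X[0]
contributions = clf.coef_[0] * case
print("prediction:", clf.predict([case])[0])
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>16}: {c:+.2f}")    # which features pushed the decision, and by how much
```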

So computers should decide based on patterns, but not on stereotypes? Aren't stereotypes useful for the system to detect patterns?

If, for example, you want to minimize error, it can be numerically convenient to use those prejudices, because they give you a more accurate algorithm. However, the programmer has to realize that there is an ethical component to this. There are already regulations that prohibit using certain features for things like credit analysis, or even using video for security, but they are at a very early stage. Perhaps that is what we need: to recognize that reality is unjust and that there is a lot of prejudice in it.

It is interesting that, despite this, some algorithms let us try to minimize that level of prejudice. That is, I can use skin tone, but without giving it more weight, or giving it the same weight for every ethnic group. So, to answer your question: yes, you might think that using that data will give more accurate results, and many times it does. But again there is the ethical component: I may want to sacrifice a certain level of accuracy in order not to give users a bad experience or to avoid relying on any prejudice.

The technology behind driverless cars uses machine learning. | GETTY IMAGES

Amazon's experts realized that a tool designed to screen job applicants discriminated against résumés that included the word "woman" and favored terms more often used by men. This is quite striking, because to avoid that bias you would have to work out which expressions men use more often than women in their résumés.

Even for a human being that is hard to grasp.

But at the same time, we are now trying to avoid drawing gender distinctions, saying that words or clothes are not male or female but something we can all use. Machine learning seems to go in the opposite direction, since it has to recognize the differences between men and women and study them.

Algorithms only capture what happens in reality, and the reality is that men use certain words that women may not. And the reality is that people sometimes respond better to those words, because it is men who are doing the evaluating. So, to put it another way, doing otherwise would go against the data. This problem can be avoided by collecting the same number of résumés from men and women. That way the algorithm assigns the same weight to the words used by both groups. If you simply take the 100 résumés you happen to have on the table, there may be only two from women and 98 from men. Then you create prejudice, because you are modeling only what happens in the men's space for that job.
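The balancing step he describes can be as simple as resampling so that each group contributes the same number of examples before training. The sketch below uses a hypothetical DataFrame and column names, invented here for illustration.

```python
# Sketch of the fix described above: if 98 of 100 résumés come from men,
# oversample the under-represented group so both carry equal weight in training.
# The DataFrame and column names are assumptions for illustration.
import pandas as pd

resumes = pd.DataFrame({
    "text":   [f"resume {i}" for i in range(100)],
    "gender": ["female"] * 2 + ["male"] * 98,      # the imbalanced pile on the table
})

target = resumes["gender"].value_counts().max()    # size of the largest group
balanced = (
    resumes.groupby("gender", group_keys=False)
           .apply(lambda g: g.sample(target, replace=True, random_state=0))
)

print(balanced["gender"].value_counts())           # equal counts per group after oversampling
```

Reweighting the loss per group would achieve a similar effect without duplicating rows; either way the point is that the model no longer learns weights only from "the men's space."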

So this is not a science for people worried about being politically correct, because you have to dig into the differences…

You have touched on the key point, which is empathy. The stereotype of the engineer is someone very analytical and maybe even a bit asocial. And it turns out we are now starting to look for things in engineers that we used to think were not so relevant, or were merely nice to have: empathy, ethics… We need to develop these qualities because we make so many decisions during the process of implementing an algorithm, and there is often an ethical component. If you are not even aware of that, you will not notice it.

Florez says that our face will soon be our password. | GETTY IMAGES

Do you notice differences between an algorithm designed by one person and one designed by 20?

In theory, an algorithm built by more people should have less prejudice in it. The problem is that this group is often made up of people who are very similar to one another. Maybe they are all men, or all Asian. It may be good to have a woman on the team to notice things the rest of the group does not even see. That is why diversity is so important right now.

Can we say that the algorithm reflects the prejudices of its author?

Yes.

And that there are algorithms with prejudices precisely because of the low diversity among those who build them?

Not only because of that, but it is an important part. I would say it is partly due to the data, which reflects reality. For the past 50 years we have been trying to create algorithms that reflect reality. Now we have realized that, many times, reflecting reality also reinforces stereotypes about people.

Do you think there is enough awareness in this sector that algorithms can carry prejudice, or is that something not given much importance?

At the practical level, it has not been given much importance. At the research level, many companies are beginning to investigate the issue seriously, creating groups known as FAT, for Fairness, Accountability and Transparency.
