AI algorithms select the news we read and the ads we see, and in some cases even drive our cars. But there is something very wrong with these algorithms: they rely on data collected by and about humans, and so they can reproduce our worst prejudices. For example, hiring algorithms can automatically reject applicants whose names sound non-white, and face recognition software is often far worse at recognizing women and non-white people than at recognizing white men. A growing number of scientists and institutions are reporting these and other issues and raising concerns about AI's potential to cause harm.
Brian Nord is one such researcher, and he speaks from his own experience about how AI algorithms can cause harm. Nord is a cosmologist at Fermilab and the University of Chicago who uses artificial intelligence (AI) to study the cosmos, and he is researching an idea for a "self-guided telescope" that could form and test hypotheses with the help of a machine-learning algorithm. At the same time, he wrestles with the idea that his algorithms may one day be biased against him, and even used against him, and he is working to create a coalition of physicists and computer scientists to oversee the development of AI algorithms.
Nord says that building the coalition is a way to address potential harms beyond race and ethnicity, and to show people that researchers are fully aware of the problems and prejudices their work may create.
Recently, an algorithm was created that used neural networks to try to speed up the selection of candidates for doctoral programs. It was trained on historical data that included applicants who had been accepted or rejected by universities in the past. Those past decisions were made by faculty members with their own prejudices, so anyone developing such an algorithm will inevitably encounter these biases and must correct for them.
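How a model inherits bias from historical decisions can be illustrated with a small sketch. The data below is entirely fabricated for illustration, and the "model" is just a frequency table, not the neural network described above; the point is only that a predictor fit to prejudiced past outcomes reproduces them for equally qualified applicants:

```python
# Toy sketch (assumed, illustrative data): a model trained on biased
# historical admission decisions reproduces that bias.
from collections import defaultdict

# (group, qualified, admitted) records. Applicants are equally qualified,
# but group "B" was historically admitted far less often than group "A".
history = ([("A", True, True)] * 80 + [("A", True, False)] * 20
         + [("B", True, True)] * 30 + [("B", True, False)] * 70)

# "Train": estimate the historical admission rate per (group, qualified) pair.
counts = defaultdict(lambda: [0, 0])   # key -> [admitted_count, total_count]
for group, qualified, admitted in history:
    counts[(group, qualified)][0] += admitted
    counts[(group, qualified)][1] += 1

def predict(group, qualified):
    """Admit if the historical majority for this profile was admitted."""
    admitted, total = counts[(group, qualified)]
    return admitted / total >= 0.5

# Two applicants with identical qualifications get different outcomes,
# purely because the training labels were biased.
print(predict("A", True))  # True
print(predict("B", True))  # False
```

Correcting for this requires more than better optimization: the bias lives in the labels themselves, which is why the text stresses auditing the historical data, not just the algorithm.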
Nord goes on to say: "We cannot predict all future uses of a technology, but we must ask questions at the beginning of the process, not later. A designated person could ask these questions, allowing the science to proceed without endangering people's lives. We also need policies at various levels that clearly decide how safe algorithms must be before they can be used on people or other living beings. We have seen cases where a homogeneous team developed an application or technology and did not see problems that a different team would have seen. We need people with different experiences to be involved in designing policies for the ethical use of AI."
Nord's biggest fear is that people who already have access to these technologies will continue to use them to subjugate people who are already oppressed. Researcher Pratyusha Kalluri has raised the same idea of power dynamics, saying it is what we see all over the world. Some cities are trying to ban the use of face recognition, but without a broader coalition we will not be able to keep this tool from making things worse; white supremacy, racism and misogyny are just some of the forces it can amplify. Kalluri adds: "If we do not promote a policy that prioritizes the lives of marginalized people, then they will continue to be oppressed."
Source of information: gizmodo.com