Code bias

The fight against racist algorithms

Can we teach our machines to unlearn racism?

The AI field has a big problem

Machine learning techniques keep creating racist algorithms
Researchers are scrambling to stop the trend, but it is extremely complex
The good news is that there are ways to prevent and un-teach algorithmic bias

Algorithms trained on massive piles of real-world data are often interchangeably and confusingly referred to as artificial intelligence, neural networks, and machine learning, as we all figure out how to navigate this new frontier of computing. Whatever they’re called, they are ubiquitous.

These algorithms make invisible choices that affect everything from your daily perusal of YouTube to the sentencing of convicted criminals, and that ubiquity means that when these algorithms don’t work correctly, they reproduce problems on an enormous scale. Researchers are now scrambling to figure out how to benefit from the powers of artificial intelligence without replicating human flaws, particularly biases such as sexism and racism.

But how do you help an algorithm unlearn racism? We’ve written before about how bias gets unwittingly baked into algorithms. In April, it was discovered that the viral app FaceApp, which promised to “transform your face using artificial intelligence,” was racist. The app let users upload a photo and then modify it with various filters, including “old,” “young,” and “hot.” The underlying algorithm was trained by exposing it to many examples of the same type of data over and over, in a process called machine learning. In this case, that meant feeding it what were likely thousands of photos of faces so that it could learn to recognize what a face was.

Unfortunately, this set of face photos was apparently mostly white, or else it was trained using data that reflected a preference for white features. This meant that a filter designed to make your face look “hot” translated to lighter skin, smaller noses, and rounder eyes.

The company behind the app, Wireless Lab, later removed the feature and apologized, blaming it all on the pitfalls of machine learning and pledging to correct the problem. But teaching an algorithm to not be racist is not easy. When The Outline reached out to Wireless Lab almost a month later, CEO Yaroslav Goncharov said it wasn’t ready to release a feature that could compute hotness for all races. “We don't have any updates yet,” he said in an email. “It is quite a time-consuming process.”

There are many other examples of algorithmic bias, in which algorithms help propagate inequity. A translation tool associated women with family and men with careers, while Google’s photo-tagging service mistakenly labeled black photo subjects as gorillas.

Often these types of mistakes aren’t due to an actual computing error or an evil cackling data scientist behind a partition. They occur when the algorithm is trained on data that doesn’t represent a population well enough, or when the algorithm is irresponsibly designed to optimize a singular type of decision.

The truth is that, in the brave new era of machine learning, engineers don’t totally understand how their own algorithms work. They create the conditions for learning, input data, and wait to see what the machine comes up with. What happens in between is a black box.

Preventing or correcting bias can seem impossible considering it works in the obscure subconscious of both humans and computers, but data scientists have been working on solving this problem for a while. The Fairness, Accountability, and Transparency in Machine Learning conference, or FAT/ML, held annually since 2014, brings together researchers working towards fairer guidelines and functionality for algorithms.

Prevention

One of the main tactics in the fight against algorithmic bias is to clean up the data before it’s even introduced into the system.

The most obvious correction is to ensure the data is representative. If it’s faces, include all races and ages. If it’s dogs, include all breeds. If it’s language, include colloquial sources like Facebook posts along with formal documents like those published by the U.N., a popular machine learning resource since the documents are often translated into multiple languages.
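
To make the idea concrete, here is a minimal sketch of one basic pre-training check, counting how often each group appears in a dataset and flagging any group that falls below a chosen share. The data, the “group” field, and the threshold are all hypothetical illustrations, not anything from the article.

```python
# Count group representation in a hypothetical labeled dataset and flag
# groups that fall below a chosen minimum share of the data.
from collections import Counter

# "group" stands in for whatever attribute needs coverage: race, age
# bracket, dog breed, text register, and so on.
training_data = [
    {"group": "A", "features": [0.1, 0.3]},
    {"group": "A", "features": [0.2, 0.1]},
    {"group": "A", "features": [0.4, 0.9]},
    {"group": "B", "features": [0.7, 0.5]},
]

counts = Counter(record["group"] for record in training_data)
total = sum(counts.values())
MIN_SHARE = 0.25  # arbitrary threshold for this illustration

for group, n in counts.items():
    share = n / total
    status = "OK" if share >= MIN_SHARE else "UNDERREPRESENTED"
    print(f"{group}: {n}/{total} ({share:.0%}) {status}")
```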

Teaching data scientists to prepare their data more carefully is an important step towards preventing unfair results, said Suresh Venkatasubramanian, one of the FAT/ML conference organizers, an associate professor in the School of Computing at the University of Utah, and a member of the board of directors of the ACLU’s Utah branch.

“A lot of the problems that came up in the bad uses of machine learning can be attributed to people just not thinking through the results,” he told The Outline.

That includes imagining the consequences of their choices, he said, and the larger context in which the results are going to be used. If you were designing a predictive hiring algorithm to pick which job applicants would be most likely to succeed, for example, and your training data was mostly made up of young white women, considering the broader social context of the algorithm’s decisions might mean recognizing that skew and taking steps to counter further homogeneity in your hiring practices.

Several research teams have constructed preprocessing methods for data sets that minimize disparate impact while maintaining relative accuracy. These equation-based methods include assigning more weight to underrepresented populations within the data set and duplicating data points in order to make up for underrepresentation.
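
As a rough illustration of the reweighting idea described above (the records, labels, and field names here are hypothetical, not drawn from any of those papers), each record can be given a weight inversely proportional to how common its group is, so that underrepresented groups carry as much total influence during training as overrepresented ones:

```python
# Assign per-record weights so each group contributes equally in aggregate.
from collections import Counter

records = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

counts = Counter(r["group"] for r in records)
n_groups = len(counts)
total = len(records)

for r in records:
    # total / (n_groups * group_count): rarer groups get larger weights.
    r["weight"] = total / (n_groups * counts[r["group"]])
    print(r)

# Each group A record gets weight 4 / (2 * 3) = 0.67 and the lone group B
# record gets 2.0, so both groups carry equal total weight. Oversampling is
# the same idea expressed by duplicating rows instead of weighting them.
```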

Reverse-engineering bias

The other main anti-bias strategy takes into account the fact that many machine-learned algorithms are opaque, whether for proprietary reasons or by the nature of their design. In these black-box algorithms, only the inputs and outputs are available; the actual decision-making process is not discoverable, even to the original engineers.

In order to deduce whether or not such an algorithm is discriminatory, scientists can measure how much the algorithm’s output depends on different categories of input data. If the output changes drastically when a single input value is changed, then that input category clearly weighs heavily in how the algorithm produces its output. And if that category is an attribute that should not be a contributing factor (race, for example, when predicting recidivism rates), then there are grounds to re-examine the algorithm.
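
A toy sketch of that kind of black-box probe might look like the following. The scoring function is a hypothetical stand-in for an opaque model that can be called but not inspected; the test records and attribute names are made up for illustration.

```python
# Probe a black-box scorer by flipping only the protected attribute on each
# test record and measuring how much the output moves.
def black_box_score(record):
    # Pretend this is an opaque risk score; here it (wrongly) uses race.
    return 0.5 * record["priors"] + (0.3 if record["race"] == "B" else 0.0)

test_records = [
    {"race": "A", "priors": 0},
    {"race": "A", "priors": 2},
    {"race": "B", "priors": 1},
]

def flipped(record):
    # Return a copy of the record with only the race field changed.
    other = "B" if record["race"] == "A" else "A"
    return {**record, "race": other}

shifts = [abs(black_box_score(r) - black_box_score(flipped(r)))
          for r in test_records]
avg_shift = sum(shifts) / len(shifts)
print(f"average score change when only race is flipped: {avg_shift:.2f}")
# A value near zero is what you would expect if race played no role.
```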

To this end, institutions of data computing have begun to call for more transparent algorithms. The Association for Computing Machinery released a list of “Principles for Algorithmic Transparency and Accountability” earlier this year, and the Institute of Electrical and Electronics Engineers is working on a set of guidelines of its own.

Continued improvements

Suchana Seth, a data scientist and Ford-Mozilla Open Web Fellow at the Data & Society research institute, is publishing a technical report later this year to make the case for algorithmic fairness even clearer to the larger data science community, and hopefully, to key players in the commercial field.

“The challenge is still to get all of this to percolate back into the industry,” she told The Outline. While big name companies like Microsoft and Google have installed anti-bias measures in the form of ethics boards, at many other smaller companies the responsibility for fairness rests solely upon the engineers, who aren’t necessarily considering such matters.

By speaking to the scientists who are designing the systems, Seth hopes to bypass the bureaucracy and impress the need for preemptive considerations regarding fairness directly onto the source.

“Even if we’re not able to remove bias completely, we might at least be able to specify the extent to which we think there might be bias, or the extent to which we might be able to counter that bias,” she said.
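
One common way to put a number on that extent is a disparate impact ratio: the rate of favorable outcomes for one group divided by the rate for another, where values well below 1.0 indicate disparity. The sketch below uses hypothetical decisions, and the 0.8 cutoff is only a widely cited rule of thumb, not a line from the article.

```python
# Quantify disparity between two groups' favorable-outcome rates.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),  # (group, 1 = favorable outcome)
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def rate(group):
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = rate("B") / rate("A")
print(f"selection rate A: {rate('A'):.2f}, B: {rate('B'):.2f}, ratio: {ratio:.2f}")
# Here: rate A = 0.75, rate B = 0.25, ratio = 0.33, far below the 0.8 rule
# of thumb, which would flag this system for closer scrutiny.
```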

It’s hard to overstate the significance of this effort. Structural inequality happens because government, corporate, and cultural institutions are predisposed to reward powerful groups and disenfranchise weak ones. If we aren’t careful, algorithms will do the same thing.