Science

That study on artificially intelligent “gaydar” is now under ethical review

The Journal of Personality and Social Psychology is reviewing a controversial study after a backlash from scientists and LGBTQ advocates.

Last week, a paper came out in the Journal of Personality and Social Psychology that claimed to show how off-the-shelf artificial intelligence tools can detect who is gay simply by looking at a photo of a person’s face.

It faced immediate backlash from artificial intelligence researchers and sociologists, as well as the advocacy organization GLAAD, who criticized the authors’ methodology and their grandiose conclusions.

The paper, titled “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images,” is now being re-examined, according to one of the journal’s editors, Shinobu Kitayama. “An ethical review is underway right at this moment,” Kitayama said when reached by email. He declined to answer further questions, but suggested the review’s findings would be announced in “some weeks.”

The study trained a computer model to recognize gay people based on photos of people from Facebook and a dating site. The researchers, Yilun Wang and Michal Kosinski of Stanford University, relied on stated sexual preferences as well as what Facebook groups people liked in order to determine who was gay or straight.

From The Economist, which first wrote about the study:

When shown one photo each of a gay and straight man, both chosen at random, the model distinguished between them correctly 81% of the time. When shown five photos of each man, it attributed sexuality correctly 91% of the time. The model performed worse with women, telling gay and straight apart with 71% accuracy after looking at one photo, and 83% accuracy after five.

In other words, the model wasn’t determining whether any given person was gay; it was shown two photos, told that one person was gay and one was straight, and asked to guess which was which.
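That framing matters for interpreting the accuracy figures. Here is a minimal sketch, not code or numbers from the paper: the helper function and the 80 percent sensitivity/specificity and 7 percent base rate below are illustrative assumptions, meant only to show why a score on a balanced, forced-choice pairing is not the same as reliably identifying gay people in a realistic population.

```python
# A minimal sketch with assumed numbers -- nothing here comes from the study.

def precision_at_base_rate(sensitivity, specificity, base_rate):
    """Of the people the classifier flags as gay, what fraction actually are?"""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Suppose a classifier, run on single photos, correctly flags 80% of gay
# people and correctly passes over 80% of straight people (assumed figures).
print(precision_at_base_rate(0.80, 0.80, 0.50))  # ~0.80 in a balanced, paired setup
print(precision_at_base_rate(0.80, 0.80, 0.07))  # ~0.23 if roughly 7% of people are gay
```

In the low-base-rate setting, most of the people such a system flags would be straight, which is part of why critics argued the headline figures overstate what the model can actually do.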

Kosinski and Wang claimed that their findings provided “strong support” for the idea that sexual orientation is caused by hormone exposure in the womb, an unsubstantiated and unusual leap for scientists to make after an incremental study. They also claimed to be doing the LGBTQ community a service by exposing how artificial intelligence could hypothetically be used to persecute gay people.

Unfortunately, the experiment had design flaws. Critics pointed out that the study included no people of color, which is common in machine learning studies but artificially increases a model’s ability to find patterns; that the data relied in part on which Facebook groups people liked to determine their sexual orientation; and that the researchers seemed to treat the contours of a person’s face as fixed, rather than something easily and frequently altered by makeup.
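As a rough sketch of that first objection, using purely simulated data and a made-up `make_photos` helper (nothing from the study), a classifier trained on a narrow, self-curated sample can score well by leaning on an incidental presentation cue that barely tracks the label in a broader population:

```python
# Toy simulation of a spurious "presentation cue" (grooming, styling, camera
# angle) that is strongly tied to the label only in the curated training sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, cue_strength):
    """Simulate feature vectors: one incidental cue plus unrelated noise features."""
    labels = rng.integers(0, 2, n)
    cue = labels * cue_strength + rng.normal(0, 1, n)   # styling cue, not biology
    noise = rng.normal(0, 1, (n, 5))                    # features unrelated to the label
    return np.column_stack([cue, noise]), labels

# Dating-site-style training sample: the cue tracks the label strongly.
X_train, y_train = make_photos(5000, cue_strength=2.0)
# Broader population: the same cue barely tracks the label.
X_test, y_test = make_photos(5000, cue_strength=0.2)

model = LogisticRegression().fit(X_train, y_train)
print(model.score(X_train, y_train))  # high in-sample accuracy
print(model.score(X_test, y_test))    # close to a coin flip out of sample
```

The in-sample score looks impressive; the out-of-sample score is close to chance, which is the kind of gap a homogeneous, self-selected dataset can hide.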

Philip N. Cohen, a sociologist at the University of Maryland, wrote that the authors had simply misinterpreted their own results. Greggor Mattson, an associate professor of sociology at Oberlin College and director of its Program in Gender, Sexuality, and Feminist Studies, wrote a takedown of the study titled “Artificial Intelligence Discovers Gayface. Sigh.” In it, he placed the study in a long line of experiments designed to find physical traits that correlate with sexual orientation, including “19th century measurements of lesbians’ clitorises and homosexual men’s hips.” He also noted that Kosinski — whose name you may remember from a slew of news stories about how big data elected Trump — is an adviser to an Israeli security firm called Faception that aims to use “facial personality profiling” to catch pedophiles and terrorists.

Kosinski and Wang have been defensive on social media, saying their critics either did not read the paper or want to ignore harsh truths. The former is certainly not true of Cohen and Mattson, who picked the paper apart, and as to the latter, Mattson pointed out that this seems to be the first time these researchers have taken an interest in gay rights.

There are broader implications at play. The study shows how bias can creep into machine learning through the data that is used to train the models. It also casts more doubt on the already-embattled scientific process of peer review. “The saddest news–for all of us–is the peer review process at the Journal of Personality and Social Psychology allowed Wang and Kosinski to fling centuries-old turds without noticing the stink, and ignore 50 years of sociological and feminist evidence in the process,” Mattson wrote. The fact that the journal is now taking another look at the study is encouraging.
