Behind the hype

How to navigate the coming A.I. hypestorm

Many machine learning studies don’t actually show anything meaningful, but they spread fear, uncertainty, and doubt.

Here's what you need to know about every way-cool and/or way-creepy machine learning study that has ever been or will ever be published: Anything that can be represented in some fashion by patterns within data — any abstract-able thing that exists in the objective world, from online restaurant reviews to geopolitics — can be “predicted” by machine learning models given sufficient historical data.

At the heart of nearly every foaming news article starting with the words “AI knows ...” is some machine learning paper exploiting this basic realization. AI knows if you have skin cancer. AI beats doctors at predicting heart attacks. AI predicts future crime. AI knows how many calories are in that cookie. There is no real magic behind these findings. The findings themselves are often taken as profound simply for having way-cool concepts like deep learning and artificial intelligence and neural networks attached to them, rather than because they are offering some great insight or utility — which most of the time, they are not.


Take, for instance, a study recently posted to the Open Science Framework website, a repository for articles that have been reviewed and approved by journals but have not yet appeared in print, with the title “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” The foaming news article translation of this is, “AI knows if you're gay,” which sounds like a leap forward in computer ability but is in fact absolutely banal. That machine learning does a pretty good job of picking out gay people from pictures of faces is a trivial finding that has little to no inherent meaning.

To understand why this apparently bombastic finding is actually boring, we need to look into the so-called “black box” that surrounds machine learning processes. How does this study in particular go about deriving gayness from images of faces? Pretty much the same way most machine learning algorithms do any other visual recognition tasks. This feat, where the computer model makes a guess based on what it knows from looking for patterns in previous examples, is referred to in machine learning as prediction; you feed a model 35,000 images of dogs, labeled as dogs, then give it a new, unlabeled image, and the model will “predict” if it’s a photo of a dog.
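In code, that train-then-predict pattern is only a few lines. Here's a minimal sketch using scikit-learn on made-up numbers; the dog labels, the feature counts, and the array contents are all placeholders, not anything from a real study.

    # A minimal sketch of the train-then-predict pattern, on made-up data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Pretend each image has already been reduced to a row of numeric features.
    X_train = np.random.rand(35_000, 128)       # 35,000 "labeled" examples
    y_train = np.random.randint(0, 2, 35_000)   # 1 = dog, 0 = not a dog

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    new_image = np.random.rand(1, 128)          # one new, unlabeled example
    print(model.predict(new_image))             # the model's "prediction"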

Robot waiting for elevator.

Of course, the machine has to translate the images into data first. So the basic playbook is to take a bunch of examples of some phenomenon to be detected, reduce them to data, and then train a statistical model. Faces reduce to data just like any other sort of image reduces to data. That's just what a digital image is: a data representation. An image file is a great big matrix of numbers and great big matrices of numbers are the currency of machine learning.
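If that sounds abstract, here's roughly what it looks like to treat an image as data; Pillow and NumPy are my assumptions, and face.jpg is a placeholder filename.

    # An image file really is just a big grid of numbers once it's loaded.
    import numpy as np
    from PIL import Image

    pixels = np.asarray(Image.open("face.jpg").convert("L"))  # grayscale
    print(pixels.shape)    # e.g. (224, 224): one number per pixel
    print(pixels[:3, :3])  # the top-left corner of the matrix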

Generally, to train a machine learning model — the thing that's eventually tasked with making predictions from previously unseen observations — we take these giant matrices and add our own labels to them. This is manual labor, generally. To train a model that can predict gayness, we start with a bunch of images and then add “gay”/“not gay” labels to them. These labels represent our ground truth or human-defined truth, what we know to be true (or what we say to be true; critics of the paper rightly pointed out that the “gay” images in both the training data set and the test data set were not representative of gay people) about the data that we already have.
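In practice the labeling step is nothing fancier than keeping a column of labels lined up with the feature matrix. A sketch, with invented numbers:

    # "Ground truth" is just a label array kept in the same order as the data matrix.
    import numpy as np

    feature_matrix = np.random.rand(4, 4096)  # one row of features per face
    labels = np.array([1, 0, 0, 1])           # 1 = "gay"-labeled, 0 = "not gay"-labeled

    # Training data is nothing more than these two arrays, row for row.
    assert feature_matrix.shape[0] == labels.shape[0]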

The researchers behind the current paper sourced the images used to train their machine learning model from an unnamed US dating website where members advertised their sexual orientation by specifying the gender of the partners they were seeking. They ended up with 130,741 images of 36,630 men and 170,360 images of 38,593 women between the ages of 18 and 40. Gay and heterosexual members were represented equally in the image sets. This image collection formed the foundation of what AI scientists call training data: the data that the computer learns from.

The researchers then employed Amazon Mechanical Turk workers to verify basic things about the face dataset, ensuring that the faces were all white, adult, and the same gender that the face-owner reported on their dating profile. (The study doesn’t report that the AMT workers received any particular training on discerning genders from faces.) Finally, the images were all fed into a common and free-to-use face-recognition tool called VGG-Face, which is based on a deep-learning model developed at Oxford University. VGG-Face took each facial image and translated it into a list of 4,096 scores corresponding to the facial features it deems important — that is, the features it has had the most success with in telling one face from another.
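The paper used VGG-Face itself; as a rough stand-in, here's how you could pull a 4,096-number description of a face out of torchvision's generic VGG-16, which happens to have a 4,096-wide penultimate layer. The model weights, the preprocessing, and face.jpg here are my assumptions, not the study's actual pipeline.

    # Sketch: extracting 4,096 scores per face from a pretrained VGG-style network
    # (torchvision's generic VGG-16 standing in for VGG-Face).
    import torch
    from PIL import Image
    from torchvision import models, transforms

    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
    # Keep everything up to the second 4,096-wide fully connected layer.
    vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:5])

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    image = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = vgg(image)
    print(scores.shape)  # torch.Size([1, 4096])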

What the researchers wound up with is a dataset where each item consisted of 4,096 independent variables (extracted facial features, such as nose shape and grooming style) and one dependent variable (sexual orientation). They were able to cull the list of independent variables down to 500 using a standard technique called singular value decomposition, a mathematical technique that roots out less important or less impactful variables from a matrix (it’s also used in image compression). This reduced dataset was then used to train a machine learning model.
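That reduction step is nearly a one-liner with scikit-learn's TruncatedSVD; the 10,000-by-4,096 matrix below is random placeholder data rather than real face features.

    # Sketch: shrinking 4,096 features per face down to 500 with SVD.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    features = np.random.rand(10_000, 4096)   # placeholder feature matrix
    svd = TruncatedSVD(n_components=500)
    reduced = svd.fit_transform(features)
    print(reduced.shape)                      # (10000, 500)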

I want to pause here to emphasize how unmagical the training phase of a machine learning algorithm is because it is endlessly black boxed in reporting about way-cool machine learning studies like this one. When I say independent and dependent variables, what I'm implying is that every observation — like a face represented by a list of observed features — in one of these machine learning datasets is really an equation. It's a statement of truth. Let's imagine that I have only five independent variables in my dataset, meaning five facial features. Let’s say they are facial hair coverage (x1), eye color (x2), nose shape (x3), hair style (x4), and dimples/not-dimples (x5). Each face observation then takes the form of an equation like this:

x1a + x2b + x3c + x4d + x5e = y

The x1 and x2 etc. are independent variables, and they correspond to actual observations. The a’s and b’s and c’s in this equation are coefficients. They represent the relative importance of each one of these things in determining y, which can only take one of two values: 1 (gay) or 0 (not-gay). The y is the dependent variable: it depends on the values of the x observations and on the relative importance of those observations.

Once you plug in the observed values of your independent variables, you can start to work out the coefficients.

2a + .78b + 10c + (-1)d + .2e = 1

Represented as a row in a mathematical matrix, which is just a grid of numerical values, or a list of lists, that equation would look like this:

2       .78       10       -1       .2       1

The idea is that you have tens of thousands of rows of data that look like that, and somehow you want to come up with values for a, b, c, etc. that solve each one of these tens of thousands of equations as closely as possible. In other words, we're minimizing error across many different equations that happen to share variables.

Eventually, we have values for all of the coefficients that can be plugged into each row while best approximating the value of the dependent variable. This is the same thing as minimizing a mathematical summation. A summation in math can be just adding some numbers; here it's adding up the error from each of those equations. Either way, it isn't very esoteric.
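Here's what that error-minimizing step looks like on toy numbers: an ordinary least-squares fit that recovers made-up coefficients from thousands of rows. Everything in it is invented for illustration.

    # Sketch: finding coefficients (a, b, c, ...) that best satisfy many equations at once.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((10_000, 5))                       # 10,000 rows of five features each
    true_coef = np.array([10.0, 3.0, 0.5, 1.0, 2.0])  # the "answers" we hope to recover
    y = X @ true_coef + rng.normal(0, 0.1, 10_000)    # noisy dependent variable

    # Least squares picks the coefficients that minimize the summed squared error.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coef.round(2))                              # close to [10, 3, 0.5, 1, 2]

Those recovered numbers are exactly the kind of coefficients that show up in the "trained model" equation below.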

Once you have values for the a’s and b’s and c’s, the coefficients or relative importance of each feature, you can go out and find some new data that describes observations your machine learning model hasn’t seen before and that is so-far unlabeled. A machine learning model basically just looks like the first equation above, except now we have some good values for the coefficients. It might look like this:

x1(10) + x2(3) + x3(.5) + x4(1) + x5(2) = y

We just take some new x values, corresponding to the features of new observations, and we get a value for y. That value is the prediction! Usually you will then evaluate the strength of your model by testing it out on some reserved segment of labeled data.
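That evaluation step, holding back some labeled data and scoring the model against it, looks something like this; again, the data is synthetic and the split sizes are arbitrary.

    # Sketch: scoring a model against a reserved, labeled test set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1_000, 500)
    y = np.random.randint(0, 2, 1_000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(model.score(X_test, y_test))  # fraction of held-out labels predicted correctly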

That's the whole idea. This is the black box. Neural networks add some cool twists and complexity, but the principle is always one of minimizing predictive error. And that's what I mean when I say you can do this to anything.

Does the mere fact that it’s simple make it less powerful or meaningful, though? Yeah, kind of. It means that, to some extent, we already know what is going to happen in a study like this. As long as we allow that there is probably one facial feature (it only takes one) that occurs more frequently in gay-labeled data points, we have to allow that that feature can be used to make predictions with at least some accuracy. That doesn’t mean that everything is always predictable, but it means that if we’re clever about picking features we can usually succeed at making a model that predicts something.
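A toy example of that point: if a single feature is even slightly more common in one group, a trivial rule already beats a coin flip. The numbers below are entirely synthetic.

    # Sketch: one slightly skewed feature is enough for better-than-chance "prediction".
    import numpy as np

    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, 100_000)
    # A binary feature present in 60% of group 1 and 40% of group 0.
    feature = rng.random(100_000) < np.where(labels == 1, 0.6, 0.4)

    accuracy = np.mean(feature.astype(int) == labels)  # rule: predict label = feature
    print(accuracy)                                    # about 0.6, above the 0.5 of chance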

Robot sitting on bench, using laptop.

What’s rarely seen in a study like this is a failed model. It happens when the math just doesn’t work out, when it isn’t possible to produce useful values for the coefficients (the a’s and b’s) given the observations (the x’s). When this happens it’s really easy to go in and tweak things until we’re able to get some coefficients, just by adding and removing different features. Like, well, maybe if we just ignore nose shape, or maybe we ignore facial hair. (The researchers in the gaydar study deliberately ignored people who weren’t white, which automatically ups the accuracy of the model. This is actually pretty typical of machine learning studies; the short explanation being that people of color are not adequately represented in the available data. The explanation for why that is would take another article.) If selectively ignoring variables doesn’t do the trick, it may just be a matter of adding more data, which has the effect of making each observation a little less meaningful when it comes to training the model.

To see which features were having the most predictive impact, the researchers tried out their model on images where some part of the face had been blocked from view. This basically just told them that it was indeed facial features that were having the most impact rather than whatever junk happened to be in the background of the image. By isolating some specific facial features and using only those features as training data, they were able to surmise that jawline shape had a particularly outsized impact on the algorithm's predictive power. (In another part of the study, an analysis showed that gay men tended to have bigger foreheads and narrower jaws than other men, which the researchers interpreted as looking more feminine or “gender atypical” because they stood out from composite averages of male-classified faces. We should note that it is unclear how much of the model’s prediction was based on facial structure versus styling, makeup, and pose; the composite male “gay” face created by the model almost appears as if it is leaning forward toward the camera, suggesting the model may have been picking up on a certain style of selfie rather than a fixed trait.)
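The masking experiment itself is simple to describe in code: black out one region of the face, rerun the pipeline, and see how much accuracy drops. This is only a sketch of the idea; extract_features, model, images, and labels stand in for the pieces sketched earlier, not the authors' actual code.

    # Sketch of an occlusion test: hide one face region at a time and watch accuracy change.
    # `extract_features` and `model` are placeholders for a feature extractor and a
    # trained classifier; `images` and `labels` are placeholder arrays.
    import numpy as np

    def occlude(image, top, left, size):
        """Return a copy of the image with a square region blacked out."""
        blocked = image.copy()
        blocked[top:top + size, left:left + size] = 0
        return blocked

    def accuracy_with_region_hidden(images, labels, model, extract_features, top, left, size=50):
        hidden = [occlude(img, top, left, size) for img in images]
        features = np.stack([extract_features(img) for img in hidden])
        return model.score(features, labels)

    # If hiding the jaw area hurts accuracy more than hiding the background does,
    # the jawline is doing more of the predictive work.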


The gaydar study was criticized for its methodology as well as the researchers’ paternalistic rhetoric; it is now under ethical review. But there’s a lesson for anyone interested in the state of the art of artificial intelligence. This study, like many others we have seen and are likely to see in the future, didn't need a machine learning component. Once some general differences between gay/not-gay faces were found in the data, the machine learning conclusion (that it's possible for a computer program to identify gay faces) became trivial. Ultimately, the researchers achieved an 81 percent success rate in using their model to classify gay-labeled male faces in one-to-one pairings between gay-labeled faces and straight-labeled faces. That is, the classifier was shown two images while knowing that one of the two images was definitely labeled as gay. This is sort of like saying the model was 81 percent accurate, but the model has a powerful advantage in that it knows that one of the two faces must be gay-labeled.
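To see why the forced-choice setup flatters the number, here's a sketch of pairwise scoring on synthetic data: the model only has to say which of two faces it scores higher, knowing one of them is guaranteed to be the target.

    # Sketch: "pick the gay-labeled face out of a pair" is an easier test than
    # classifying single faces, because one of each pair is guaranteed to be the target.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2_000, 20))
    y = rng.integers(0, 2, 2_000)
    X[y == 1] += 0.5                           # give one group a slight feature shift

    model = LogisticRegression(max_iter=1000).fit(X, y)
    prob = model.predict_proba(X)[:, 1]

    print(model.score(X, y))                   # plain one-image accuracy

    # Forced choice: pair up scores and count how often the target face ranks higher.
    target, other = prob[y == 1], prob[y == 0]
    pairs = min(len(target), len(other))
    print(np.mean(target[:pairs] > other[:pairs]))  # typically higher than the score above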

If I haven't said anything yet about the machine learning model making better predictions than humans, that's because it's completely meaningless. Like, obviously the computer is going to do a better job because it's solving a math problem and people are solving a people problem.

Expect more of this. A lot more. There are many more things to predict, and so many cheap intuitions to validate. In the background there are still a great many statisticians and computer scientists advancing the field of machine learning and solving actual problems with it — diagnosing medical conditions, epidemiological forecasting, detecting credit card fraud, deciphering speech and handwritten text, predicting bike sharing demand — and when the hype fades away, they'll still be there.
