The Future

How to train artificial intelligence that won’t destroy the environment

The carbon footprint of machine learning is bigger than you think.

There’s been a reckoning in recent years when it comes to measuring bias in machine learning. We now know that these “unbiased” automated tools are actually far from unprejudiced, and there’s a growing demand that researchers think about how their products might screw over or endanger the lives of others before they unleash them on society. It’s not just the final products we should be worried about, however, but also the consequences of building them. Training these algorithms can quite literally poison our planet.

As the world burns in Facebook feeds and in backyards, the carbon footprints of even the most innocuous things are coming under scrutiny. It’s sparked debates around AC units, straws, face scrubs, plastic bags, air travel. But there are also mundane systems that imperceptibly rule our lives and that contribute to climate change — things like spam filters, translation services, search engine keywords, and smart assistants.

The foundation for these services is called natural language processing (NLP), a branch of artificial intelligence that aims to teach machines how to understand the nuances of human language. Training these language models to a useful standard requires a monster amount of computing power and electricity. Simply using tools powered by this technology isn’t setting the world aflame (no need to boycott autocorrect), but teaching the brains behind these tools can wreak real environmental damage if industry and academia don’t adopt greener practices.

“We were hoping to get the conversation started about how we as a community can start thinking about efficiency and not just higher and higher accuracy at the cost of everything else,” Ananya Ganesh, a co-author of a recent paper on the environmental consequences of deep learning, told me. Ganesh was part of a team of researchers at the University of Massachusetts Amherst that published the paper in June, examining the environmental impact of these models. The study found that training just one AI model produces as much carbon dioxide equivalent as nearly the lifetime emissions of five average American cars.

The findings were divisive among the community of AI experts; the study explored only one very specific way of training an AI model, training from scratch, which isn’t necessarily how the majority of machine learning researchers work. Still, there was an underlying consensus that urgent concern about machine learning’s contribution to climate change is warranted.

Sasha Luccioni, a postdoctoral researcher at the Mila AI Institute in Quebec, worked on a tool to help AI researchers estimate the carbon footprint of their machine learning models. She acknowledged that the University of Massachusetts paper was a bit of an edge case, because unlike the scenario used in the study, “very few people” train their models from scratch, and a lot of training is now done using cloud services from companies such as Google, Amazon, and Microsoft, which are mostly carbon-neutral or moving toward it. But Luccioni told me that the research serves as an important gateway into a crucial conversation around energy efficiency and AI. “It’s important to talk about these issues and bring them in as part of the standard conversation,” she said.

Traditionally, Luccioni said, machine learning has been largely a lab activity, revolving around solving a certain dataset or reaching a certain benchmark. How training these models affects the world outside the laboratory walls wasn’t something deeply considered. “Now more and more, it’s becoming a social issue,” she said. “There’s bias and ethics and fairness, there’s that whole debate that’s going on, and now the energy debate is starting as well.”

Ethics is still a relatively new talking point in mainstream machine learning circles — it’s not even a required course for most graduate programs — and so far the focus has largely been on how finished products might harm vulnerable communities. (For example, an algorithm sold by a health services company was found to be biased against black patients.) The ethical debate in machine learning often sidesteps conversations about the environment and the impact of the process that creates these flawed products.

“So far a huge amount of effort has gone toward trying to figure out how we design better algorithms to predict who should get a loan, or who should get bail, and is it ethical to have machines making these decisions,” Daniel Larremore, an assistant professor in the Department of Computer Science at the University of Colorado Boulder, said. “But there’s this other ethical component about the externality of computation itself: by doing so many computations, are we putting a lot of carbon into the atmosphere that wouldn’t otherwise be there?”

Last month, Luccioni, along with three other researchers in AI, submitted a paper — titled Quantifying the Carbon Emissions of Machine Learning — to the Climate Change AI workshop at the NeurIPS conference in Vancouver, hoping to force researchers to really soul-search around that exact question. It introduces the team’s newly developed Machine Learning Emissions Calculator, which lets researchers input their hardware, their runtime, and which company (Google, Amazon, or Microsoft) is providing the server needed to train their model. The tool then generates an estimate of both the raw carbon emissions produced and the emissions that remain once the provider’s carbon offsets are taken into account.

The tool alone isn’t a solution to energy-inefficient training, but it can give researchers a lens into the environmental weight of their decisions. For instance, the choice of location for the server has a direct impact on how much carbon dioxide is emitted: places like Canada and California are largely powered by renewable energy, unlike a massive, energy-hungry data center in Iowa. In North America, a server in Quebec can emit about 20 grams of carbon dioxide equivalent per kilowatt-hour of electricity used, compared to roughly 736 grams for a server in Iowa, as illustrated in Luccioni’s paper.
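
To make the arithmetic concrete, here is a minimal sketch in Python of the kind of estimate such a calculator produces. This is my own simplification, not the Mila team’s actual code; the 300-watt GPU and 100-hour runtime are purely illustrative, while the grid-intensity figures for Quebec and Iowa are the ones cited above.

```python
# Back-of-the-envelope estimate of training emissions: energy used by the
# hardware, multiplied by the carbon intensity of the grid powering the server.
# Illustrative simplification only; not the Machine Learning Emissions
# Calculator's actual code.

def estimate_emissions_g(gpu_power_watts: float, hours: float,
                         grid_gco2_per_kwh: float) -> float:
    """Return estimated grams of CO2-equivalent for one training run."""
    energy_kwh = (gpu_power_watts / 1000.0) * hours
    return energy_kwh * grid_gco2_per_kwh

# Hypothetical run: a single ~300 W GPU training for 100 hours.
quebec = estimate_emissions_g(300, 100, 20)   # Quebec grid, ~20 g CO2eq/kWh
iowa = estimate_emissions_g(300, 100, 736)    # Iowa grid, ~736 g CO2eq/kWh
print(f"Quebec: {quebec:,.0f} g CO2eq vs. Iowa: {iowa:,.0f} g CO2eq")
# Quebec: 600 g CO2eq vs. Iowa: 22,080 g CO2eq, roughly 37 times the emissions
# for the same job.
```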

Lukas Biewald, the co-founder of Weights and Biases, launched his company with a spirit of sharing in mind; one of the things his company does is help people share their research with each other. Biewald said that transfer learning — when one company puts out a model and another adapts it to its own data — is a technique that first caught on with vision models and has since become widespread with language models. “So for example, Google might spend millions of compute hours training a model and then they publish it and then a similar company could take that model and just do a few compute hours transferring the knowledge to the dataset,” Biewald said.
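
What that looks like in practice: the sketch below is a generic illustration using the open-source Hugging Face Transformers library and the publicly released “bert-base-uncased” model, not code from Biewald or Google, and the tiny task and hyperparameters are invented for the example.

```python
# A minimal sketch of transfer learning with a published language model.
# The expensive pretraining has already been paid for (in dollars and carbon)
# by whoever released the model; we only fine-tune briefly on our own task.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Download weights someone else already spent the compute to train.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Tokenize a tiny batch from the new task (here, toy sentiment examples).
batch = tokenizer(["great movie", "terrible movie"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

# A short fine-tuning loop on the new dataset replaces weeks of pretraining.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)  # forward pass also computes the loss
outputs.loss.backward()
optimizer.step()
```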

If massive tech companies with unmatched resources like Google commit to transparency around these high-quality models, it doesn’t just benefit their peers in the industry — it trickles down to student researchers who might not have the time, computation, and electricity to run their own systems. If tech companies and researchers not only publish their papers and code online but also post their trained models, students clear one of the major hurdles to building on that work: they don’t need to figure out how to get the resources to train those models from scratch. It means the cost of training — from both a financial and environmental standpoint — only has to be paid once.

But sparing models from repeating the same expensive training over and over is just one consideration; more generally, it’s difficult to draw the line on which models’ benefits outweigh their costs. Some applications, like climate modeling and computer vision for satellite imagery, are even designed to combat climate change. If someone is building an AI model to diagnose breast cancer on an ultrasound, how do you weigh the carbon emissions of training that model against potentially life-saving findings? While morality has finally entered conversations around finished AI-based products, it has until recently been largely absent from conversations around how those products are actually trained and brought to life.

“We’re not telling people, don’t emit or don’t train or don’t make this great algorithm,” Luccioni said. “We’re just trying to say, compare the costs and the environmental costs and the benefit of your algorithm.”

Melanie Ehrenkranz is a freelance tech and culture writer based in Brooklyn.