Neural networks

This frostbitten black metal album was created by an artificial intelligence

And it shreds.

“Coditany of Timeness” is a convincing lo-fi black metal album, complete with atmospheric interludes, tremolo guitar, frantic blast beats and screeching vocals. But the record, which you can listen to on Bandcamp, wasn’t created by musicians.

Instead, it was generated by two music technologists using deep learning software that ingests an album, processes it, and spits out an imitation of its style.

To create Coditany, the pair broke “Diotima,” a 2011 album by the New York black metal band Krallice, into small segments of audio. They fed each segment through a neural network — a type of artificial intelligence modeled loosely on a biological brain — and asked it to guess the waveform of the next individual sample of audio. When a guess was right, the network strengthened the connections that led to the correct answer, similar to the way electrical connections between neurons in our brains strengthen as we learn new skills.
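What “guessing the next sample” means in code is easier to see with a toy version. The sketch below is not the Dadabots system (the architecture described in their paper is more elaborate); it is a minimal next-sample predictor in PyTorch, with the network size, the 256-level quantization, and the segment length all chosen purely for illustration.

```python
# Minimal sketch (not the Dadabots system): train a small recurrent network to
# predict the next audio sample from the previous ones. Audio is quantized to
# 256 levels (8-bit), so "guessing the next sample" becomes a 256-way
# classification problem. All names and sizes here are illustrative.
import torch
import torch.nn as nn

QUANT_LEVELS = 256   # number of discrete amplitude levels
SEQ_LEN = 1024       # length of each training segment, in samples

class NextSamplePredictor(nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(QUANT_LEVELS, 64)   # sample value -> vector
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, QUANT_LEVELS)    # scores for the next sample

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

model = NextSamplePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(segment):
    """segment: LongTensor of shape (batch, SEQ_LEN + 1), quantized samples."""
    inputs, targets = segment[:, :-1], segment[:, 1:]   # predict sample t+1 from samples 0..t
    logits, _ = model(inputs)
    # A wrong guess produces a large loss; backpropagation then strengthens the
    # weights that would have led to the correct answer.
    loss = loss_fn(logits.reshape(-1, QUANT_LEVELS), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy data standing in for segments cut from the source album.
fake_segment = torch.randint(0, QUANT_LEVELS, (8, SEQ_LEN + 1))
print(train_step(fake_segment))
```

Each training segment plays the same role as the slices cut from “Diotima”: the network sees samples up to a given point and is scored on how well it predicted the one that follows.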

At first the network just produced washes of textured noise. “Early in its training, the kinds of sounds it produces are very noisy and grotesque and textural,” said CJ Carr, one of the creators of the algorithm. But as it moved through guesses — as many as five million over the course of three days — the network started to sound a lot like Krallice. “As it improves its training, you start hearing elements of the original music it was trained on come through more and more.”
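Once trained, such a network makes audio by feeding its own guesses back to itself, one sample at a time. The sketch below continues from the toy model above and is again only illustrative; the sample rate and output length are assumptions, not details of the Dadabots setup.

```python
# Sketch of generation with the toy model above: feed the network its own
# guesses back as input, one sample at a time, and collect the result as a
# raw waveform. Early in training this yields noise; as training improves,
# the output starts to echo the album the network was trained on.
import torch

@torch.no_grad()
def generate(model, n_samples=16000, temperature=1.0):
    sample = torch.zeros(1, 1, dtype=torch.long)   # start from silence
    state, output = None, []
    for _ in range(n_samples):
        logits, state = model(sample, state)
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        sample = torch.multinomial(probs, 1)       # draw the next sample
        output.append(sample.item())
    # Map quantized values back to a float waveform in [-1, 1] (linear, for simplicity).
    wave = (torch.tensor(output, dtype=torch.float) / 255.0) * 2.0 - 1.0
    return wave

waveform = generate(model)   # roughly one second of audio at an assumed 16 kHz
```

At the start of training this loop produces the washes of noise Carr describes; as the network's predictions improve, the same loop begins to sound like its source material.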

As someone who used to listen to lo-fi black metal, I found Coditany of Timeness not only convincing — it sounds like a real human band — but even potentially enjoyable. The neural network managed to capture the genre’s penchant for long intros broken by frantic drums and distorted vocals. The software’s take on Krallice, which its creators filled out with song titles and album art that were also algorithmically generated, might not garner a glowing review on Pitchfork, but it’s strikingly effective at capturing the aesthetic. If I didn’t know it was generated by an algorithm, I’m not sure I’d be able to tell the difference.

Coditany of Timeness is part of Dadabots, a side project by Carr, a startup CTO, and Zack Zukowski, a music producer. The pair met as undergrads at Northeastern University during a program at Berklee College, a prominent music school in Boston, and quickly bonded over a shared interest in programmatic composition and machine learning. Most algorithmic composition systems, like Emmy, an AI developed at the University of California, Santa Cruz, that produced classical music so convincing that it alarmed musicians, tend to output notes that then need to be synthesized into audio. Carr and Zukowski were interested in building a system that could produce actual waveforms: raw sound that can range from a screaming electric guitar to percussion and even wailing vocals.

“While we set out to achieve a realistic recreation of the original data, we were delighted by the aesthetic merit of its imperfections,” reads the Dadabots paper, which the duo is scheduled to present next week at the Neural Information Processing Systems conference in Long Beach, California. “Solo vocalists become a lush choir of ghostly voices, rock bands become crunchy cubist-jazz, and cross-breeds of multiple recordings become a surrealist chimera of sound.”

Coditany of Timeness was the first of three programmatic albums Dadabots has released using this technique, and arguably the most successful. A math rock album they trained on tracks by New Jersey metalcore band Dillinger Escape Plan passes muster with spastic rhythms and abrupt tone shifts, but a record called “Deep the Beatles!”, which they trained on the Fab Four compilation album “1,” is a vague, malformed imitation of The Beatles that struggles with song structure. They plan to drop a new album every week on the Dadabots Bandcamp — Carr isn't yet sure what next week will bring, but he said it could be trained on another metal act like Converge or Meshuggah, or even the experimental jazz artist John Zorn.

Under the Dadabots name, Carr and Zukowski have previously created a number of experimental projects that blend music and machine learning, including a series of SoundCloud bots that automatically downloaded tracks, remixed them, and uploaded the new versions — like Autochiptune, which remade songs in the style of a retro Game Boy.

It’s all part of what Carr sees as the “deep learning revolution in art,” in which artificial intelligence provides new avenues for creativity. The project raises the vertiginous possibility that deep learning, with its penchant for strange intuitive leaps, could come to serve not just as a tool for human artists but even as a collaborator — or, potentially, a competitor.

Jon Christian is a contributing writer at The Outline. He last wrote about how crowdfunding platforms are crowdfunding themselves.