Below is a piece of flash fiction written by an AI. The bold text is a prompt written by a human (me) and the rest was spat out in a matter of seconds by a robot.
Clarence tried to keep smiling, but his mind was elsewhere in spite of the music, and all the smiling partygoers. His attention shifted to the view out the window. The night was still dark and the moon shone through. The river was a pale, ghostly outline through the tall trees of the hills. He could hear a distant car horn. The moonlight streamed through the tall, dark windows, illuminating the face of an old woman that was leaning on a wall. She was dressed entirely in black, and wore a pair of glasses that were very old. It looked as though she was reading.
"Clarence," she said. "I've been wanting to do something like this for a long time." She pulled a box out of her pocket and opened it, revealing a small wooden toy train. "Can you take a look at it?"
Clarence stood up and stretched his arms. The train was small and heavy.
While my prompt isn’t exactly Proust, I have to admit that the AI’s completion of it is not a substantial downgrade in terms of quality. It takes a Lynchian turn, and ends on a note of creepy surrealism with a specificity to its imagery that reads as intentional. I’ve fed hundreds of prompts into this software and this is actually one of the less unsettling examples. Insert something benign, and out will come something creepy. And if your input is creepy, the result will often come out downright gruesome.
Tracey's heart soared as she peered through the bars of the school gate. Inside, a huge smile played across her face. As a member of the school, she didn't need to be ashamed of the school.
But she was.
So she ran away.
She ran until the sun went down, until the heat was oppressive and she was sweating so much her whole body ached.
She ran until her feet bled.
She ran until she wanted to kill herself.
She ran. She ran.
Then… she ran.
After a whole day of running and crying over a ruined world that she had never wanted to be a part of, she couldn't help but wonder where she had gone.
It was a Friday night. The school gate was closed, and she was alone.
Combine the chopped onion with the breadcrumbs and set aside. Next, using the scalpel, open the patient's chest cavity, and separate the lung into its sections. Gently pull out all of the lung.
Cut the lungs into 1- to 2-inch pieces and place them in a plastic bag or Tupperware, along with the lungs, onion, garlic and allspice. Put the plastic bag in a small bowl, and add the breadcrumbs. Cover the bag with plastic wrap, and refrigerate overnight.
Next, in a large bowl, mix together the olive oil, breadcrumbs, garlic, parsley and allspice, and season with salt and pepper. Add the mixture to the bag with the lung pieces. Wrap tightly and refrigerate until the next day.
This recipe can be doubled.
The software that created this madness is called GPT-2, a language model designed to improve Natural Language Processing (NLP) tasks like answering questions, translating documents, or the thing it’s really good at: generating text to complete a prompt. It was created via machine learning (basically high-tech trial and error) at OpenAI, an A.I. research company founded in 2015 by (who else?) Elon Musk and a few others. Their stated purpose at the time was the realization of Artificial General Intelligence, which, if it were achieved, would be exactly what it sounds like: that mythical piece of software that processes information in all the same ways as a human.
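GPT-2 itself is a huge Transformer network, but the basic loop it runs — look at the words so far, pick a likely next word, repeat — can be illustrated with a deliberately tiny stand-in: a word-level bigram model in plain Python. This is my own toy sketch for intuition, not OpenAI’s code, and all the names in it are made up.

```python
import random

def train_bigram(text):
    """Count which words follow each word in the training text."""
    words = text.split()
    counts = {}
    for prev, nxt in zip(words, words[1:]):
        counts.setdefault(prev, []).append(nxt)
    return counts

def complete(model, prompt, length=10, seed=0):
    """Complete a prompt by repeatedly sampling a next word
    from the words observed after the current last word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A (hypothetical) one-sentence "training corpus":
model = train_bigram("the night was dark and the moon shone through the tall dark trees")
print(complete(model, "the moon", length=3))  # → "the moon shone through the"
```

A real language model conditions on the entire preceding context rather than one word, and learns probabilities over a vocabulary of tens of thousands of tokens from millions of web pages — which is exactly where, as discussed below, its biases come from.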
Universities often work behind closed doors on relatively small academic budgets, and they roll out creepy new AI breakthroughs from time to time, such as the famous Stanford paper that introduced most people to the internet phenomenon/global epistemological crisis known as Deepfakes. OpenAI, on the other hand, is a billion-dollar, for-profit company, and its training “gym” was designed to slam on the accelerator of the AI revolution by letting programmers develop and test their own algorithms against a shared set of environments.
But it’s not lost on the folks at OpenAI that this stuff can be disturbing. In fact, when GPT-2 was announced in February, a blog post on the OpenAI website read, “Due to our concerns about malicious applications of the technology, we are not releasing the trained model.” OpenAI was worried that GPT-2 would be used to generate human-sounding spam or misinformation. That blog post contained examples of the type of fake news stories and contrarian blog posts that GPT-2 can churn out by the thousand, such as a plausible-sounding news story about scientists discovering real-life unicorns.
Then in early November, OpenAI went ahead and published it anyway, explaining that, “We’ve seen no strong evidence of misuse so far.” (The “so far” is appropriately ominous; I’ve never known spammers and scammers to avoid using any tool at their disposal.) So, naturally, I started trying to misuse it as quickly as possible, and you can too! The easiest way is by using a simple web widget called Talk to Transformer, created by Toronto-based blogger and machine learning engineer Adam King.
Perhaps some of the creepiness I immediately tapped into can partly be blamed on my use of vaguely melancholic imagery like the bars of a gate, or someone not smiling at a party, but that seems like a stretch. As OpenAI fully admits, “Language models have biases. Working out how to study these biases, discuss them, and address them, is a challenge for the AI research community.” The creepiness bias, in other words, is just one of countless quirks that were trained into GPT-2 because of the millions of pages of crazy shit it read when it was being created. OpenAI’s policy director Jack Clark told me in an email that, in the interest of transparency, “we list the top 1000 sites that fed data into GPT2 on our GitHub. We also published a ‘model card’ alongside our model on GitHub listing use cases and ones we don't recommend.”
So where did the creepiness bias come from? Here’s Clark’s hunch: “In terms of your prompts, I can give you a hypothesis: GPT-2 has read a non-trivial amount of fan fiction online. Fan fiction tends to involve lots of sex and/or violence. Therefore, since you're feeding it fiction-esque prompts, it's attempting to produce fiction in response.”
And is it ever attempting to produce fiction! Don’t be shy about creating your own and emailing them to me, which is what I’ve been asking my friends to do lately. If you’re like me and you have a penchant for dystopian insanity, GPT-2 may just be the deepest and most addictive internet rabbit hole of them all. I’ve spent hours typing into the blank box at Talk to Transformer, and I haven’t lost track of time like this since I started using StumbleUpon, back in the frontier days of the internet, when there was no social media yet, and nothing online had a human face — or sense of morality — attached to it. Consider yourself warned.