The Future

The self-driving car that will never arrive

Self-driving cars are delusional tech optimism rooted in greed, sorry

Optimism about self-driving cars has sustained a fever pitch for so many years now that some die-hard boosters still insist the technology is an inevitability. Countless journalists who have experienced a self-driving car journey with their own bodies and two eyes have declared it the inevitable future. But these journeys have so far taken place on closed obstacle courses that amount to little more than a carnival ride, or, in at least one case, a road test where the car does fine by itself until it encounters any remotely challenging human-interaction scenario. At that point, the PR handler or engineer in the driver’s seat slickly takes over for just a split second, hoping the journalist doesn’t register that those split seconds are when the self-driving car’s abilities, or lack thereof, matter the most.

But it hasn’t been a great six months for self-driving vehicles. In March, a self-driving Uber car in Arizona killed a woman who was walking a bike across a street. Public relations messaging around the death first cast aspersions on the test driver in the seat, saying she was a felon and then also maybe watching Hulu, and then on the victim, saying she was walking a bike across the street outside of a crosswalk, and how was a car AI to distinguish her as a thing it shouldn’t hit? Final reports suggested the ultimate cause of the incident was that Uber itself had disabled the car’s emergency braking system.

Laying the blame on a critical failure conveniently sidesteps the whole issue of whether a self-driving car can adequately identify a thing it shouldn’t run into, which is almost the entire point of a car that drives itself. But then, per the Verge, “the vehicle decided it needed to brake 1.3 seconds before striking a pedestrian, but Uber had previously disabled the Volvo’s automatic emergency braking system in order to prevent erratic driving.” Read: the car correctly identified a threat, but its threat-identification reflex had become so frequent and annoying, in a broken-clock-is-right-twice-a-day way, that Uber had switched it off.
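To make the absurdity concrete, here is a purely illustrative sketch, in Python, of the decision flow the Verge describes; it is not Uber’s actual code, and every name in it is hypothetical. The point it illustrates: detection can fire perfectly, and a disabled braking module still means nothing happens.

# Illustrative sketch only; not Uber's software. All names are hypothetical.
# Per the Verge, the car flagged the pedestrian 1.3 seconds before impact.
DETECTION_TIME_BEFORE_IMPACT_S = 1.3

def emergency_brake_engaged(threat_detected: bool, braking_module_enabled: bool) -> bool:
    # The perception stack can flag a threat correctly, but detection
    # only matters if the module that acts on it is switched on.
    return threat_detected and braking_module_enabled

# The reported scenario: threat correctly identified, braking disabled
# (reportedly to suppress "erratic driving" from false positives).
print(emergency_brake_engaged(threat_detected=True, braking_module_enabled=False))  # False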

Just last week, Uber released Q2 financial reports showing a nearly billion-dollar loss, and a report from The Information revealed the company is losing at least a million dollars per day on its self-driving car project alone. Bloomberg reported that investors are pressuring the company, which is still struggling with profitability, to get rid of the self-driving car project and focus instead on making the dire economics of ridesharing even a little viable (and, uh, maybe scooter rentals) rather than splitting its efforts trying to invent the self-driving wheel.

For its part, Google, the erstwhile self-driving industry leader, spun off its self-driving car department, Waymo, back in 2016. Waymo continues to trickle out tentative, “any day now” iterations of optimism to cooperative outlets, and publications keep falling for it. (2016, Wired: “Google’s Self-Driving Car Company Is Finally Here.” 2018, Bloomberg: “Waymo’s Self-Driving Cars Are Near.” OK. Good to know the self-driving car can also drive itself in reverse.)

If that weren’t enough, self-driving car engineers themselves seem to be growing frustrated enough with the whole endeavor to engage in some wild reality-distortion-field tactics. They have begun to blame the cars’ lack of success on non-negotiable aspects of reality. The problem is not that self-driving AI is bad at driving, their logic now goes; it’s that people are bad at walking. Thursday’s Bloomberg report detailing this tension included these devastating paragraphs:

With these timelines slipping, driverless proponents like Ng say there’s one surefire shortcut to getting self-driving cars on the streets sooner: persuade pedestrians to behave less erratically. If they use crosswalks, where there are contextual clues—pavement markings and stop lights—the software is more likely to identify them.
But to others the very fact that Ng is suggesting such a thing is a sign that today’s technology simply can’t deliver self-driving cars as originally envisioned. “The AI we would really need hasn’t yet arrived,” says Gary Marcus, a New York University professor of psychology who researches both human and artificial intelligence. He says Ng is “just redefining the goalposts to make the job easier,” and that if the only way we can achieve safe self-driving cars is to completely segregate them from human drivers and pedestrians, we already had such technology: trains.

A conversation about self-driving cars is really a conversation about AI. AI as a concept has lately suffered even broader setbacks; IBM’s Watson managed to win at Jeopardy but has proven a catastrophic failure at its much nobler ultimate goal of helping treat cancer with more success than human doctors. While we’ve made progress in the time since sci-fi went from pulp to high art, the goal cyclically eludes our grasp. Quite simply, AI is both very hard and, so far, not good.

In her latest book, Life in Code, Ellen Ullman, a four-decade veteran programmer and an integral figure in the early days of several Silicon Valley companies, wrote extensively about the periodic waves of excitement around the potential of AI. She described watching AI, back in the ’70s and ’80s, “fail spectacularly in fulfilling its grand expectations” of understanding humans as well as humans understand one another.

In an interview around the launch of her book last summer, Ullman told me she did not believe self-driving cars were anywhere near where they needed to be to fulfill the aspirations of the companies developing them:

Our intelligence comes from social existence. We call somebody smart who can look in our eye, and we can trade understanding. If you're on the highway, you can see far ahead. You could see far behind. If you're an experienced driver, there are many ways that you see a car as another person, in a way. You can read that car. Self-driving cars do proximity around your own vehicle, and don't really look very far ahead. They're following rules like playing chess. An experienced good driver has these capabilities that I don't believe any time soon will be duplicated in a self-driving car.

Though futuristic optimism is great for currying public support, the actual promises of AI at the business level are always much less vague and much more sinister. As Ullman pointed out:

You have to get money from investors and venture capitalists, and the pitch has to be, “This will make a lot of money, and it will change the world…” [They] never specify if for better or for worse. I see disruption as a large increase of inequality. It’s a way to throw the little guy out of business and make some very small group of people very wealthy. The jobs that are created, those people are being taken advantage of. They are stand-ins for Uber to replace them with self-driving cars. They’re experiments actually working themselves into unemployment.

It’s easy to forget how quickly we can overextend technology, which is so good at solving some of our problems, into a supposed way of solving all of our problems. But eventually the self-driving wheel turns, and we realize we don’t have the command to reduce the whole complex world to a set of yes-or-no answers, let alone predictions, and our grossest capitalistic dreams are thwarted yet again, though not without cost.