Power

In what sense is a drone a type of animal?

When we struggle to understand machines of war, we often turn to zoology.

For much of World War I, Sir Philip Gibbs was one of the few journalists the British War Office permitted to correspond from the Western Front. Gibbs’s mission was to describe, in terms acceptable to government censors, the deteriorating state of a war that was becoming increasingly unpopular with the British public. In this capacity, in September 1916, at the height of the Battle of the Somme, Gibbs found himself tasked with writing about a miracle weapon that would supposedly bring a decisive end to the Great War: the William Foster & Co. Mark I — or, as we’d now call it, a tank.

The tank was a modern solution to a modern problem. Two years of stalemate — each meter of ground gained came at the cost of 60 or more lives — had pushed Allied leaders to seek a technological “fix” for the logistical morass of trench warfare, something that could plow across trenches and shrug off pepperings of small arms fire. The Mark I, whose designers had, up to that point, been best known for making tractors, was primitive by our standards. It had a top speed of 3.7 miles per hour, and its crews, caged like canaries, often fainted from carbon monoxide poisoning. But the tank had its intended effect (at least until the Germans made their own), and the War Office promptly ordered the Mark I into mass production.

Gibbs now faced a serious challenge. On the one hand, the Mark I was supposedly a revolutionary machine that, however paradoxically, could usher in peace on treads of metal. At the same time, however, the tank signaled a new era of industrial horrors. How could these competing views, and the futures they augured, be reconciled? One evening, shortly after the Mark I’s debut, Gibbs prowled the lines in welcome silence and happened across a dozen or so Mark Is, draped and quiet, as if put to bed for the night. In his next report, he likened them to “herds of prehistoric beasts.” Faced with 28 long tons of lethal steel, he reached for the animal to make sense of what he’d seen.

He wasn’t alone in doing so. Leaf through British, French, and German war diaries and you’ll find a small preserve’s worth of fauna called upon to domesticate the increasingly spectacular technology of modern warfare. One British officer, after watching a single tank crush a dozen German teenagers, wrote that the machine was, in essence, “an elephant.” And as below, so above: technically superior German Fokkers were said to dive on Allied biplanes — named, most famously, the Sopwith Camel and Sopwith Pup — like “hawks on a bird,” while terrestrial armies cowered like “sparrows” in their shadows. A century later, their writings still read as young people’s attempts to find their place in a war, a world, that was becoming more about machine than Man.


In nearly every philosophical tradition, animals are celebrated for the virtues humans take them to embody — snakes for cunning, owls for wisdom, dogs for loyalty. For followers of Abrahamic religions, this practice goes all the way back to the fifth and sixth days of creation, when God brought forth “swarms of living creatures” to fill the heavens and earth. God “saw that it was good,” suggesting that the animals brought with them a righteous moral order.

From this perspective, animals reveal to us the goodness in God’s creation and, in emulating their finest traits, offer moral clarity that might guide us sinners to holiness. But if God is dead, as World War I led many to conclude, what is now to stop us from adapting His tricks for our own ends? Call it profane, but in the shadow of industrial war, animals became a useful touchstone for identifying (or assigning) the virtues and vices in our own creations. If German Fokkers really were the apex aerial predators of their day, why not compare them to the fastest bird in the sky?

We’ve become surprisingly comfortable with war machines far more lethal than the Mark I and the Fokker, but this zoomorphic impulse lives on. From the V-22 Osprey to the Seawolf-class submarine, the military is a menagerie and the menagerie is the message. It’s no surprise that drones, from the mighty General Atomics MQ-1C Grey Eagle to the Northrop Grumman Global Hawk down to Prox Dynamics’ handheld Black Hornet Nano, have been named according to the same conventions engineers have long used for aircraft.

More than any other machine, unmanned aerial vehicles are the quintessential technology of the Global War on Terror. Like all automated war machines, drones are intended to project power without displaying (human) vulnerability, putting to rest, once and for all, the pretense of a fair fight. For that reason, the philosopher Grégoire Chamayou concluded in his 2013 book Theory of the Drone that “we are no longer fighting the enemy on the battlefield.” Like eagles, “we are hunting him like a rabbit.”

But there’s something different about drones, something that puts them in a different genus from their manned ancestors, something that can’t be captured by metaphor alone. The end goal of unmanned vehicles, after all, isn’t merely remote operation but complete autonomy, as if we could simply say “sic ’em” and get back to pretending this is a country at peace. In 2002, as the Global War on Terror commenced, the Air Force Research Laboratory issued a short report declaring its intention to develop “technology allowing UAVs to replace human-piloted aircraft for any conceivable mission.”

The laboratory chose to define autonomy as “the ability to generate one’s own purposes without any instruction from outside” and “having ‘free will.’” But when computer scientists start invoking these ideas, philosophy and programming begin to mingle in strange ways. What the rhetoric of the Air Force Research Laboratory suggests is that “Grey Eagle” and “Global Hawk” aren’t merely allusions, like their predecessors, but aspirations. At a moment when machines are imagined to be capable of acting of their own accord, of doing so in ways that exceed human capabilities and perceptions, and of being, in some profane sense of the term, alive, the idea of the animal takes on an additional set of connotations. (That this is happening at the same moment actual animals are dying by the billions only makes the inversion more striking.) If Western philosophers have long imagined animals as a kind of machine — more on that in a second — we now find ourselves with that question’s inversion: are machines a kind of animal?

Set aside the obvious answer of “no” and consider how an argument for “yes” could be made. From Aristotle on, the animal has been imagined as a kind of waypoint between humans and inanimate matter, like trees or stones — lesser than us, but obstinately alive. For humanists, this poses something of a problem for claims of human exceptionalism, a dilemma that has backed even the most profound philosophers into some very odd argumentative corners. The 20th-century philosopher Martin Heidegger, for example, suggested that if humans have a “world” — meaning, in essence, the richness of experience our senses afford — and stones are “worldless” because they have no senses, then animals must be “poor in world.”

For Heidegger, this means that animals are perfectly animate but have access only to a dull and denuded version of the reality humans enjoy. Stranger still was Heidegger’s contemporary Emmanuel Levinas, the 20th century’s leading philosopher of ethics, who, bizarrely, found himself forced to argue that animals don’t have faces in order to save his own philosophical face. In any case, the “waypoint” idea has been the dominant understanding of animals in Western thought, an idea that isn’t really about animals at all. Instead, the animal exists to shore up the boundaries of some philosophical system, one that, conveniently enough, places humans at the height of Being.

In that, animals have something in common with technology, which, in academic philosophy or, just as often, YouTube soliloquies, serves as a tool for mulling over the bounds of humankind. Depending on your perspective, the entanglement of humans and technology presents either a metaphysical disaster or an existential opportunity.

Many psychologists, for example, are convinced that social media cuts us off from some untroubled human essence, while the voguish transhumanism of Silicon Valley elites imagines technology as our very deliverance. Whether or not you find either view compelling, they have something in common: the assumption that there is some stable sense of what a human being is — its limits and capabilities, its proper place in existence — that precedes its annihilation or augmentation. And while we popularly associate these questions with today’s technological buzzwords (deep learning, artificial intelligence, machine vision, etc.), they are, of course, as old as wheels, bone clubs, and flint arrows. Strictly speaking, we have never been natural.

But if the relationships between humans and animals, as well as humans and technology, are well-trodden ground, the third relationship in this triad, animals and technology, is somewhat trickier. In laying “I think, therefore I am” as the cornerstone of the Enlightenment, René Descartes found himself at pains to demonstrate why humans were the only beings capable of reason. Like many of his 17th-century contemporaries, Descartes saw clockwork, the privileged technology of its moment, as a useful metaphor for existence. Drawing inspiration from the assembly of mutually dependent cogs behind every clock face, Descartes reasoned that humans and animals were also, in essence, machines. Only humans were fortunate enough to have been imbued with consciousness by God.

Animals, on the other hand, were slaves to their mechanical instincts, incapable of thinking or feeling for themselves. As a result, Descartes felt no guilt over slicing into living animals or breaking their legs in the course of his research. If they were merely machines, then cries of protest could not be signs of pain or distress because that would mean that they could feel. Instead, Descartes reasoned, these had to be purely physical reactions, like the whistle of steam escaping from a boiling teapot.

Descartes’s ideas about animals are, thankfully, no longer taken seriously. But you only have to look as far as your nearest computer science department to know that other parts of Cartesianism are alive and well. Four centuries later, the Cartesian coordinate system remains an indispensable tool for engineers working with objects in three-dimensional space. Scroll through 2012’s Small Unmanned Aircraft: Theory and Practice, the first engineering textbook dedicated to the art of drone-making, and one of the first things you’ll notice is that Cartesian coordinate grids (i.e., x-axis, y-axis, z-axis) undergird virtually every calculation a drone’s onboard computer must perform to locate itself in space. Look a little further, though, and you’ll find whole chapters dedicated to teaching students how to write “low-level autopilot code,” “path following routines,” and “high-level path-planning algorithms.” You don’t need a Ph.D. in aerospace engineering to recognize these for what they are: steps on a path towards autonomy.
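
To get a feel for what that Cartesian bookkeeping involves, here is a minimal sketch in Python (my own illustration, not code from the textbook) of one calculation an autopilot performs constantly: rotating a vector measured in the aircraft’s body frame, x out the nose, y out the right wing, z down, into the fixed inertial frame.

```python
# A minimal sketch of body-to-inertial frame rotation, the kind of
# Cartesian arithmetic a drone's autopilot runs constantly. This is an
# illustration, not code taken from Small Unmanned Aircraft.
import numpy as np

def body_to_inertial(vector_body, roll, pitch, yaw):
    """Rotate a body-frame vector into the inertial frame.

    Angles are in radians; the rotation order (yaw, then pitch, then
    roll) follows the standard aerospace convention.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    # Direction-cosine matrix composed from the three Euler angles.
    rotation = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    return rotation @ np.asarray(vector_body)

# "Ten meters straight ahead" in the body frame, reinterpreted in the
# world frame after the nose has swung 90 degrees: roughly [0, 10, 0].
print(body_to_inertial([10.0, 0.0, 0.0], roll=0.0, pitch=0.0, yaw=np.pi / 2))
```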

Even so, it’s the textbook’s final chapter, on machine vision, a term that refers to how computers “see” the world through purpose-built sensors, that serves as the final piece in the schematics of autonomy. It is also, not coincidentally, where animals come back into the picture. Like Heidegger pondering the “world” of a lizard on a rock, we might be tempted to imagine that what a drone “sees” is just a lesser version of what our own eyes do. But perhaps the opposite is true. Like the Arctic reindeer that evolved to find lichen using ultraviolet light, drones perceive the world in registers imperceptible to humans; like insects with compound eyes, drones have as many focal points as they have lenses; and, like bald eagles, which see eight times as far as humans do, drones are capable of picking out and tracking small objects over enormous distances. A drone’s vision, and the world of sensations it can respond to, is decidedly non-human. In that, it shares something with animals.
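
Computationally, that non-human “seeing” can be startlingly simple. The toy sketch below (again my own invention; the frame, the threshold, and the function name are all hypothetical) finds the centroid of anything warm in a thermal-style grid of pixel intensities, one bare-bones version of how a tracker picks out a target.

```python
# A toy sketch of machine vision at its most bare-bones: threshold a
# thermal-style frame and report the centroid of the "hot" pixels.
# Everything here (the frame, the threshold, the name) is hypothetical.
import numpy as np

def locate_hotspot(frame, threshold):
    """Return the (row, col) centroid of pixels above threshold, or None."""
    hot = np.argwhere(frame > threshold)   # coordinates of every hot pixel
    if hot.size == 0:
        return None                        # nothing warm in view
    return hot.mean(axis=0)                # average position = centroid

# A fake 8x8 "thermal" frame: cool background, one warm blob.
frame = np.zeros((8, 8))
frame[2:4, 5:7] = 40.0                     # the "target"
print(locate_hotspot(frame, threshold=10.0))   # -> [2.5 5.5]
```

A drone tracking a truck across a desert is doing something like this, millions of times over, in registers no human eye shares.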

Philosophers who took animals a little more seriously than Heidegger would have no trouble with the idea that animals don’t live in a “diminished” world, but merely a different one. Jakob von Uexküll, an Estonian biologist remembered for his contributions to animal philosophy, reasoned that every animal creates its own “world” through its senses. He famously used ticks to show how little sensation a species needs to pass on its genes. “Just as the gourmet picks only the raisins out of the cake,” Uexküll wrote in 1932, “the tick distinguishes butyric acid from among the things in its surroundings.”

To humans, butyric acid, found in vomit, sweat, and, weirdly, parmesan cheese, is unpleasant, but, ultimately, unremarkable. To ticks, though, the smell is vivid and essential, used to differentiate new hosts from the rest of existence. Along with the warmth of skin and the taste of blood, this exact smell is all a tick can sense but also all it needs to live a life. And though there is much ticks cannot discern, the species has the ability to access the exact information it requires: one smell, one taste, one feeling. To us, that may seem lesser. To the tick, though, the world — its world — is perfectly complete.

It might seem perverse to think about autonomous drones along these same lines, as having their own “world” made by the range of signals their sensors can perceive. And yet, taken seriously, this is exactly what military imagineers do when they daydream about fully autonomous drones that do not simply perceive the world but act within it, of their own accord, using data “found” in registers that humans cannot perceive for themselves. This particular genus of drone, which, for now, is more a flight of fancy than a real technology, exists as much in its own world, on its own terms, as it does in ours. The drone is, obviously, not one of us, but it is in possession of all it requires to be independent. Historically and philosophically speaking, we have a word, a category, a concept for that: the animal.

But computation is not cognition, let alone consciousness, and the dream of universal computation, where all matter can be reduced to the play of binary code, is a fantasy. No matter how lifelike unmanned aerial vehicles appear to act, they are built, not birthed. Data collection is not sensation, and, in any case, sensor packages are still made by human hands and limited by human minds. These are not practical barriers to the absolute autonomy of drones that could, with the right research, a better algorithm, or the correct design, be overcome, so much as they are impassable ontological ones. Despite how easily we may reach for the metaphor, machines are not — have never been, and cannot be — animals. And as long as that is the endpoint engineers look to, drones will fall short of our goals for them.

Put differently, the metaphorical work performed by “Grey Eagle” and “Global Hawk” might have always been a red herring. When it comes to drones, the metaphor that matters most has been hiding in plain sight — “drone” itself. In many explanations, the provenance of the nickname “drone” is the gnawing, insistent buzz the machines make in flight. (The comparison is not limited to English speakers either. In Waziristan, Pakistan, where drones were most active during Obama’s presidency, the machines are known as bangana, literally “wasp.”) But perhaps sonic similarity isn’t the whole story. “Drone” also invokes a particular category of insect, those honey bees and fire ants that shoulder burdens, build empires, and never question orders. These are drones that work in two senses of the word: they do “their” job, and they do it right. Drones, in other words, don’t simply work; they work because they work.

That’s not quite a tautology, but it’s comforting all the same. Conveniently, the metaphor of “drone” renders the machines predictable, even harmless, in spite of plentiful evidence to the contrary. No wonder “drones” occupy a place of privilege in the practice and symbolism of warfare today. The comfort that “smart” technologies offer is much needed in the Global War on Terror, a conflict that confounds the expectations we’ve inherited about what war should be. Gone is the glorious past of grand maneuvers, waving heraldry, and the dividends of honor found in fair fights. Instead, we are now faced with a war that has no borders, no armies, and, seemingly, no end.

Faced with impossibility, we reach for simplicity, whether in metaphors, machines, or the places they come together. Among many other possible functions, metaphor might also be thought of as a peculiar kind of defense mechanism, one that bridges the gap between the confounding complexity of the world and the imperfect tools we have to describe it. By positing what we can’t understand in terms of what we can, we try to get a grip, however fleeting, however precarious, on our place in a world gone mad.

That was true for Sir Philip Gibbs’s invocation of long-dead beasts, the Allied soldiers who cowered like sparrows, and René Descartes’s conjuring of a clockwork universe in the comfort of his study while actual rabbits whimpered nearby. And, of course, it’s true of us. Only now the threat is not contained to the Western Front — it is everywhere, everything. All the while, the animals — the real ones — are nowhere to be found.

Will Partin is a freelance writer and doctoral student. He researches labor, technology, and culture.