Towards the end of February, Peter King, a tutor in philosophy at Pembroke College, Oxford, was suspended from his role after pleading guilty to three counts of producing indecent photographs of a child — which in practice meant producing thousands of explicit images of a child between 2010 and 2018. Perhaps this development was a shock to his colleagues, but I’m not sure it should have been. Because in 2008, King published an article in the journal Ethical Theory and Moral Practice entitled “No Plaything: Ethical Issues Concerning Child Pornography,” in which he speculated about “the possibility of a morally acceptable form of child pornography.”
To save you reading what is mostly a rather plodding piece of academic philosophy: King argues, on quite dogmatically utilitarian grounds, that we can rank child porn on a spectrum of “harm” caused — with some pornographic images of children (for instance ones in which they don’t realize there is any sexual context, like a photo of a child in the bath, or on the beach) not technically causing the children any harm in the taking. This ultimately leads him to the thought that, in a hypothetical case where a group of adults consented to allow a community of pedophiles access to (innocently taken) nude images of themselves as children for the pedophiles’ sexual gratification, on the proviso that the pedophiles did not seek out or manufacture any other explicit images of children, this would not only be morally harmless — it could even be considered morally good. (Note that King’s utilitarian framing allows him no scope for considering that the sexual desire for children, and its various expressions, might be bad regardless of any specific harm caused — bad just in itself.)
Perhaps everyone else at Oxford just sort of innocently assumed that King was testing out some of our intuitions concerning hard outlier cases to help refine our best moral theories, but this would seem, let’s face it, quite desperately naïve. “Few things separate more profoundly the mode of life befitting an intellectual from that of the bourgeois than the fact that the former acknowledges no alternative between work and recreation” — as Adorno tells us in Minima Moralia. Although one doesn’t really need Adorno to realize that sometimes people’s academic interests really are just 1:1 identical with their personal ones.
But perhaps for professional philosophers this point really is in some important sense obscure. Philosophers love argumentative fallacies: the idea that if you beg the question, or affirm the consequent, or whatever, you have done something recognizably, objectively wrong. This thought is comforting to philosophers — the idea that there are objective norms of argument lends the discipline a definiteness and a hardness that, as a “mere” humanity, it might otherwise be thought to lack. An awareness of argumentative fallacies and “critical reasoning skills” is thus typically one of the first things undergraduate philosophy courses are concerned to teach their students.
One of the key argumentative fallacies philosophy students will be introduced to is ad hominem, from the Latin argumentum ad hominem, or “argument to the person”: the idea that in attempting to refute an argument, you should always be sure only to attack the argument itself and not the person making it. Thus King might well be attempting to discover “morally acceptable” instances of child pornography because he himself has a sexual interest in children, but it would not then follow that his arguments about the moral acceptability (or otherwise) of child pornography are illegitimate. (Even if you’ve never taken a philosophy course in your life, you are likely anyway familiar with ad hominem. “Nice ad hom” is a favorite accusation of people on social media who are, you know, “a bit Quillette” — blustering in with their furry logic hammer to slay any internet Marxists with Facts and Reason).
“The last sight of many a commie.” — Filthy Heretic (@FilthyHeretic), February 10, 2018
The ad hominem fallacy comes in a variety of flavors. There is “abusive ad hominem,” in which one person claims that another’s view should be rejected because of some (unrelated) bad thing about the speaker (e.g. “The senator’s tax proposals should be rejected because he once picked a dog up by its hind legs and pushed it around like a little wheelbarrow.”). There is “circumstantial ad hominem,” in which one person claims that another’s view should be rejected because their position is supported by self-interest, not good evidence (e.g. “The senator’s tax proposals should be rejected because they involve substantial tax breaks for people who once picked a dog up by its hind legs and pushed it around like a little wheelbarrow.”). And then there is tu quoque: the idea that a view should be rejected because someone arguing in its favor does not follow it themselves (e.g. “The senator’s proposed ban on people picking dogs up by their hind legs and pushing them around like a little wheelbarrow should be rejected because, well...”).
It’s easy to see how all of these might constitute bad ways of arguing. A position may well be good or correct quite independent of the character and interests of the people who defend it — there is no necessary connection between arguments and the character of their utterer. But if ad hominem arguments are illegitimate, how come they’re so useful?
The fact that a politician has a vested interest in arguing for a particular policy might not settle the matter against the policy, but it does at least give us reason to be suspicious — and the same goes for “abusive ad hominems” too. With tu quoque, the issue initially appears to be one of hypocrisy, but a perhaps deeper point is that sometimes, even the defenders of a particular position can’t actually live in accordance with their professed ideals, because doing so would be practically impossible. This might tell us all sorts of things: if a philosopher argues that existence is bad and that being born is always morally harmful, for instance, but then thanks their parents at the start of their book (as David Benatar does in Better Never to Have Been: The Harm of Coming Into Existence), then it might tell us something about the difficulty of actually believing that it’s bad simply to exist. If someone tweets that capitalism is bad using their iPhone, then it tells us something about the difficulty of ever doing anything that doesn’t somehow profit our capitalist overlords (we do indeed Live In A Society, after all).
Ad hominems are useful because of how thought both develops and exists: in relation to life as lived. In philosophy, it matters that Kant both devised and was (for a time, at least) committed to a “hierarchy of races” — not just because it means that “Kant was a racist,” but because it tells us something about the curiously cold, pedantically rationalistic way he had of considering all human affairs. It matters that Descartes undertook his famous Meditations when sitting completely isolated in a heated room; it matters that Rousseau forced his mistress to give up all the children he fathered with her for adoption. There is a guy who teaches philosophy at SUNY Fredonia called Stephen Kershnar who has written books and articles defending the morality of — among other things — torture, slavery, sex between children and adults, being rude to veterans, and “Asian sexual preference.” People don’t do this sort of thing just because that’s where they think the argument is leading them — I would be fascinated to discover what his deal is.
Nietzsche frequently employed ad hominem as a mode of argument. For instance, in the preface to the second edition of The Gay Science, he declares that the majority of philosophers up to now have been somehow “sick,” developing their thought not in the interests of objective truth but rather because they needed it as a sort of medicine (he therefore speculates that philosophers should be less interested in truth than they are in health). Similarly, in a posthumously published essay entitled Rings and Books, Mary Midgley reads the whole history of western philosophy through the fact that hardly any great philosophers were married with children: the detached, rationalistic individualism of Descartes, and the thought that followed in his wake, being the product of men trapped in the eternal adolescence of bachelor solitude, unable to see — for instance, from the perspective of a mother with an infant at her breast, the two of them sharing not only nutrition but also an immune system — that the individuation of particular human beings is a far more fluid and contingent thing than the bulk of our most prominent minds have typically imagined.
In an essay entitled “Explanation and Practical Reason,” from his 1995 book Philosophical Arguments, the Canadian philosopher Charles Taylor offers a formalization of this way of thinking. In this essay, Taylor distinguishes between two ways of convincing an opponent of your position. One — the way that philosophers usually assume as the default — is “apodictic” reasoning, which involves presenting facts and principles that any opponent could not help but accept, regardless of who they are or what position they are arguing from (often, philosophy papers are pitched as attempting to convince some hypothetical and nefarious “skeptic,” a master of puzzles who always has one more trick up his logical sleeve). But in reality, most of our actual, practical reasoning is what Taylor calls “ad hominem,” which starts out from whatever shared ground you and your opponent do have (even you — the polite liberal whom philosophy papers usually assume as their reader — and some unrepentant Nazi might both accept that murder is morally wrong; you might just have a very different understanding of what counts, or does not count, as murder).
People do not change their minds about something for detached, disinterested, abstract reasons that just about anybody would accept. A view is developed precisely as a view, a standpoint — and when this view shifts, for whatever reason, it will always be something we can make sense of narratively, in relation to our biographies. If philosophers continue to blind themselves to the fact that people always reason about things as particular people, for particular (material) reasons, then all philosophy can ever really hope to be is just a very advanced form of stupidity. And also we’ll, you know, happily and unquestioningly find ourselves working collegially with child pornographers.
It’s time to admit that ad hominem is a useful logical and argumentative tool — and not, in and of itself, a fallacy.