Nice essay, which resonates strongly with my own reflections.

To me it boils down to two fundamental questions:

1. Should we approach suffering from a utilitarian or a deontological angle?

2. While there is little doubt that suffering is generally bad, i.e. detrimental to a sufficiently sentient sufferer’s well-being, it is also inevitable. What business then is it of us humans to try (and miserably fail) to make a significant reduction in the totality of all suffering of (sentient) organisms? (And isn't that actually a manifestation of human exceptionalism?)

1. I think it doesn’t take long before a utilitarian approach gets stuck. As you show, the question of how much animal suffering is acceptable to inflict in return for a reduction in human suffering is one unsolvable dilemma—and philosophy is rife with other, perhaps even more impossible challenges. So it then necessarily becomes a matter of rules. But here too it’s both essential to construct a coherent hierarchy of rules and impossible to do so. Stuck again!

Which neatly segues into 2:

2. There had already been a huge amount of cumulative suffering in the millions of centuries before the first humans arrived on the scene. What is the reason why humans now ought suddenly (in evolutionary terms it definitely is suddenly) to take an interest in, nay, a *responsibility* for, other creatures’ suffering? I haven’t yet found a plausible justification.

So where does that leave us? Short answer: I don’t know. We cannot begin to eliminate, or even significantly reduce, suffering. The best we can do is not needlessly *add* to the suffering. But what is ‘needlessly’? Just as Justice Potter Stewart said about obscenity, I cannot nail down wanton cruelty, but I know it when I see it. Pulling the legs off a (non-sentient?) fly would be a case, smashing a hedgehog that didn’t manage to run quickly enough to escape a car’s wheels would not, and neither would (humanely) slaughtering a rabbit for food. Other people’s mileage in this respect may vary.

My current conclusion: other than not causing suffering that our conscience would disapprove of, we should not seek to intervene, nor dictate what others should or should not do.


Thanks for such a thoughtful comment! I agree - it's a very tricky moral problem (then again, most moral issues are very tricky).

I agree that it's nigh-impossible to eliminate suffering, but I think I'm a bit more optimistic that we can reduce it - and even a small dent in the amount of suffering is still significant. The extreme conclusion from the inevitability argument is that one might as well not bother trying at all to reduce suffering (including human suffering), which I think (and I think you would agree) is as wrongheaded as trying to eliminate all suffering. But we can probably do a lot better than we currently are.

A further reflection. I think the justification for suddenly taking an interest is the same as for taking an interest in any moral issue: because you think it causes needless suffering (and yes, as you say, 'needless' is a nebulous word, but it seems indispensable). Slavery was universal in human history until relatively recently - but I don't think we should therefore say that abolitionism was unjustified just because its proponents 'suddenly' (relative to the expanse of the history of human civilisation) decided slavery was immoral.

I agree with your conclusion: I don't know, either. I'm more or less thinking aloud as I write about this subject. Here's another thought: the US federal government used force to, well, enforce black civil rights against those who wanted segregation to continue. I think that was perfectly justified. Perhaps, then, a more interventionist pro-animal policy is also justified, unless we can come up with solid grounds for excluding animal rights from our policymaking.

And maybe this is human exceptionalism by the backdoor. But it's a more consistent form, I think, and not one based on positing a spurious ontological difference between humans and non-humans (one could make the same case to an advanced alien species to treat us nicely, and if chimps were the most intelligent and cultured species, the same case could be made to/by them vis a vis non-chimp animals).

As I said, this comment is a rather muddled, live-streamed (if you like) response. Ultimately I come down on the 'I don't know' side, too, and thus I end with a moment of aporia. The only things I am reasonably certain about are the contradiction between some of my ideals and my actions and the legitimacy of the uneasiness I feel. Perhaps it's no bad thing to feel uncertain and uneasy, though.


Thanks for responding!

This reinforces my perspective that the problem lies in the fact that neither a strict utilitarian nor a strict deontological approach will give us a clear-cut answer without dumping a moral load onto our shoulders, so to speak.

You speak of a "spurious ontological difference" between humans and non-humans. I think that is a defensible element of the starting position. But it can still be interpreted in different, even diametrically opposed, ways. I would take up position in the non-interventionist corner. If there is no non-spurious ontological difference between humans and non-humans, then whatever the general situation was before our species emerged remains the default, and any argument that the pre-human "is" ought to be replaced by something different carries the burden of justification. What I mean is that the non-human animals did not particularly care about the suffering they inflicted on other species (or indeed on their own - plenty of infighting!). Individuals had (and have) particular goals tied to their survival, thriving and procreation, and if that required attacking, killing and/or eating some other individual, then that is what happened. That seems to me to be a reasonable base case for the human period as well.

We also have goals of surviving, thriving and procreating, and to believe that we can pursue those goals with no collateral damage - both to our species and to others - is deluded. That said, just as most animals tend not to inflict harm on others unless there is a good, goal-related reason for it (cats playing with mice they have no intention of eating being a notable exception), it would be fair to expect us to act in the same way, and not be wantonly cruel to other species.

Unfortunately this does not lead us down an easy path, since we still have to decide to what extent we can justify harm to other species. This inevitably leads us towards the inconclusive utilitarian path (not dissimilar to, for example, setting speed limits - we know that a lower limit would cause fewer traffic casualties, but we also know that reducing speeds to the point where there are no more victims is not feasible): we must make a trade-off. The deontological alternative is similarly inconclusive - an absolute rule of not causing any harm is untenable (all those poor insects we trample on a walk in the park!), and so we would need to construct an impossible tangled web of rules and exceptions.

Your final point is well made - it reminds me of a subject I wrote about a few weeks ago (in a very different context): the challenge of having to serve two apparently diametrically opposed goals. Such situations are sometimes referred to as polarities, in which there is no static solution, and the dynamic solution is flipping focus from one to the other. Do have a look! It's here: https://koenfucius.substack.com/p/are-agriculture-and-nature-poles


Thanks for sharing that piece - I shall take a look soon. No doubt this is a subject I'll continue to think about a lot. If I ever do go vegan, I'll need to write about that, too!

A quick thought, and perhaps this is another way of reinserting human exceptionalism: perhaps the point is that humans, being able to transcend their nastier aspects (our natural propensity for aggression and xenophobia, for example), ought to extend this capacity to non-humans. If we can rise above many of our evolved instincts vis-a-vis each other, why shouldn't we go even further? We can't be utopian about this, of course, and utopian attempts to change human nature invariably end in misery. I'm thinking rather of the expanding circles of sympathy and the better angels of our nature, which can and have produced moral progress without trying to radically alter human nature.


Thanks for the further reaction, DJ (I am guessing that is your moniker :-))

Whenever I see 'ought', I wonder 'according to whom?'

When we start reasoning from the pre-human situation, there was plenty of what we call 'suffering' - but it was just the way nature was, and before there was anyone to call it 'suffering', it seems somewhat pointless to even consider the concept. We are omnivores, and while it is technically possible for humans to survive without any animal protein, it is not the way of nature. Anyone arguing we 'ought' to change should come up with a pretty good reason (at least if they want others to do so - a personal preference is perfectly valid). So most humans, like our ancestors, consume animals, and like our ancestors we need to capture and kill those animals for this purpose.

Now we may want to adopt a moral rule to minimize harm in general (arguably this would be an emergent adaptive property from more fundamental traits of frugality with energy and resources), and hence kill our prey as quickly as possible, rather than torture them unnecessarily - I am fine with that. Whether this is truly a moral obligation that transcends arbitrary man-made prescriptions, I don't know. That would fit within the better angels of our nature you allude to. But attempts at making dogs or cats vegan would definitely cross a line for me.

Ultimately, we can only rely on what we believe is right as our lodestar. This is, for many, a complicated cocktail of our spiritual/ideological choices and upbringing, reasoning, and - inevitably - some kind of magic dust that we are endowed with (we are not a blank slate, not even morally, I think). For example, in my view it is not wrong to slaughter animals for food, but this should be done as 'humanely' as practical. We have a choice, we can make trade-offs, and it is up to us to decide where we draw the line between acceptable and unacceptable.


DJ stands for Daniel James, my first and middle names :)

Yes, I'm pretty much on-board with a lot of what you say, I think. The pre-human 'suffering' issue seems a bit of a distraction though - we can only talk about suffering (and moral concerns generally) in the here and now. Perhaps the 'ought' comes from the point you make about it being technically possible to be vegan - millions, if not billions, of people are, and are fine (and this includes historical cultures) - so if we can do it, why shouldn't we? Isn't the burden on the other side - to argue why we should continue to inflict suffering and death on non-humans when we have the choice not to do so without any harm to us?

I do wish they'd hurry up and make affordable lab-grown meat that's identical to the real thing in taste, nutrients, etc though - that would solve my ethical dilemma!

But yes, I agree, making cats and dogs vegan is both dumb and cruel. Unless they could live more or less normally on a vegan diet, as we can, there's no justification for it - you might as well beat them and go all the way in being cruel!


I wasn't sure whether to just call you Daniel, considering the explicit presence of a middle name! :-)

I do think the entire issue of morality is inherently problematic, though. It is very hard to justify establishing all manner of (moral) obligations and prohibitions without at least implicitly adopting human exceptionalism (since no other species engages in this kind of frivolity).

Now you could argue that the very emergence of morality (and of superstitions and religions) in humans itself justifies exceptionalism. The problem then is that we have developed a framework for moral reasoning, but we do not really have any objective rules with which to populate the framework. People just make up things as they go along, and that is a poor basis on which to establish a general morality.

For me, the only plausible way to approach this is from an evolutionary perspective. Morality evolved as we began to cooperate, and groups with a practical framework that facilitated such cooperation were more successful than those without. (In case you are not familiar with it: https://www.lse.ac.uk/cpnss/research/morality-as-cooperation).

This combines with a more primitive capacity we possess, namely that of working out what is right and wrong - all the way back to the first amoebae: those that could determine what was nutritious and what was poisonous were better at staying alive and dividing into new cells. Step and repeat for the arrival of predation, and once again when sexual reproduction evolved, for finding the right mate to successfully procreate. The ability to distinguish what is right and wrong *for us* is a fundamental feature for evolutionary success. But that doesn't mean it cannot be reused, once a sufficiently advanced intelligence has evolved. So we have repurposed this system to establish what is right and wrong *in general*, and used adherence to or rejection of the rules we thus proclaim to define ingroups and outgroups.

On that basis, the whole idea of suffering becomes, literally, a figment of our imagination - along with morality as a whole.

Wow. I hadn't quite expected to end up here, but that's what happens when you live-type a stream of consciousness. I will now go and reflect on this a little... :-)


Great essay. I agree that there’s nothing fundamentally different in the human animal, and that any difference between us and other animals, whatever trait we consider, is a difference in degree and not in kind. But still, it seems to me that this view can mislead us into minimizing the magnitude of certain differences, particularly those related to our level of consciousness and self-awareness and, consequently, to our capacity for suffering.

A difference in degree may still be of “cosmic” proportions. As an analogy, some animals have demonstrated a basic understanding of numerical concepts, but calculus, for example, is so far beyond the comprehension of even the smartest animal as to be practically in a different realm.

Likewise, because of our much more “advanced” consciousness, I think the suffering of human beings is much greater than that of animals. Human suffering typically goes far beyond mere physical pain: we experience despair, anguish, anticipation of future suffering, awareness of previous suffering, etc. All this intensifies our suffering in a uniquely human way. At an extreme opposite to ours, we can imagine a very basic organism capable of feeling pain but unable to remember it from millisecond to millisecond, let alone to anticipate it in terror. I think it’s fair to say that such an organism would barely be capable of suffering. As to the animals that fall between the two extremes, I would place them closer to that organism than to us. Although perhaps some animals, such as chimps and bonobos, dolphins, and elephants, for example, should be placed closer to us. How close? Hard to know with certainty, of course, but I still very much doubt that even those animals are capable of the intensity of suffering that humans are subject to.


Thanks for another thoughtful comment! Yes, as I briefly mentioned, I think there are myriad ways in which human animals can experience a higher pitch of suffering. But I think the case for non-human suffering as I put it is still very strong. And you're quite right to say that the level of suffering open to non-humans is on a continuum - it's much worse to be cruel to a dog than to swat a fly (unless you're a Jain). It's interesting to consider that this would be the same vis a vis an extremely advanced alien species, whose suffering might 'matter' more than ours in the same way that human suffering can be more intense than non-human terrestrial suffering.

I think the only thing I can conclude is that it's all very complicated! As it should be, I suppose.
