BRETT HALL

Humans and Other Animals

The ethics of eating meat

Preface: In this piece I deliberately repeat myself, a technique I've deployed on other pages. I write this way with an intended audience in mind: those who do not read the things I read but might be interested in some of the central ideas from certain books...without wanting to read the whole book. Simply stating an (often complex) idea once with a note to "go read this book" will convince few. So this is a different strategy: ideas phrased in slightly different ways over and again. Not sentence after sentence, but rather the same argument appearing two or three times over, separated perhaps by many paragraphs, themselves on different themes (as a good paragraph should be). The reason I have been particularly careful to construct this piece in such a way is because it deals with “the hard problem of consciousness”. I remember encountering the word “consciousness” when I first started reading some of the classical philosophical texts, and even though I was taking philosophy as a major part of a university degree, I just could not get my head around what people were even talking about when they used the word. I thought it just meant “awake” or something like that and I didn’t see the mystery. “Fine,” I thought, “I’m conscious now. When I go to sleep, I’m unconscious.” And so I fully understand even now why people just do not “get it”, insofar as there's something there to get. It is both right there, on the surface, as an easy concept, and yet extremely deep. It’s like looking at the sky (at night) and thinking “yep, there’s the sky. No great mystery there. A moon, stars and lots of black” without ever contemplating what is behind those first impressions. There is much, of course. But it takes a bit of understanding. And so, for this reason, I return over and again to the same themes.


Part 1: Animals in Pain


There is increasingly a call from many corners for people to transition towards a vegetarian or even vegan “lifestyle” for reasons primarily to do with the welfare of animals, or “animal rights”. Such rational and critical modern-day enlightenment figures as Sam Harris and Peter Singer have been vocal about this (in Sam's case as a relatively recent convert, while Peter has been a vegan and animal-rights activist for many years now). Michael Shermer, who aligns with Harris on almost all other topics, disagrees (see here), but he seems to do so begrudgingly and admits to a moral failing. Like Harris (before he took the plunge into vegetarianism), the computer scientist and polymath Jaron Lanier likewise conceded he “eats meat, but only as a hypocrite”. My purpose here is to find a way out of the apparent hypocrisy. In their case the hypocrisy is genuine: they believe animals can suffer and so on, and admit even that their own practices cause the suffering. But they choose to do what they see as an evil anyway. This truly is a moral failing. In Harris’ case he bit the bullet: unable to square his meat-eating with the central tenet of his moral realism, which (in his mind) is the “wellbeing of all conscious creatures”, he publicly became a vegetarian on his podcast. Subsequently the almost-never reliable online rag "Salon" published an article which, it would seem, is going to hold Harris to account: http://www.salon.com/2016/01/09/new_atheists_must_become_new_vegans_sam_harris_richard_dawkins_and_the_extra_burden_on_moral_leaders/ (The tone of that article appears to hold all meat eaters to account. Indeed it is holding "world leaders" (no less!) to account. This is one motivation for the present piece: some vegans, and the meme that "meat eating is evil", are in the ascendancy politically as well as morally. A certain kind of animal rights activist is altogether sure they are in a morally superior position and seems quite determined, in a quasi-religious way, to enforce their doctrines - even through legal means.) So there is probably now no going back for Harris. To enter into vegetarianism publicly is to court accolades. To apostatise out? I imagine the backlash would double the hate-mail he must already get on the topic of religion.

I should state upfront that I believe we are relieved of a terrible moral blindness, and it is a sign of progress that, unlike in the Middle Ages, we are not burning animals alive for sport. There is increasingly a belief that gunning down endangered species for no reason but sport is a different "kettle of fish", ethically speaking, to hunting an animal so you can cook and eat it. This is, I think, quite right. But when moral outrage begins to turn into political action, and none of it is being argued for philosophically, there is a real danger we are rushing headlong into irrationality. So let us keep our heads and consider: what's wrong with eating meat?

There are two quite distinct reasons, it would seem, to be vegetarian. One may endorse one or both. The first is purely for so-called “health reasons”. Various studies can be cited and data called on to show that this or that animal product, when consumed in sufficient quantity, is correlated with this or that disease (often of the heart or brain, or cancer), and so some, out of fear, turn to the medicalisation of diet and prescribe themselves much in the way of vegetables. Putting aside that the brilliant Aubrey de Grey - one of the world’s foremost experts on living an extra long life - argues that diet changes and exercise might offer a person a couple of extra years at best, the argument for vegetarianism/veganism from a health perspective is a real one. The argument must of course ignore those many meat eaters who go on to live healthy lives well into their 90s and beyond, as has always been the case, while vegetarians and vegans can still die early. But such people believe they will be part of the “aggregate” - those people about whom the “statistics say” meat eaters have worse “health outcomes”. In this present piece I will turn to the other reason many give for becoming a vegetarian: the welfare of the animals. (I have dealt with diet and exercise, and by logical extension vegetarianism, separately in a post “More or Less: Eat what you like” found here.) Until we really can constrain the multitudinous variables to do with diet and how they might impact your own, personal, quite unique physiology - in an environment of food aplenty and variety unlike any seen before in human history - you're probably doing yourself more damage (mentally if not physically) being concerned about your diet than simply enjoying what you put into your mouth.

Health concerns aside, the much larger group of people who choose to be vegetarian do so primarily because it is obvious, to them, that animals can suffer. How much they can suffer varies, according to different people. Sam Harris wonders, for example, “how far down the phylogenetic tree our concerns should run”. He imagines that while humans are the pinnacle of creatures thus far known in the universe for capacity to suffer, the other great apes must be a close second. Dolphins and dogs, somewhere perhaps a little lower. Rats and chickens? Cows and sheep? They must be able to suffer - and more than fish. And in turn more than insects. And so on. This, on the surface, has the appearance of a logic to it. It might even be said to be scientific. For surely whatever it is about us that enables us to suffer depends upon the complexity of our nervous systems. It is those nerves that transmit pain from the extremities to the brain, where it is interpreted. This brings the moral question of the suffering of animals within the subject of neuroscience, and not philosophy. It is a matter of nerves and brains - and nothing more esoteric or metaphysical than that. For when you look at the nervous systems of animals you see different degrees of complexity, and we know in ourselves that our capacity to experience a given stimulus has to do with whether we have the relevant receptors and the regions in the brain to decode the signals. On this view, the 1:1 correspondence between “potential to have an experience” and the hardware of our nervous systems is all that needs to be understood to appreciate that even if animals do not share identical perceptions of the world with us, their experiences must at a minimum be very, very similar - as similar as the physical anatomy and physiology visually suggest. I think this is wrong. But before we continue we must be careful to distinguish two related, but crucially different, concepts: pain and suffering.

Firstly, “pain”. Pain is somewhat analogous to “blue”. It is a general word for a broad range of experiences (or impressions) that arise from a given stimulus. Pain is detected by “pain receptors” (specialised cells) in our skin (or elsewhere), and blue is detected by cones (specialised cells) in the retina of our eyes. The physiology of how blue light, for example, is transmitted to the brain, where it is interpreted, or experienced, as blue qualia (the subjective experience of blue), is well known up to the point where the objective sequence of events (from photons to chemicals in the retina to electrical signals travelling along nerves to reach the brain) is actually somehow encoded in that brain to produce an experience in the mind. That is: the energy changes (electromagnetic to chemical to electrical) are well understood. But what those electrical signals in the brain are doing to give rise to conscious experience is not at all well understood. It comes under the broad heading of “the hard problem of consciousness”. Why should those photons that hit the retina and are then encoded as electrical signals appear any way at all, let alone the way they do to us? Why does blue “look” blue to you? Philosophy has done much over the years to solve deep and interesting problems and pose new questions: but on this question? We can barely pose it coherently. Sam Harris himself has said of this mystery of consciousness that it is rivalled perhaps only by the question of why there is something and not nothing: “The fact that the universe is illuminated where you stand—that your thoughts and moods and sensations have a qualitative character—is a mystery, exceeded only by the mystery that there should be something rather than nothing in this universe.” from http://www.samharris.org/blog/item/the-mystery-of-consciousness-ii/
Harris admits right there to not knowing what things are conscious. I think he is right to admit this. And by “knowing” I do not mean “infallibly certain of”; I mean “have a good theory to explain”. We can ask people if they are conscious and they will say “yes” (or perhaps “no” in the case of philosopher Daniel Dennett, who wrote the book “Consciousness Explained”, in which he denied the existence of consciousness). This prompted Jaron Lanier to quip that this is how to spot a zombie: they must be a professional philosopher who makes a strong case that they indeed are not conscious. And so Daniel Dennett is the first actual zombie we know about. My guess about qualia is that we have them, and we have them because of our special (and unique) capacity to explain absolutely anything. Whatever it is in the software of our minds that gives us this ability to explain requires that we have qualia (and by extension consciousness). We know we are unique when it comes to being universal knowledge creators (as explained in "The Beginning of Infinity" by David Deutsch) and it may just be the case that this brings with it, of necessity, the ability to experience qualia and be conscious of the world we find ourselves in. It might very well be the case that absent the ability to understand and shape the world through this universal capacity to create knowledge through the creativity of the mind, we would also lack qualia and consciousness. If that is the case, then we have an answer about the potential suffering and consciousness of other creatures. And it would have nothing to do with the nervous system hardware, as most (like Sam Harris) seem to think. But that is untested and not well criticised pure conjecture for now, and absent a theory of consciousness we must be more cautious and grant that animals may indeed experience the world and not merely be "in" the world like rocks.

Consciousness is central to our concern about the possibility that other creatures experience pain. But let us concede for the sake of argument that they do feel what we might term "pain". This would be much like if we knew they were capable of experiencing “blue”. Knowing that another *person* is experiencing “blue” tells us very little about the contents of that experience. Are they seeing the sky? The Blue Mosque in Turkey? A policeman’s shirt? Far more information than that “blue is perceived” would be required to give us an idea of what that experience might be like. We would need context. We would need an entire explanatory theory about what that blue might be like, and it would take us into circles about “can they see shapes?” and “do they understand the relationship between those things?” and “do they realise the sky isn't even a physical thing?” (unlike the ancients, who thought the blue sky was like blue paint on glass - a surface - not knowing that what they saw did not even actually exist).

A botanist may take their young child to a garden. The botanist will see Ash Trees and Tulips and Willows and grasses from Africa. But the child may only see green plants of more or less different shapes. It takes theories to actually even see what is before your eyes: to understand. To appreciate this point, find some mathematics, or perhaps just text in a foreign language online, that you don’t understand. Then consider someone who does understand it: seeing those symbols will conjure in their mind ideas which may even elicit visual images. But to you? A mess of squiggles. You need to know what you are seeing, even if it is before your eyes in plain view. You must have a way of translating the images into something you can understand. This applies if you are looking at green plants, it applies to mathematics and exotic alphabets - it applies to all things in your visual field. You need to interpret to understand. Things even before your eyes are not always obvious. A message may say "Your friend is in the hospital" and mere marks on paper can cause your heart to race and blood pressure to rise. But if that same message were written in a language completely foreign to you: nothing.

And so it is with pain. Even if we have an excellent cause-and-effect explanation of the physiology of how the stimulus we call pain is transmitted to a brain (what might cause it to arise, and what other stimuli - like, say, certain hormones being released into the bloodstream - might accompany it), absent further information we know very little about how the pain might be interpreted. That is to say: even if we can describe the objective physical goings-on in the nervous system, this tells us nothing about the subjective experience of pain. For example: the very same signals might mean a person is exercising (and enjoying “pushing through” the pain or some such), or perhaps a person is partaking in jujitsu and pain is a necessary part of the learning (and so an objective good), or perhaps the person is suffering a heart attack. Or perhaps the person simply doesn’t even understand what that sensation is.

I recall a high school science teacher of mine saying one lesson that should a young-enough infant be unfortunate enough to be left alone with a hot object like a stove, or similar, they may, if they touch said object, burn themselves far more severely than an adult would. This is because the young child needs to learn what the pain actually feels like before the “reflex” action is itself learned. The first time touching an exceedingly hot object, so this teacher argued, a person will leave their hand on it extra long in comparison to other people, while the signal travels all the way to the brain, where it is interpreted, and then back to the hand to move it away from damage. But every time after this first time, because the person (the software encoded in their nervous system) has learned the sensation, it will not bother to send the signal all the way to the brain first but rather just to a closer bundle of nerves in the spinal cord, which transmits the “move hand” signal much faster. Because it has less distance to travel, less time is spent on the hot object and less damage is done. The pain sensation comes after the move-reflex happens. Now I can’t find any references about whether this is true (I would love to know) but it seems entirely plausible to me that we are born with a limited number of reflexes and in-born ideas, and many of even our most apparently “obvious” sensations, like the feeling of a damaging burn, have to be learnt. As I recall personally, as a child becoming sunburnt was not immediately unpleasant. But I have now learned to associate a particular sensation of strong sun in the Australian summer with damage, and it hurts if I am out for too long. A feeling I just never noticed as a child (and subsequently I would end up terribly red and burned *the next day*). (Yes, I keep a close eye on my skin now for the telltale signs of cancer.)
My point here: having never learned that the sensation actually indicated damage, I didn’t feel pain until later, when I could see I’d turned red. And then it hurt. But I was also slow to learn. Now I would *feel* the burn happening in close to real time (although I don’t, because I avoid the Australian sun).

But not all pain is bad, as I have suggested already. Pain is a stimulus. It must be interpreted. The question before us now is: what is the valence of that “pain” stimulus? Is the question even meaningful? Is pain, in and of itself, an evil? Well, we have already seen that in the case of, say, exercise, pain may be interpreted as good, and in training in martial arts it is an essential part of the learning. So it can be good.

But then why do we have that other word, “pleasure”? It too is a stimulus. And yet it is difficult to imagine a sense in which its valence can be *negative*. I propose that “pain” is too broad a term to be useful when discussing the morality of whether animals experience it or not. There is indeed “pain which is good” (which is not suffering) and “pain which is evil” (which is). And there might be pain which is neutral: pain which suggests action is needed (which is good) but is nonetheless a discomfort (which is bad). I think here of something like: having remained seated in a particular position for too long (you’re at a torturous 3-hour self-indulgent Tarantino movie) you feel the “pain” sensation of needing to shift in your seat. The feeling was one of discomfort, but you “obey” the sensation and move, and the problem is solved. So “pain” is a term labelling such a spectrum of sensations as to be almost useless for the purpose, and yet the idea that “animals can feel pain” is one deployed in defence of not eating meat, so that we do not “inflict pain” upon animals. And yet even if we did inflict pain on animals, just as we “inflict” pain upon ourselves, it is not obvious that would be an evil. And I note here that “inflict” does as much work in the sentence as the word "pain" does; it carries all the negative connotations that imply the pain is, prima facie, an evil. You do not “inflict pleasure” upon someone.

The pain experience in a human being arises in the mind. The mind is the software that runs on the hardware of the brain. And that software is unique, so far as we can tell, within the universe. What it is, unlike all other software (like that running on your computer), is a universal explainer. People can explain things (unlike other computers that we know about). This software is very special and we know next to nothing about what the code must be like (if we did, we would be able to program computers to explain stuff, which means we would already have artificial general intelligence). So while the hardware of our brains and nervous systems is indeed similar to that of other great apes and mammals and animals, the software is wildly different. And we know it’s different in the same way we know the hardware is so similar: we simply observe the physical manifestations of both in our world. In the case of the hardware - the nerves and brain - the whole physiology looks similar from human to chimpanzee and so on. But the software? Well, look at what our minds have accomplished compared to what the minds of other animals have accomplished: the entirety of our civilisation, our technology, our cities, our art, our science. All of this makes a vast difference between human and non-human animal minds even if the brains are similar. Why would we not expect all the "contents of our consciousness" to be equally and dramatically different? This complete categorical difference in quality between our minds and the minds of other animals manifests itself in the knowledge we have created and the way we shape our reality. Reality shapes us far less now than we shape it - unlike for other animals, where it is quite the opposite. Their environment utterly controls them. But not us. It is plausible that this very same software confers on us experiences of that same reality that have no analogue in our closest animal relatives.
They know nothing of epistemology and knowledge (though at times it might *seem* they understand things: my cat looks at me with a "knowing look" at times, but seeming to understand what I say is certainly not understanding what I say).

It is thus plausible that when we use words that other humans understand, like “pain” and “pleasure” and “blue” and so on, there is just nothing like that encoded in the minds of other animals. Although superficially we might project upon an animal sensations that we feel, this might be no more reliable than projecting onto beavers an understanding of civil engineering principles because they fit logs together to make a dam. In one case it is genuine understanding that creates something like the Three Gorges Dam in China; in the other, natural selection of genes has created instincts in beavers, who just do what they do without knowing why. To quote Jaron Lanier (who, to be fair, was talking about chickens, not beavers), it is as if they are mere "servo-controlled automatons". Which is to say: there is no internal subjective experience there at all. (I don't want to misrepresent Lanier, who is quite clear in a similar context that he thinks many animals are conscious - especially cephalopods (squids, etc).) But Lanier, like Harris and others, does not appreciate that between humans and non-human animals it is a difference not just in quantity of knowledge, but in quality of experience. And this stems from our qualitatively different capacity as universal knowledge creators.

I wish to draw three preliminary conclusions here: 

  1. The relationship between objective physical reality and our internal subjective experience of that reality is not well understood. This is the “hard problem of consciousness”: how and why things “seem” any way at all to us. Absent a workable theory, we cannot know if animals experience the world like us, if indeed at all. Does your friend see the same “blue” that you see? We do not know, but we guess it is at least similar unless they are colour blind. Perhaps animals cannot feel pain. Perhaps! (Yes, I equivocate!) 
  2. Not all pain is an evil. Some is good. Some is neutral. Merely knowing that something is experiencing pain does not tell us what the quality of that pain is like, much less what they interpret it to mean, if they are able to "understand" it at all.
  3. The human mind is a unique structure in the universe, able to explain the universe in which it finds itself with increasing fidelity. Being so different to that of other animals (even if the gross anatomy might be similar in some big-brained species) we can only imagine how different the subjective experience of other animals might be. It might be that we alone are conscious: and that consciousness is an expression of our universal ability to explain the universe.

Part 2: Animals Suffering

Some may well reject all of the preceding section and simply argue, almost as though it were a premise: animals do feel pain. It is, to them, as obvious as that animals breathe. I think this is mistaken, but I will concede the point to move on. Let us admit: animals experience something; let us call it pain, and assume that the "impression" is similar to what a person might instantly feel. Yet another hurdle is now before us: the hurdle of suffering.

Suffering is not identical to pain, even though some people seem to use them as synonyms. The two are, however, related. Suffering is a word that seems to capture the “bad kind” of pain. Indeed the word suffering is required because of the very problem I identified earlier: the word “pain” itself describes a spectrum of possible sensations - some good, some bad, some neutral - and we actually have a word for the “bad” kind: suffering. But to distinguish between one kind of pain and another, we need to interpret not just the sensation of pain, but the deeper meaning of the pain (the implications of the pain). People who exercise hard know this well. If, the day after a strong session at the gym, your bicep hurts, then whether you suffer all turns on what you know about the pain - what you believe the implications to be. If you have never felt this pain before you may well interpret it as an actual injury. You might even visit the doctor if you are that ignorant of what is happening. On the other hand you might, almost gleefully, relish the pain: you realise it’s a sign you’ve worked hard to achieve exactly what you wanted - a workout where you've expended all your energies. Or you might accurately interpret it as an actual pulled muscle, or a tear. In that case it is a sign that you won’t be working out that muscle for weeks to come, as you have done terrible damage. And these sensations feel similar if not identical (especially in the mind of someone inexperienced with the pain of extremely hard exercise). So first you have a sensation. You then interpret it (it’s pain, you figure). Then you interpret it again (oh, it’s the good sort: I worked out hard. Or: oh no, it’s the bad sort: I’ve actually torn that bicep). And THEN you figure out if you’re suffering or not. If it’s torn: you suffer. 
If you figure: it’s exactly what I expect before I get stronger with bigger muscles, because that’s what I want - then you do the opposite of suffer: you feel extra good, pride in accomplishment. All of this happens in an amount of time not much more than an instant. If you watch the documentary “Pumping Iron” you can see Arnold Schwarzenegger describe a certain kind of pain in the bicep in a way few expect: https://www.youtube.com/watch?v=RuueLiEsWGs The “pump” sensation most “gym-junkies” will describe as painful - certainly the exercise preceding the pain is. Some will never push through the pain to know what Arnold is talking about. Some will interpret the pain as suffering. Indeed most will - and we know this because if Arnold is speaking honestly (and I’ve no reason to doubt him) then the same stimulus provides wildly different interpretations. One person suffers, while the other experiences exquisite pleasure. Everything about the sensation turns on interpretations. If two humans can experience the same stimulus in wildly different ways, then how much more differently might two different species? (We must also admit here that another layer of complexity supervenes above all this: animal physiology is a complex system of feedback loops. Pain does indeed cause pain-relief chemicals to be released in the body and brain. Some theories would have it that the "runner's high" experienced after exerting oneself beyond a certain threshold releases endorphins, and other kinds of "bad pain" might become addictive if natural opiates in the brain are released frequently enough. But that, for my purposes here, only adds to the argument: pain of itself is not an evil. And an animal that feels pain might very well have well-adapted mechanisms for converting pain into pleasure through things such as endorphins and opiates.)

When pain is interpreted as an evil by a person, this might be because the person recognises (for example) that it is causing damage to the body for no good purpose. And so the person suffers. So with the bodybuilding example: take a person who has no interest whatsoever in being stronger or getting bigger muscles. If *they* lift weights to the point of failure and their muscles “pump up”, they will suffer. They will feel pain and see no good reason for it. And their muscles really are damaged, at the level of the cells - that is why they grow. For a person who does not want such damage (because, say, they are a concert pianist and require their arms to not feel pain) it could be terrible indeed. But Arnold does not experience this: the pain indicates, to him, something wonderful: he’s worked hard, and is getting bigger and stronger.

But not only physical sensations cause suffering. You don’t need to be in physical pain: there are mental events too. When a person is repeatedly frustrated at being unable to solve some ongoing problem, they may be suffering. A person might want to complete a project but other events keep intruding. They may have a desire to finish a book, but their office work is busy. They suffer as they are torn between priorities, unable to decide how best to allocate their time and solve their problems. This may be one of the primary sources of human suffering: people with too few hours in the day and too little energy to achieve all they wish. People in our world suffer at seeming not to be all they could be. They suffer at opportunities they feel are lost. And none of this need have a physical sensation associated with it. In many ways “it’s all in the head” - a result largely of internal ideas. Such suffering is not something any animal is capable of. Cows are not concerned that eating this patch of grass means they have missed the chance to munch on those buttercup flowers over the fence. The lion does not feel a pang of regret while it dines on a wildebeest only to see a fat and lumbering bison stroll past. Animals are, it seems, caught in the present moment, like perpetually enlightened Buddhists. They appear neither concerned about the future nor regretful of the past.

Human suffering requires that a person be able to come up with some explanation as to why they are in pain (mental or physical). We know humans can suffer. A human will be able to give an account of their present physical and/or mental state and relate it to some expected future state. They explain their suffering: “this pain I’ve been having, the doctor says, is a tumour. And now I have cancer which could be life threatening.” So that is suffering: the present pain indicates that the pain will continue and is serious in such a way as to cause ongoing unhappiness (to put it mildly). In this sense, I argue, it is not clear animals suffer. Insofar as they have thoughts of the future or past, they are not of the “hoping” or “regretting” kind. And yet even the simplest animals seem to have memories not necessarily encoded purely in their genes. The goldfish is a good example: wrongly thought to have a poor memory, any goldfish owner knows that goldfish reliably return to the same corner of a tank or aquarium to be fed. And you can experiment with changing the corner. It takes some days to retrain the goldfish, but as you walk towards the tank with the food they will gather in (for example) the front left corner and wait. If you walk away, they swim away. If you feed them in the back right corner instead one day, they will still go to the usual front left corner, but gradually, over some days, if you repeat the new pattern, they learn that that is where the food now appears. So they associate events in the past with events in the future. But whether, if you walk away having not fed them, they feel something as complex as “regret”, I doubt. I doubt they feel anything like that emotion. And whether, when they notice you walking towards them, they feel “hope”, I also doubt. I sense this is more like an automatic door that opens as you approach. It neither hopes you go through, nor regrets it if you do not.
It is simply detecting certain flashes of light and reacting. But goldfish have brains that humans can program from afar. 
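The retraining dynamic described above can be caricatured as a simple associative-learning update. This is a toy model of my own for illustration only - the corner names, learning rate, and update rule are assumptions, not claims about real goldfish neurology:

```python
# Toy model of goldfish retraining: each tank corner carries an association
# strength that is nudged towards the observed reward after each feeding.
# The learning rate (0.15) and corner names are illustrative assumptions.

def feed(strengths, corner, rate=0.15):
    """Update association strengths after one feeding at `corner`."""
    for c in strengths:
        reward = 1.0 if c == corner else 0.0
        strengths[c] += rate * (reward - strengths[c])  # move towards reward
    return strengths

def preferred_corner(strengths):
    """The fish gathers wherever the association is currently strongest."""
    return max(strengths, key=strengths.get)

# Habituated to the front left corner; now feed in the back right for a week.
strengths = {"front-left": 0.9, "back-right": 0.0}
days = []
for day in range(7):
    feed(strengths, "back-right")
    days.append(preferred_corner(strengths))

print(days)  # the preference shifts to the new corner only after repeated feedings
```

The point of the sketch is that nothing resembling “hope” or “regret” appears anywhere in it: a few numbers drift towards a reward, and behaviour follows the largest number, much as the automatic door follows its light sensor.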

Part 3: What about the Science?

Let us turn now to consider the studies that suggest an animal can suffer. There are a number of these, of varying quality, and I will not bother to cite any of them since they all seem to have particular philosophical errors in common underpinning the conclusions they try to reach. I have already suggested that there is enough philosophical difficulty with even asserting that animals can feel pain, but these studies often load the deck against critical analysis of the results by deeming some physiological response “suffering” - and yet, as I argued in the previous section, suffering is a complex idea likely unique to human cognition. (The Salon article marshals, in defence of its strong claim that "animals have the capacity to suffer", nothing more than a statement signed by some scientists (so an appeal to authority, not an actual paper) outlining their "declaration" that animals have consciousness "of near human-like levels". See here. Analytical philosophers may laugh at this silliness, but such ideas now have political force, and scientists such as that so-called "Cambridge Group" seem more and more keen to make such proclamations with almost no caveats.) I will explain more, in this section, about why the unique capacities of the human mind are morally relevant to the question of animal suffering in yet another way. But broadly: a study purporting to show that animals do indeed experience “suffering” or “pain” is exactly like a study which purports to show that a dog can see the colour blue (or that it can’t). No experiment can settle the matter. This is not to say there is no fact of the matter; there must be. It is just that experimental science (in the form of neuroscience or whatever) is not the right instrument to probe the question. The dog might, for example, have cones (colour-sensitive cells) in its retina that are sensitive to blue wavelengths. But can a dog see blue?
Does an automatic door that detects light see you? What does "see" mean here? These are not scientific questions but rather questions we need to define properly before we attempt to conduct experiments (at least for now, while we lack a theory of what consciousness is). They are, importantly, questions of philosophy, and not all opinions are equal here.

To recap: we can conjecture that a dog sees the colour blue - or sees the colour blue as we do (or not) - but we do not know enough about the relationship between the objective world (blue light, chemicals in the retina, nerve signals, etc.) and the subjective world (the conscious experience of qualia) to have a good explanation of what a dog *actually* sees. That is to say: we don’t know what a dog sees. We have some guesses, but it is not at all clear just what the relationship between a subject’s *knowledge* and their *experience* of the world might be. Perhaps, without ever being told what “blue” actually is, the dog’s mind has an interpretation of the world wildly different to ours, even if the physiology of the eye and the nerves is not much different. Put it this way: the software running on the hardware (that is to say, the mind which runs on the brain) is so very different in a person compared to a dog as to admit of no good guesses about the experience of dogs. I have been through this in the discussion of pain above, and I believe it exposes the vast philosophical leaps made in these "scientific" studies. What the studies all share is this: some animal’s nervous system (or broader physiology and anatomy) is similar in some way to a human's. The study makes the philosophical assumption that this means the animal must experience *pain* in the same way people do, and then makes the further philosophical assumption that therefore animals suffer. But neither of those assumptions is at all obviously reasonable. They are simply assertions: not argued for, but simply believed. They are premises, not conclusions (the premise being: certain physical processes are identical to certain mental ones - a particular sequence of nerve firings really, truly is the mental state called suffering). And so in many ways such studies are pure circular reasoning. Assume first that the evil of suffering is any type of pain.
Assume that the subjective experience of pain is identical to the physiological processes in a nervous system. Find those physiological processes in some animal. QED.

The philosopher Ludwig Wittgenstein famously remarked that “If a lion could speak, we could not understand him”. He did not mean that the lion could not speak English: he meant that the internal workings of the *mind* of the lion may be so far removed from our own as to have no analogue that could be captured by our vocabulary. Another philosopher, Thomas Nagel, asked “What is it like to be a bat?” when considering the relationship between objective physiological processes and the internal, subjective, conscious experience of a thing. Nagel gave no final, strong answer about bats and animals broadly, but made the famous claim that if there is something it is like to be a bat, then the bat is conscious. If not, then a bat is not conscious. And that is what consciousness is: whether there is “something it is like” to be the thing. There is nothing “it is like” to be a cup of coffee - so cups of coffee are not conscious.


And yet let us assume, along with almost everyone else on the planet, that there is something it is like to be a bat or a lion. If we take Wittgenstein seriously, then even granting that, a lion’s experience is so far removed from our own that perhaps words like “pain” and “pleasure” - let alone something far more emergent like “suffering” - simply do not apply. A lion is altogether a mental world away from us. And we already know it must be, in one sense at least: our “world” - our civilization - is under our control because we understand it more and more as we create knowledge about it. The world of the lion entirely controls the lion. We explain things. The lion does not. It should be no great surprise if the differences go all the way down to what might seem "clear and distinct" perceptions, to use Descartes’ formulation for that which is most obvious to us. And yet nothing, it turns out, is obvious. We must learn almost everything we eventually come to know. What blue is. What bad pain feels like.

We should take seriously the claim that suffering arises out of the capacity to form explanatory knowledge about the world. Absent an explanation, you won’t suffer. What can this mean? If you experience a very sharp pain, everything turns on how you interpret that pain: what explanation you believe. If you find yourself in a gym lifting weights and you “feel the burn” on the 30th repetition (rep) [you’re going light because, well, let’s face it, you’re getting older] then the sharp pain is an excellent sign you’re pushing yourself hard. The pain, as painful as it is, is good. Indeed you feel great about it: you want more. The only reason you stop is that your muscles give out, not your mind (which was quite enjoying the pain). On the other hand, if you’re only on rep 3 and feel a pain similar to what you'd normally feel on the 30th, in an instant you guess: you’ve torn a muscle. Now you’re suffering as you quickly run through how long this pain will last, how you will be unable to do your usual tasks like carrying the groceries home, whatever. It’s bad. Or perhaps one day, walking down the street, you hear a “bang” and feel a pain and see blood and a man with a gun running away. Now you’re really suffering as fear grips you. And perhaps the shock takes the pain away (so many shooting victims claim). But you are definitely suffering. Or perhaps you hear a bang and feel a pain and turn to see your friend, who has snuck up behind you to slap you on the back. The pain remains, but the laughter following turns it to delight. All of this is to say: pain, alone, without interpretation, is not bad.


But suffering is. 


And suffering requires you to make some assessment of the causes of the pain.


So to animals. They experience pain, is my guess. But this is only a guess. And I'd further guess that whatever it "feels" like to them is not what it feels like to us. We interpret all our sensations with knowledge, and our minds mould how those sensations appear in our consciousness. I guess that some sensations are amplified from when we were children (we complain more about the same sensation we had as a child) and some are dulled (we more easily tolerate silence, say). As I said: whether animals actually do feel pain can only be known once we have an actual theory of consciousness. That theory may or may not link physical processes and abstract mental ones. It may require a “third way” in which physical brains and mental minds are mediated by some third thing (though I doubt that). My guess, like that of most who think about this, is that consciousness has something to do with the complexity of the brain and the software running on it. But it is possible to be wrong even about this much. I would not be surprised if philosophers like Galen Strawson were correct: consciousness is in everything, and so even electrons have some rudimentary form of it which, when matter is assembled in certain ways, causes more and more complex forms of consciousness to “emerge” from the physical substrate. I do not know and would not be surprised by any of this - but then, I would not be surprised if I am completely surprised by something completely unexpected (which is to say: the mystery of consciousness is explained, or explained away, by a theory completely unimagined so far by anyone who has ever thought about it. Somewhat like a "dark energy" of consciousness: something totally unexpected and counter to all prevailing views).

I guess animals experience pain, but I also concede in saying this that though they experience something (we call it pain), it might not be like what we experience as pain - at all. The word “pain” in the case of animals could just be a useful placeholder for “sensation that indicates damage” or some such. Which is to say: it might not be unpleasant. Because unpleasantness itself requires knowledge of a human sort, perhaps. I don’t know. But my guess is it is not too far removed - which is to say, it is probably not going to be pleasurable, given how similar the physiological reactions of humans and other animals are (screaming, face contortion, attempting to get away from the pain when it is causing damage). Why would *humans* do these things? Deeply ingrained genetic knowledge? I think so: and if so, we share that with animals. The sensation itself, pain of various types, might be encoded in our genes as well as theirs. But, and I stress this again, even if true this does not mean animals suffer.

Suffering, I have argued, has something to do with our ability to explain the world around us: to explain the pain as bad. But perhaps explaining pain as bad is not the only way to experience pain as unpleasant. Can an animal interpret pain as unpleasant and unwanted? Unwanted, certainly, it seems: an animal will flee pain if given the chance, as any pet owner who has attempted to have their animal vaccinated at the vet can attest. But this is a selection effect: not all animals always flee pain. Animals will fight, and so instinct tells them that some things are sometimes “worth” experiencing pain for.

An animal who goes away after such an encounter to “lick its wounds” may be in pain. But is it mulling over the bad decisions it has made? Is it concerned that the injury will prevent it from getting more food or mating? No. An animal cannot explain its vision of the world whatsoever, even to itself. It merely acts on instinct. And that “licking of its wounds” serves an important purpose: to lessen the chance of infection. 

But a person? A person is a universal explainer. It is this unique capacity that enables them to explain any pain they experience (that's the "universal" bit) - though this does not mean they are correct about their reasons why - and so they can suffer, or not, as a result of pain. If a person does not know why they are in pain, they can still guess. And they will guess. And the guess will result in suffering, or not, to a greater or lesser degree. Many people know someone, or know someone who knows someone, who mistook heartburn (relatively harmless acid reflux) for a heart attack. Or vice versa. Both “feel” similar. But one can kill you while the other might just disrupt your dessert. If you suffer all the way to the emergency room only to be told by a kind and wise doctor “you’ve just eaten too much meat too quickly. It’s your oesophagus, silly, not your heart” then the pain probably becomes a rather funny relief. It might even be pleasurable for the journey home.

But non-human animals are not universal explainers. So they may experience pain and not suffer (the latter requires an explanation that the pain is bad for some reason, recall).  The capacity of a thing to suffer could be as stark as the capacity of that thing to construct explanatory knowledge. Which is to say: a binary distinction. Either it can or it cannot do such a thing. If this is the case then we have a clear answer: animals besides humans cannot create explanatory knowledge and therefore cannot suffer. This is a conclusion, and not a premise, that follows from the few simple premises outlined about the nature of humans and what suffering is as distinct from pain.

But let us turn now to explore another possibility: perhaps there is a far lower, rudimentary capacity of animals to create inexplicit knowledge that may provide them with the capacity to feel unpleasant sensations that are an evil? Cat owners know that the cat knows when the food is coming. The owner need only pick up the can opener. This so-called “Pavlovian” response is in one sense encoded in the genes. But can openers are not. So the cat learns something about the world that was not passed on to it by its parents. It has associated a thing (the sight or sound of a can opener) with what comes next (the appearance of food). Pavlovian responses like this have their antecedents in the genes, but the genes do not obviously contain the totality of the knowledge required for the apparent cause-and-effect relationship constructed in the mind of the animal. And yet, as we will see presently, it may be that all of the knowledge needed to form the “cause and effect” relationship in the mind of the cat is in the genes after all.

While there is most certainly a qualitative difference between people and animals, there appears to be some kind of rudimentary ability to construct knowledge-not-in-the-genes. How do dogs learn tricks, after all? This “aping” is not the genuine construction of explanations which is the necessary precondition for suffering - or so it would seem. Or is it? Is my cat able to explain (give a cause-and-effect account, satisfactory to itself, linking) the appearance of the can opener and the appearance of the food? Is my cat understanding (in any human-like sense) the relationship between a tool used by a human and the satisfaction of its hunger? How can the cat’s response not be explained by the existence of an inexplicit causal relationship instantiated in the mind of the cat that is not encoded in its genes (at least not in full)? This is not mere “aping”, is it? It may be. There is a reasonable explanation that cats and all other non-human animals display apparent understanding only because of "behaviour parsing", to which I turn now.

The physiology of the animal nervous system seems to be close to ours. There seems to be an increase in complexity through the “phylogenetic tree”, to use Sam Harris’ phrase. But does this mean there is a gradual awakening of consciousness as we climb higher branches of that tree? As brains get bigger and nervous systems more complex, must we expect richer experiences? That seems obvious, doesn’t it? Brains increase in complexity as a mammal grows larger, and especially as the brain grows larger with respect to the rest of the body. Some animals do indeed seem “cleverer” than others. So can’t cats explain their world - if not as well as us, then in a way like us, only lesser? Not necessarily. In “The Beginning of Infinity”, Chapter 16, “The Evolution of Creativity”, David Deutsch summarises part of the animal behaviourist Richard Byrne’s research, in which Byrne conjectures that a process he calls “behaviour parsing” can explain how complex mammals (like apes) solve simple problems. Although not all stages of a task are encoded in the genes, an animal has an inbuilt library (so to speak) of behaviours it *can* imitate, which are encoded in its genes. So if a complex series of behaviours is required which, taken together, the animal has no hope of imitating without an explanation of what it is doing, the animal must break the sequence down into chunks - the individual chunks being the bits it can imitate. For example, if an ape watches a human (or another ape) crack open a nut, it cannot get it the first time. But it knows the nut is edible, as it sees the human eat it. The ape takes a nut and eats it. And spits it out: it does not understand that the nut needs cracking first. It sees the human hit the nut on a rock. So it imitates that and then tries to eat the nut. Again, it spits it out: it has not yet figured out that the nut must crack.
But somewhere, somehow, its genes code for “cracking” (maybe for breaking branches off trees more generally, or breaking anything needed to access food and so forth - it is a "general purpose" behaviour). So it bashes harder until the nut cracks. Now the ape gets a nut, hits the nut, cracks the nut, eats the nut. But this is just a linear sequence to get the whole result. The ape does not *understand* that the hitting causes the cracking which permits the eating. It is “aping”. It is just following the recipe. *If* nut cracking involved, at any point, an activity not encoded in the genes (say it involved *cooking* the nut) the ape would have almost no hope. Say the nut was cooked by a human on a hot stove first, and the ape watched this enough times to “get it”. Now suppose the stove was replaced by a microwave in which the human repeatedly cooked soup. Could the ape transfer the knowledge from one situation to another? I don’t know, but I doubt it. I don’t think the ape would grasp the common feature between stove and microwave. I think the ape would associate the stove with the nut and the microwave with soup.
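The chunking idea above can be sketched in a few lines of code. This is my own toy illustration of the logic, not Byrne's or Deutsch's formulation: the repertoire and action names are invented for the example.

```python
# A minimal sketch of "behaviour parsing" as summarised above: an animal can
# reproduce a demonstrated sequence only by matching each chunk against an
# innate repertoire of imitable behaviours. The repertoire and action names
# here are illustrative assumptions, not real ethology.

INNATE_REPERTOIRE = {"pick-up", "strike", "strike-harder", "eat", "spit-out"}

def parse_behaviour(demonstrated):
    """Chunk a demonstrated sequence into innately imitable actions.

    Returns the learned linear recipe, or None if any chunk has no innate
    counterpart. Note there is no model of *why* the steps work: no causal
    link from striking to cracking to eating, just matchable chunks in order.
    """
    learned = []
    for action in demonstrated:
        if action not in INNATE_REPERTOIRE:
            return None          # e.g. "cook-on-stove": no hope of imitation
        learned.append(action)
    return learned

nut_cracking = ["pick-up", "strike", "strike-harder", "eat"]
nut_roasting = ["pick-up", "cook-on-stove", "eat"]

print(parse_behaviour(nut_cracking))  # a full recipe, imitable chunk by chunk
print(parse_behaviour(nut_roasting))  # None: cooking is not in the repertoire
```

The design point the sketch makes is the binary one in the text: a recipe either decomposes entirely into innate chunks or it is unreachable; there is no partial, explanatory understanding that could bridge the stove and the microwave.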

But a human - a child - could figure out that heat was the common factor to both, is my guess. The human can form an explanation, because the human is qualitatively different. And suffering is a type of explanation: an account of why something is painful or undesirable. Humans are different to other animals in this morally crucial way. It is not clear animals can suffer in the same way that humans can. There is a difference.

Part 4. And yet…

A simulation of a bullet is not a bullet. There is a clear difference between a simulation of a bullet (or even a whole virtual universe) and an actual physical bullet (or whole physical universe.) The *matter* that makes it up. The matter is a real thing that, however well simulated, is not *actually replicated* inside of the computer doing the simulating. It is possible that something is lost. The physical world is, in some sense, logically prior or metaphysically prior to our knowledge of the abstract laws (or even the abstract laws themselves) which describe it and which might be approximated in computer code to simulate a virtual world. The lesson of the universality of computation, to paraphrase David Deutsch, is not that the universe is a computer - or software running on hardware outside this universe. It is that we can create computers within our own universe that can simulate anything within the universe - including the entire universe. But simulated things are not the same as physical things. Even if we had a universal quantum computer simulating something like the most high fidelity model of water flowing out of a hose to study turbulence, it would not *be* water flowing out of a hose. The physical world is not inside a computer. And this raises a further question with respect to animals and what they experience (if anything).

I think it is the case that, for example, computer game characters experience nothing. They have no “internal subjective experience”. Almost by definition they do not feel pain. Unless “pain” is encoded into the program, it will not appear in the program by magic. So if I am playing a Star Wars game as a stormtrooper and I shoot the Han Solo character in the leg, the character of Han Solo may (eventually) die. But it is not feeling pain even if it acts as if it is. It might run or limp away from danger. It might even scream in apparent pain. This is all very much like what an animal does. But just as a simulation of a bullet is not a bullet, a simulation of pain is not pain. A computer game character might behave *like* a real animal, but only for the same reason a simulated bullet acts like a bullet: it is a simulation. Real bullets are, well, real: they are made of atoms. There is a qualitative difference that is as stark as any other in this universe. And I guess the same is true of “pain” as it *appears* in computer simulations and “pain” as the sensation actually experienced by an animal. Neither actual matter nor actual qualia can be replicated in a computer simulation. I don’t know what an animal feels, but I guess it feels something. In the case of the computer character, I guess it feels nothing. To feel something, there needs to be an algorithm describing the internal subjective state, not merely the outward behaviour. There is nothing “it is like” to be a computer game character. I think one day there will be: when we can program a conscious creature. But we have not done that yet. I further guess that will not happen until we have artificial general intelligence. And I do not see any progress on that front, as I describe *here*. (It is important to note that a virtual AGI would be a real person, because in either case - virtual or embodied - they would be abstract, universal knowledge creators.
And once an AGI is “inside” (say) a silicon computer, then the AGI’s *body* is the physical atoms of the computer and its mind is the software running on that computer. The superficial difference that our minds run on the “wetware” of the brain makes not one jot of difference: we and a silicon AGI would be identical as persons, both universal knowledge creators capable of the whole spectrum of thoughts that all people can have - including suffering. This also means: you won’t be able to have AGI computer characters you can shoot - unless they choose to allow it, and it is not fatal, and so on.)

So the argument that some animals are basically like robots - which would mean they are basically like computer game characters - is not reasonable. We know the algorithms used to code computer game characters. The games make the characters seem *as if* they are suffering, just like cartoon characters seem as if they are suffering. But I know that when the anvil lands on Wile E. Coyote as he chases the Road Runner, nothing there has experienced anything - except me, the person watching. We can program a computer to simulate a bullet, but it is not a bullet. We can program a computer to simulate a cat, but it is not going to be a cat. Something will be missing. The physical stuff - the atoms - is missing, yes: but there may be more besides. What exactly? In the case of the cat, perhaps whatever it is that gives rise to consciousness, which we simply do not know enough about. Again I will repeat a theme in a different way:

What the relationship is between consciousness and the physical world is not known. This is, in philosophy, known as a “phenomenological” problem. That is to say: how does consciousness manifest itself? How does mere “raw awareness” (to use Sam Harris’ pared-down definition of consciousness) arise? Is a light-sensitive door which opens “aware” of you as you walk towards it? In some sense it certainly is, and yet the entire physical description of how that process works - from the breaking of the beam of infra-red light, through the processing of the data, to the opening of the door - at no point refers to any property as complicated or emergent as “awareness”, let alone “consciousness”. Perhaps that is because a description of that process just is a description of awareness. Or perhaps we simply do not understand awareness.

We must admit when all answers to a problem are, as yet, unsatisfactory. In many situations this might admit of a pragmatic solution: just do what works. So, for example, we may not know the best way to make a rocket get human beings safely to Mars. But if we wait for a good solution, delays may mean that no action is taken in a reasonable time. And so we do something pragmatic: we adopt a solution that may contain problems we do not yet have answers to and which might be life threatening - but some people may be willing to bear those risks. And that would be on them: fully cognisant of the fact that the “rocket solution” which will take them to Mars has flaws, well known and as yet unsolved, but perhaps not fatal in their design. Perhaps simply “risky”. And so it may be with animals: what is the risk associated with adopting a pragmatic solution to the problem of “Can animals suffer?”? It would seem to me that animals do not suffer, because that would require an appreciation of explanatory knowledge of the type explained earlier - which animals, humans aside, manifestly do not possess. But they may experience pain, or something like pain. And it may be that they experience a type of pain which, although we do not call it “suffering”, is nonetheless an evil: a problem that the animal itself does not want, “unpleasant”, and in some cases better if it were not present in the world. This is especially true where people are actually involved. Some people genuinely do suffer if they see an animal in pain - especially their pets. Now these people may be wrong, in some epistemological sense, for doing so (that we may discover in some distant future - though I doubt it), but for now they must be included in our moral calculus. Mere ignorance is not a reason to dismiss the suffering of a person. We do not think, when blasphemy occurs, “Oh, you should not suffer - your god does not even exist.” We simply do not blaspheme around such people.
Likewise we should not say “Oh, you’re silly for worrying. Animals can’t feel pain” and make animals feel pain in front of these people. Especially because: we might actually be wrong. So, where we can avoid it, we should not cause pain to animals that some people might see and interpret as the suffering of animals. Because those *people* then suffer.

Conclusions

The ability to feel pain (or anything else for that matter) is just a special case of the broader question: are animals conscious? Which is Nagel’s question again: what is it like to be a bat? Perhaps there is nothing it is like to be a bat. We won’t know until we understand consciousness. Perhaps - and I am inclined to agree with a possibility raised in The Beginning of Infinity - consciousness, creativity, intelligence, universal knowledge creation: these all come along in one jump. These jumps to universality, described by Deutsch, are momentous and surprising in how different a process is before the jump compared to after (hence the word “jump”, to convey that break in a continuum). I think it is quite plausible, therefore, that animals are not conscious. But I am in no way committed to that position. Animals may be more-or-less sophisticated automata with basic “trial and error” and mimicking algorithms. My little BB8 robot has many of the characteristics of an animal. But we know its code, and we know that nowhere in that code is anything as mysterious as consciousness. But perhaps it is conscious. In which case we may be forced to agree with Galen Strawson that there is a fundamental consciousness in all things (so tables and electrons are conscious too). But this would mean that destroying a table causes the table to “experience” something, internal to itself or its atoms. It “feels”. Or, if the table does not feel, and molecules do not suffer when they break into constituent atoms (or combine), then we are returned, full force, to the claim that even if animals are conscious we have no way of knowing whether they *suffer*. If Strawson is correct and there is a rudimentary consciousness in all matter, we still know nothing of the nature of the qualia experienced. It does not really get us far to know that “everything is conscious”.
Perhaps it is: but perhaps only humans have that special consciousness that allows for explanatory knowledge creation and morally important suffering. Maybe everything else exists in a state of perpetual mystical bliss, even when it appears to be in pain? As silly as it seems, this cannot be ruled out by any experiment or theory we yet possess.

Science cannot solve this question. All the studies in the world about nerves firing or hormones being released cannot tell us *what it is like* to be that animal. The animal *seems* for all the world to be in pain at times and happy at others. But as much can be said for any computer game character, and yet few are willing to say that a computer game character really is suffering. (Imagine Superman fighting Batman in a computer game. Does either avatar experience anything, let alone pain? Let alone suffering?) The computer game character will reliably give the appearance of pain and suffering (or not) under similar conditions.
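The point about game characters can be made concrete with a toy sketch (every name here is invented purely for illustration; this is not how any real game is written, just the simplest possible case). The “pain response” below is nothing but a branch on a number. The program reliably gives the appearance of pain under similar conditions, yet we can read every line of its code and see there is nothing it is like to be it:

```python
class Avatar:
    """A game character that displays pain-like behaviour.

    Its 'responses' are deterministic functions of internal state;
    nowhere in this code is anything as mysterious as consciousness.
    """

    def __init__(self, name, health=100):
        self.name = name
        self.health = health

    def take_damage(self, amount):
        # The "pain behaviour" is just a branch on two integers.
        self.health -= amount
        if self.health <= 0:
            return f"{self.name} collapses."
        if amount > 20:
            return f"{self.name} screams and recoils."
        return f"{self.name} winces."


superman = Avatar("Superman")
print(superman.take_damage(30))  # a big hit produces dramatic "pain"
print(superman.take_damage(5))   # a small hit produces mild "pain"
```

The avatar’s behaviour is reliably pain-like in similar circumstances, which is exactly why behaviour alone cannot settle the question of experience.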

So the question of whether animals feel pain remains an open philosophical one. And even if they do, that does not answer the more interesting *ethical* question: can animals suffer? I argue that animals do not suffer in the sense that we do. There may be a form of pain that causes discomfort, but “what it is like” is an open question and so we should be cautious.

There is no reason for cruelty (delivering pain when it could easily be avoided, or delivering sadistic bad-pain because it seems fun to you). There is no reason to “shove in the face” of animal rights activists your love of meat. Indeed, that is entirely counter-productive: it may be hard to change their minds, but being mean to people only hardens their hearts against you and your position. And similarly, of course, there is no reason for animal rights activists to be certain, to the point of violence, that animals do suffer. The experience of an animal is going to be wildly different from that of a human: if it were similar, they would behave similarly in most respects, not just some minimal respects (like making noise when they sustain physical damage).

We should proceed on the assumption that animals do experience a sensation that is in some sense like our “bad pain” and try not to inflict it upon them. But know that animals inflict far worse upon each other (and us!) and only we, as a species, care to do things on a grand scale to help *other species*. And we do this not because we are in some symbiotic relationship with animals but because we alone can do something about helping animals, human and non-human, with the power of our capacity to explain and control the world. Until we know better, I think it would serve us all to treat animals just a little better as we continue to use them for food and so on. If animals can only “live in the present moment”, then let us make their present moments, up to their last, as pleasant as possible. Only then can we know we are not inflicting evil upon conscious creatures. A cow in a field, tended well, is as happy as it can be. And should it one day, without any foreknowledge, with no fear or concern, be killed without pain to feed hungry people: perhaps no higher moral purpose could it have served in this world.

Postscript:

I just want to deal briefly with a piece by Peter Singer and show why I haven’t engaged more with this most prominent of animal activists. I was disappointed to read his work. An example can be found here: http://www.animal-rights-library.com/texts-m/singer03.htm

Titled “Do animals feel pain?”, I do not want to engage much with its conclusions. Let us concentrate primarily on his methods: that is to say, the philosophical techniques he uses to establish his position. They need to be valid arguments, or we can ignore his conclusions (which will be, at worst, simply false or, at best, mere assertions). He does write, “We also know that the nervous systems of other animals were not artificially constructed--as a robot might be artificially constructed--to mimic the pain behavior of humans,” which I agree with, as I stated. But when he asks, “If it is justifiable to assume that other human beings feel pain as we do, is there any reason why a similar inference should not be justifiable in the case of other animals?”, he answers “no”. He argues, “It is surely unreasonable to suppose that nervous systems that are virtually identical physiologically, have a common origin and a common evolutionary function, and result in similar forms of behavior in similar circumstances should actually operate in an entirely different manner on the level of subjective feelings.” But as I have argued, this is completely false. You can indeed share almost identical hardware architecture (as, say, chimps and humans do with respect to their brains) while the software (the mind!) is altogether different. And yes, there are hardware differences, of course; perhaps those hardware differences contain the specialised processing and memory capacity required to run the special “universal knowledge creation” software of a person. But the point is: similar hardware says nothing about software. Two identical Apple Mac computers can run totally different software: one might be running a computer game, another a spreadsheet, which look nothing alike. The brain of a chimp might superficially look rather like the brain of a human. But the mind? Totally different. And so the experiences might be totally different.

Indeed, I argue they are totally different. But Singer, like most people concerned about this topic, is completely confused about (because he is ignorant of) the relationship between the physical and the abstract; between hardware and software; the brain-mind connection. The mind really is a causal agent, just as software controls hardware. He does not know about universal knowledge creators and the morally central role this concept plays in our understanding of the potential for a creature to suffer. Of course, at the time of writing this was no fault of his (that article predates “The Beginning of Infinity” by over 20 years), but most people still settle for “animals can feel pain and all pain is bad, so that’s that”. More worrying to me is the following, where Singer writes: “The overwhelming majority of scientists who have addressed themselves to this question agree. Lord Brain, one of the most eminent neurologists of our time, has said: ‘I personally can see no reason for conceding mind to my fellow men and denying it to animals…’”

So Singer resorts to an *appeal to authority*, and the authority he appeals to resorts to an *argument from ignorance*. Singer says, “Look, other scientists agree with me” (the inference being that scientists are clever people who get things right. But always?). And the authority, “Lord Brain”, says, “I don’t see any reason to suggest animals don’t have minds like people do”, which means “I don’t understand the differences”. Now if I read this from a journalist, or even a scientist, I could perhaps forgive these sorts of mistakes. But Singer purports to be a professional *philosopher*: one who constructs arguments and explanations in order to establish conclusions, one who knows the logical fallacies and how to avoid them. But he has not avoided them here. He has deployed them!

He concludes:

“…there are no good reasons, scientific or philosophical, for denying that animals feel pain. If we do not doubt that other humans feel pain we should not doubt that other animals do so too. Animals can feel pain.”

As I have argued: animals may well feel pain. But so does a person exercising, and it feels good, even if painful. That an animal which feels pain does not thereby suffer is a philosophical position that no science experiment can undermine (yet). These are critical distinctions that, if you are engaged in arguing for so-called “animal rights” and talking about something as ethically important as the morality of pain, you need to take seriously. But given the terrible philosophical arguments made by Singer we must, unfortunately, conclude he is not actually philosophically serious about one of his most cherished areas of expertise. He resorts to arguments from authority, arguments from ignorance and a good measure of emotive rhetoric thrown in. Philosophers should be far more cautious, because if they have important points to make, people might just stop listening once they demonstrate they cannot ply their own trade with competence.