In the Western world it has recently been claimed that so-called economic stagnation has prevented the middle classes from sharing, in terms of real wage growth, in the great technological boom. This, it is said, goes some way to explaining the rise of "populist" politicians and economic protectionism. I use a personal anecdote to illustrate a refutation of these ideas and conclude that we are, all of us, wealthier than the economists, politicians and pessimists want us to believe.
"Stagnation" is a term used in economics to denote a period of near zero economic growth. This is to be contrasted with inflation (high growth - or precisely "price increases") and deflation (negative growth or price decreases). It has been argued that much of the Western World, but especially the United States is in an extended period of stagnation. Emblematic of this idea is the work of Economist Tyler Cowen, whose 2011 pamphlet "The Great Stagnation" argues that the causes of growth in America are largely spent and we are now in a period where there has been little "real growth" in wages for some decades and will be for some decades to come. My aim here is not as a critique especially of the work of that economist or even that pamphlet (which is worth reading) but rather the broader idea that things are not much better now than they were a decade or more ago as measured against the index of "real wage growth" or the thesis that is contained in the article already linked to above by Amanda Novello where, writing about the economic "recovery" that is discussed post the 2008 financial crisis:
"Digging deeper exposes that middle and low-income workers and their families in the United States have not reaped their share of the benefits of the apparent recovery, benefits that such a recovery should produce for all, and not only the few. Data shows that, in fact, it’s only wealthier households and larger corporations that have gained noticeably since the recession ended a decade ago. This is because long-developing trends of inequality have proven impervious to the decade’s economic growth."
Is it true that "only wealthier households and larger corporations" have experienced benefits over the last decade? Economists will say the numbers speak for themselves. Look at "real wage growth", for example. Real wage growth is a measure of how much wages have grown compared to the rise in the cost of living (broadly, the average cost of other things in life). It is supposed to serve as a quantitative proxy for one's standard of living. So if there has been no "real wage growth", does it follow that one's standard of living has not improved? This is all a very abstract way of talking about people's actual lived experience: their work, their lives, their day-to-day activities (including, in small part, their spending habits). In particular we must consider actual individuals - not groups of people. People seem to think that if those on minimum wage have, as a group, not seen "real wage growth", this is a cause for concern. But which person on minimum wage 10 or 20 years ago is still there, in the same role? Don't people change jobs - in part because they no longer wish to be on the wage they were? Don't people take on other responsibilities (like study) in order to improve their lot? Economists are quick to define into existence something like a "real wage growth" metric and claim it indicates some deep truth about the lives of individual people. Rather than bring to bear various other metrics that might stand in contrast to this, I want instead simply to consider a narrow aspect of my own life and ask the question: am I no better off? Obviously a single data point cannot refute a trend, but I am doing this for the pessimists who complain their lives are no better. Those who, at 30, 40 or 50, complain things are not much better for them now than when they were 20. I hope that any reader who persists with this piece simply compares my life with theirs.
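Since "real wage growth" does so much of the argumentative heavy lifting here, it is worth seeing how simple the metric itself is: nominal wages deflated by a consumer price index. Here is a minimal sketch - the function name and all the figures are my own invented illustrations, not actual statistics:

```python
# Illustrative sketch only: all figures below are invented, not real statistics.

def real_wage_growth(wage_then, wage_now, cpi_then, cpi_now):
    """Percentage growth in wages after deflating by the change in price level."""
    real_then = wage_then / cpi_then  # wage expressed in constant (base-period) dollars
    real_now = wage_now / cpi_now
    return (real_now / real_then - 1) * 100

# A wage that doubles while prices also double shows zero real growth,
# even though the worker's pay packet is nominally twice as large.
print(round(real_wage_growth(13.0, 26.0, 100.0, 200.0), 1))  # 0.0

# A 50% nominal raise against 20% inflation is a 25% real raise.
print(round(real_wage_growth(10.0, 15.0, 100.0, 120.0), 1))  # 25.0
```

The first case is the pessimists' picture: "zero real growth". The point of what follows is that this number says nothing about what that same dollar can now buy that it could not before.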
I note that my story parallels that of all my family and friends - many of whom, I would suggest, are far worse off and less mobile than I will demonstrate I have been.
(What follows is a true story, and you may be able to predict where it’s going. So, if you want, skip straight to the final two paragraphs.)
My father was (and remains) what has become known as an “audiophile”. These days the suffix “-phile” is added to just about anything one likes to indicate a passion for it: numberphile (a lover of numbers), retrophile (one who loves cultures of the past), bibliophile (you get the picture). Anyway, before the term existed, my father was an audiophile of a kind that today is rarer than one might think. Or at least than I might think. He used to obsess - during the early days of CD audio - about whether a CD was recorded in DDD or some lower-quality combination like AAD. The "A" stood for "Analogue" and the "D" for "Digital", and the three letters in a row told you something about each stage of the recording process. DDD meant digital at every stage - so of the highest quality. I knew of no one else who cared about this. But today - I get it. So often I walk along a street to hear a person blaring music for themselves from an iPhone or some other smartphone. I mean - public music played from an iPhone speaker! Now don’t get me wrong - the latest iPhones have reasonable speakers given their size. But outdoors on noisy streets? Putting aside what I consider the discourtesy to fellow pedestrians of having their senses assailed by music they may not like following them to the train station, there are very, very cheap alternatives that solve all the problems at once: faster battery drain, annoyance to fellow travellers and, chief among them to my mind, the quality of the sound. Any half-decent (and cheaper by the week) set of earbuds or headphones completely outclasses inbuilt phone speakers. If one can afford a smartphone, one can afford a reasonably cheap, reasonably high-quality pair of earbuds. Whatever the case, I have inherited (ok, learned) this preference from my father.
People who listen to the sound from the television’s inbuilt speakers rather than always ensuring it runs through a separate amplifier and high-quality speaker system - a mystery to me. People content to keep using the white wired headphones included with their iPhone - I just do not understand. I also do not understand Apple's AirPods, period. Given the price - why is their sound quality so low? Why aren't they noise cancelling, or at least noise isolating? Earbuds half the price do a far better job. But I digress.
When I was a child - under 10 - I really wanted some good, private way to tune into the radio or - even better - play cassettes. I wanted to emulate my dad, of course, and be something like a connoisseur of sound. The first bit of tech I got in this regard was a little mono radio - and I was very proud of it. But within a year - I guess for a Christmas present - I was bought a portable stereo cassette player with radio. And that, to me, was simply amazing. Stereo I could carry around…and play cassettes on. I’m not sure I ever carried it far. It ran on something like 6 D-size batteries. It looked something very much like this.
Next I found, I guess in a catalogue, a pair of over-the-head headphones that had an aerial and could be used to tune into the radio. Well now that was really it! I could walk around listening to the latest hits and not annoy anyone else. These didn’t predate the Sony Walkman - which had been out for almost a decade already - but the Walkman was well over $100 - and in our family - back in the 80s - $100 may as well have been $1000.
But the problem was, it only played whatever the radio stations were playing. I wanted to be able to play my own cassettes. Back in those days, the technique was to wait by the radio until your favourite song came on, and hit record. This way you could make your own "mix tape". I wished I could play my various "mix tapes" on some portable audio device. Alongside my love of portable audio, I had begun to develop a love of hiking. I lived in a part of Sydney surrounded by bushland (forest, in other words) - and in other parts quiet suburban streets. I could imagine few greater pleasures than walking, jogging or running while listening to music. The problem was, of course, the batteries never lasted long with these things. A few hours at most. And, back in the day, you really did stand out as odd wearing such a contraption as pictured above on your head. They simply were not that popular - especially among people my age. Nevertheless I do recall dreaming of the possibility that I might be able to actually record my own favourite music rather than have to listen only to what the radio was playing at any particular time. This was something a walkman - with a built-in cassette player - would allow me to do. But, again, they were for rich people…not children from the suburbs - until, I guess, sometime towards the end of the 80s. By then, there were cheaper (Chinese, I guess) knockoffs. And so finally I was able to get a portable cassette radio. Now I was really cooking, because I could record my own music from the radio on my stereo system (no doubt in violation of copyright law at the time) onto a cassette and then carry it with me. This was the height of technology and personal agency. I think it was in 1993 that I was able to ask for my first actual Sony-branded “digital” Walkman. I say digital, because it had an LCD readout. It looked exactly like this:
The absolutely remarkable thing about this walkman was that it could store your favourite radio stations in memory. So by hitting the 1, 2, 3, 4 or 5 position one could quickly switch from station to station at will rather than having, as before, to find the station manually with an analogue tuning dial. I was able to record from CD onto cassette all my favourite music - and some comedy radio shows I enjoyed. The first CD player had arrived in our home in 1988 and so I was building a library of cassettes to carry about with me. The only problem with this procedure was that I would often hear a song on the radio and have no way to record it on the fly. I would either have to wait until it came on the radio when I got home - or (increasingly) buy the CD and then transfer it to cassette. I dreamed of the capacity for a walkman to record onto cassette whatever was playing.
When I left school I went to university - full time. By which I mean, 5 days a week, for 7 or 8 hours a day. Lectures commenced at 9am and finished at 4pm, except on Wednesdays when it was 5pm. Uni was a considerable 90-minute journey away by public transport (which I took) and after university on some days of the week (especially Thursday) I worked as a security guard at the largest shopping mall in Sydney - and also on weekends. This left very little “free” time except travel time (which was around 3 or more hours a day), but a part-time job did make me wealthier than most of my friends - at least in those early years - because while they went to university as well, they did not tend to work jobs as I did to pay their own way and save a little. Or where they did work part-time jobs, they chose to work in fast food or at a grocery store and so on. A security guard required a little training, and there were hazards, so there was a monetary reward for making that choice over some other kind of minimum wage position. Nevertheless it was never paid well: for example, in the late 90s, a weekday shift would pay around 13 Australian dollars per hour, while McDonalds paid something closer to $10 per hour. In both cases the evening and weekend rates were higher (on Sunday, I got "double time"!).
So it was, then, in the late 90s that I was able to upgrade my older “play only” Sony Walkman for one that could indeed record. Not only did it have a record function, it had so many other features (like a digital equaliser and “bass boost”). The great advantage now was that my journeys to and from work and to and from university could be accompanied by my favourite radio shows even if they were on whilst I was at work or in lectures…because I was recording them for my travel time. I absolutely loved train/bus journeys with music of my choice, or a radio show of my choice, while reading/studying my university notes...or, rather more often, some popular science book I had bought. I seemed to have reached the absolute zenith of what I wanted from portable audio - although I did imagine the possibility of having a recordable CD. Whatever the case, this walkman also accompanied me on long patrols of the shopping centre late at night (always at low volume, sometimes with one earpiece out so I could still hear if there was ever any broken glass. Only once did this happen - and the alarm system was loud enough that no set of earbuds at whatever volume would ever drown it out).
But, as the 90s rolled into the 2000s and I was merely a security guard on minimum wage in an unskilled job - I was nevertheless able to afford almost as much technology and creature comforts as my imagination allowed me. I was able to build my own PC, buying the best motherboard, CPU, RAM, hard drive and so on I could find…and I could afford among the best portable audio. Somewhere in the 80s the Sony “Discman” came out - a portable CD player…but it was never popular because it wasn’t really very portable. The slightest bump and the machine skipped, making listening while walking (for example) an intolerable experience. In around 1999/2000 I did buy one of the first CD walkmans that came standard with a RAM buffer, which meant that if it did get bumped it had about 30 seconds’ worth of audio stored in solid state memory to play from rather than skip. But actually my top-of-the-line Sony Walkman then had sound quality that easily matched the discman - because the earbud headphones had really increased in quality. One of my friends, who was also in a low-wage job, bought this for me for my birthday:
That walkman was set apart by the quality of its earbuds as well as its excellent record feature and pseudo-digital fast forward and rewind (it could tell where songs stopped and started, making your favourites on the cassette easier to find). All of this made my life absolutely wonderful because…as I said, I loved walking. And the more I loved walking and hiking, the more I wanted to listen to music and other audio (like my favourite radio shows I had recorded). But the problem was, one had to carry additional cassettes, and each cassette was usually only 90 minutes or so. For long hikes that just really did not do. And during this time I went to Africa (Zimbabwe) on safari, which included lengthy hikes...and lengthy travel times, and also to South America for some months, even hiking the Inca Trail in Peru…a three-night-long trail at high altitude through the Andes. I think I carried 6 cassettes with me for that journey. There are only so many times you can hear your favourite hits over and again. If only cassettes were smaller…or could store more songs?
Now in truth the Sony Minidisc player had been out since 1992. But it was well outside my budget. So it was not until 2002 that I bought one - and what an astonishing device it was. I still have it. It looks like this:
This was merely incremental progress in some ways, but seemingly revolutionary for my life. Cassettes had improved in quality markedly over the decade, but now there was the option of a small optical disc - much smaller than a cassette - that could store many hours of audio. Indeed one could choose the recording quality: the highest setting meant your minidisc could store about 70 minutes of audio, while the lowest quality meant 4 times that amount. There were science, philosophy and other radio shows I could record straight from the radio in low quality and keep, while I could transfer my CD audio music collection to minidisc - all stored on generic branded discs which were very cheap, and getting cheaper all the time as competition entered the market. And of course, at this time, this was one of the first devices one could actually hook up to a computer and download songs and other audio to directly via USB. Now that, I guess, deserves the term "revolutionary".
Throughout this time I changed jobs - going from being a security guard to a “science communicator” with the university (which actually paid quite a bit less - but this was an exchange I was happy to make, as the “confrontation” - physical and otherwise - that is the life of a security officer had become something I felt I had outgrown).
In the early 2000s, parts of the education system in Australia permitted graduates with just a Bachelor’s Degree to work as casual teachers in schools - so I took this on while I completed a Bachelor of Teaching (which would entitle me to work in schools on a permanent basis, for substantially more money). This brought with it a real increase in my financial position - money like I had never had before and didn't even know what to do with (I should probably have invested - but no, I was having too much fun travelling). But, once more, during this period I was working, then studying, working, then studying. Nevertheless I was able to save more. I paid my way through and completed two more degrees and then used these as tickets to travel for even longer periods. I had been to Africa, and South America, and to numerous places within Australia (Tasmania was and remains my favourite. I agree with Edmund Hillary, who described it as "the greatest hiking country on Earth".) After saving up from all those jobs, I moved to London and my minidisc player went with me - as I could download music from NAPSTER (which ages you, if you remember it) and radio shows from Australia to salve the homesickness as early versions of podcasts began to become popular. But, of course, the minidisc player still had the problem that the finiteness of the discs meant carrying quite a number of them if one did not want to get repetitive with one's audio. Solid state MP3 devices early on never had much memory - but were better for jogging (minidiscs were still liable to skip). So the move to solid state seemed inevitable.
As I continued to work, I migrated fully to Apple devices. At first it was the iPod nano - which was amazing - great for the gym and for jogging because it could store thousands of songs and podcasts in a device one barely noticed one was even carrying. But the first iPhone was, for me, also truly revolutionary, because here was a device with effectively unlimited storage: the cloud meant that radio shows were there for download so long as you could find a WiFi or 3G signal. Especially for exercising and jogging this was a true game changer. Suddenly everything on the internet was accessible from my pocket for the first time - and streaming became a thing. The finiteness of the memory was barely a factor anymore.
And now we come to today - and on my wrist is an Apple Watch, as small as an iPod nano with so many of the features of an iPhone - and in my ears are wireless earbuds. This is the stuff of dreams for my 10-year-old self. Or my 20-year-old self. Or my 30-year-old self.
If I had remained a security guard at that shopping mall all these years - I guess I would not be paid much more now in "real wages" than I was then. Indeed I know, because I can look up what that job pays here in Australia. And given the rise in living costs - indeed, it’s not as if “security guard” is a more attractive job now than it was then. Why should it be? Jobs like those, in the main, are not meant to be kept for life - unless one really wants to get into the security industry, say, and own one's own security company. That certainly could be a reasonable ambition. But had I stayed there, in that shopping mall, wearing that uniform, I expect I would have been promoted to supervisor, then manager and so forth, up into the corporate section of the centre (interestingly, the rank system in a large shopping centre like that was quite a complex affair!). No one stays in an entry level job like that forever - that’s common sense - unless they really try hard not to get promoted, or never find some other job more attractive. People do get promoted; they do gain experience and so are moved into “higher” positions of greater responsibility, or sideways into a position where the ladder is easier to climb. Or they do a course for some hours a week and retrain to take on a different - by their lights better - role that pays more (or is more interesting, more fun, less hazardous and so on).
But say, for argument's sake, I never did any of that and remained a security guard in precisely the same position. Is the lack of increase in the income of “security guards” relative to the cost of living some sign of “stagnation”, as it is often suggested to be? People say things like “real wages have not increased” as if people are stuck in the same job, forced to make the same choice day after day. Whatever the case - say I did make the choice to stay in that job and had not made the choice to spend the rest of my time studying whenever I had the chance (when I wasn’t listening to recorded radio shows) - would it be fair to say I would have been “no better off” now compared to then? That because my "real wage growth" had been near zero I was someone being "left behind"?
No. No way! Not by a long shot. Because today, even on that same wage, I could have afforded an Apple Watch and wireless earbuds - which is exactly what I have now and, so far as I am concerned, the pinnacle of portable audio technology. The Apple Watch I have comes to me on a “plan” via my mobile provider. It costs me $20 a month to pay off. I could afford that, even as a security guard. Easily! And yes, my data on top of that costs a little more (which I can share across multiple devices) - but the point is, the very best technology and access to the world’s information and music library - almost unimaginable technology to me 20 years ago - is available even to some of the least wealthy people in modern western societies - and soon to everyone else too!
Wealth is not just about how much money you have, or cash you can pull out of your bank account. It includes that - but it also includes all the many things that money can buy and which you already have. My Apple Watch - if I could travel back 20 years - would, I imagine, have been regarded as one of the most astonishing devices in existence, making me one of the most wealthy people on the planet by this measure: the technology on my wrist would have been bought by Bill Gates or some other billionaire for many billions of dollars, if I could have convinced them what it truly was. If you have seen the movies: it would have been like the chip from the first Terminator which, if you recall, was not destroyed when Arnold’s evil character was killed. That last remaining chip was used by a technology company to “go in directions they never could have imagined”. It was basically alien technology. So too my Apple Watch, placed in 2000, or let’s say 1990.
The Apple Watch really does confer wealth on you far beyond what its price would suggest. If you own one, you are wealthier than anyone living in 1990. In 1990 there was no way to have any book in the world fed wirelessly into your ears - read to you by a machine - and thus to learn, so easily, the knowledge that could potentially improve your lot. There was no way to call overseas…all from your wrist. People are rather pessimistic about the idea that there has been such astonishing progress and an increase in wealth over time. They point to statistics like: wages have not increased while the cost of homes has. Some use this to explain the appeal of particular political movements. The same house today in some town costs 10 times what it did some years ago, while the wage for the same job has only increased by a factor of 2. Doesn't this mean society is "going backwards" in some way? Now there may be some legitimate concerns here: there may be government regulations making the cost of housing greater in some places and more or less appealing in others. But none of this is really about how "wealthy" one is. Or if it is, it is merely one metric: how big is the house that a particular income earner can purchase now?
The security guard that I was from 1996 to 2000 was no doubt right to think he was near the bottom of the Australian wealth pecking order. But today - were I in the same job, being paid the minimum wage of today - I would nevertheless be far, far more wealthy. Not because my income relative to other jobs would have been greater - it isn’t. And shouldn’t be expected to be. But rather because the “purchasing power” of that same amount of money is unimaginably greater than it was in 2000. Namely, it can purchase technology absolutely unthought of at that time, which makes any security guard in Australia today on minimum wage the equal of the most wealthy on the planet by the metric that they can buy the best of certain things. I don't know what Elon Musk wears on his wrist in terms of smart tech - but I know it's not much better than what I wear, if at all. And the quality of his earbuds and the audio he experiences each day, I can bet, is not much better than mine. In many ways I am just as wealthy as Musk on a number of metrics, even though I have but a fraction of his income. Yes: he can build rockets. But I don't want to build rockets. I quite like doing with my time...precisely what I do with my time, much of the time.
If I had been told in 2000 all the features of an Apple Watch and then asked to guess what it cost, I do not know exactly what I might have said. But given that the cost of the best Walkmans at the time was well over $1000, and the best earbuds (wired, of course) some hundreds, I guess I would have thought $5000 would have been a steal. And back then I could not have afforded the best quality walkman with all the best features. But now - the Apple Watch I have does precisely what the best smart wearable tech does for the wealthiest. Everyone now is far, far more wealthy according to that standard: they can afford personal technology that is not greatly outclassed by that of people who have much more income. Wages have all gone up in the sense that we can all buy more than we ever could, because there is more stuff to be purchased - more innovation and creation and technology to make our lives easier, more interesting and more mobile. And by more mobile I mean both more portable and more able to move into other jobs or other interests. Because we can put on our wrists (you don’t even need an Apple Watch - there's lots of "wearable tech" far less expensive with almost all the same features) devices that can feed into our ears lessons that, in decades gone by, would have required us to enrol in university courses at great expense. Now it’s so much easier. So much more fun, and all so liberating. So is there stagnation? Stagflation? Recession? Cause for pessimism? Whatever the technical definitions from economics behind these terms, they should not cause one to think they have any direct bearing on one's own individual life (unless one loses one's job, let’s say). Those terms are never about individuals - but groups. Individuals are mobile and move between jobs and thus income bands and, meanwhile, as they do, the innovation continues despite what the naysayers say.
Because whatever the gross metrics happen to be, they tend never to account for all the other ways life has improved, individual wealth has increased and our personal purchasing power has grown so much greater. Those who claim you’re worse off, or that things have not improved, are trying to sell you something - something political, much of the time. The truth is rather different: wealth continues to increase - you can do far more for far less cost. David Deutsch says in "The Beginning of Infinity" that wealth is “the repertoire of physical transformations that one is capable of causing.” Now just consider all the ways in which your own life has been transformed by technology and ideas, regardless of whether your income has increased, and all the ways in which you can now, if you choose, make choices to transform your own life - for example, through education at near zero cost by downloading anything you like - the knowledge - so you can make things better for yourself. By any measure, almost all of us are far more wealthy now than we have ever been before.
I found this an interesting tweet because a remarkable number of claims to object to have been packed into just three sentences within Twitter's character limit. I want to enumerate three of them, beginning with the strictly technical.
1. The second sentence says that there are an infinite number of true statements. This is wrong for a technical reason in logic: a statement has no truth value. Only propositions do. Granted, Twitter is not a reading and debating room for philosophical or mathematical logic. Nevertheless, if a tweet is ostensibly philosophical and about logic, we should aim for precision. So there are no true statements. There are only true propositions. But people cannot utter propositions, which is why logicians use variables like “p” or “q” to stand in their place. For more on this distinction, see: https://www.buffalo.edu/content/cas/philosophy/events/bl-colloquia/_jcr_content/par/download/file.res/SPJSF.PREPRINT.pdf As that paper admits, there are edge cases. And why? Because, again, we can only use statements, even in our definition and discussion of propositions. David Deutsch once had something to say on this point.
There are also many more ways of being wrong than right. A far better emphasis is this: there are infinitely many ways of being wrong on any issue where one could potentially be right. The truth - any amount of it, anywhere - is very, very hard won. Error is the normal state of things. Yet Sean’s sentiment is that the truth can almost be tripped over, and that because it is so easy to find (because “there are infinitely many true statements”) the problem is in choosing among all this truth. But that is not our circumstance at all. The infinite sea of misconceptions and falsehoods really does run deep - far deeper. Don’t be afraid of truth: celebrate getting nearer to it. Misconception, too, is nothing to fear - but accept that it is ubiquitous, and that choosing among misconceptions, and choosing which misconceptions to concentrate upon, is more like the challenge before us.
2. On the first sentence in Sean's tweet above: a less technical error and more a matter of the psychology or motivation of intellectual pursuits (or of any pursuit, for that matter: intellectual activity is on a continuum with every other activity. Theoretical hydrological engineering is no doubt an intellectual activity...but it is not at all disconnected from clearing blockages in a toilet). Are intellectuals really trying to say true things, as the first sentence claims? I find it misleading to claim they are, because in science (or anywhere else) people are trying to solve problems. Whether they would describe their own intellectual work as primarily an attempt to say “true things” is something we should doubt. In truth we should expect each solution or theory or explanation given by an intellectual to contain some error. There is no error-free state and, though it might not be a consciously understood theory for many of them, intellectuals seem to accept their fallibility: that, broadly speaking, they could be wrong. And so they are often not trying to have the final, error-free word - which is what trying to say true things would amount to. So I doubt the first sentence. I think intellectuals are better described as trying to solve their most pressing problems and trying to say things that approximate the truth in some ways.
3. Of the last sentence, I also do not have a technical objection. But this one is perhaps the most important:
It says: “I’d like to better understand how we do - and should - decide which ones are the ones worth saying.” So the concern here is about which true things we should say (putting aside for the moment that saying strictly true things is, as we have seen, technically impossible). What this injunction amounts to is a policing of the mind. The mind is already an error-correction machine: within a single mind, much can be (and is!) criticised successfully and rejected without ever being uttered. But it is impossible to have all the best criticisms in hand, and other minds and other voices are often very good to bounce ideas off. We cannot prophesy beforehand which conjecture might actually be useful. We are fallible. The idea that we can have a recipe or plan or other set of rules to help us decide what to say and what not to is a form of tyranny. Given that we have already established that there is not an infinite supply of true statements ready to hand - indeed that we cannot utter strictly “true” claims, only approximations - then, having made our best attempt at getting at the truth or solving the problem, we should feel free to share our ideas. But we are being told from so many quarters right now to be careful what we say; that this or that term or word is damaging. This chilling effect is stultifying debate and making younger people fearful. It is becoming the moral zeitgeist: an agitation to be the first to proclaim what else should not be said.
So the misconception here is that public intellectuals should be standing up for, or compiling further, criteria and standards about what not to say. We have already been given so many reasons to avoid saying what should be said, for fear of reprisals or pile-ons or even actual violence in response to mere words. There is a pervasive culture, especially on social media, in the media and on campuses, that might be called a culture of social compliance. There is a quiet from some quarters, and there has been such a chilling effect on debate, that the idea that right now, in 2020, we need still more criteria for not saying things strikes me as exactly backwards.
Few people have so very many things to say publicly that their major concern is determining how to sift what should be said from what should not. The main problem for most people is either having too little to say (period) or too little courage to say what is really on their minds. This latter concern is not an especially new phenomenon, but 2020 marks a local maximum in the historical landscape of pressure to say what one side (and sometimes the other) in politics deems correct speech. What I grant may have been lost in determining "what are the ones worth saying" are simple traditions of courtesy, respect and politeness. There are cultures where traditions of what is worth saying and what is not have generated some rather robust ways of avoiding the truth when it needs to be said. But on the other hand these cultures can also help people avoid the pitfalls of simply saying whatever happens to pop into their heads and committing faux pas. The problem we have now is that older "traditions of criticism" that might be called "polite society" have themselves been rejected in favour of either "just deliberately offend people you disagree with" or "you must obey our language codes for fear of violent reprisal". Norms of polite society had already distilled out much that would have been called "offensive" - but much of that is being overturned in an attempt to artificially engineer culture, rather than allow it to develop through the slow trial-and-error evolution that has typically occurred throughout human history... except for all those times politics gets involved by insisting on this or that code of behaviour.
Whatever the case, what we need right now is not more new criteria for deciding which things are worth saying (for this is rarely actually a problem for anyone) - but rather more straightforward honesty coupled with courage. We need more people to feel free enough to genuinely try to utter explanations - their own best-guessed solutions to problems - without fear of the mob (or, in some places, government censorship and retribution).
We especially do not need the additional, abstract fear that one might be failing to do the impossible: to utter "A Truth", for fear it wasn’t worth saying.
Important inspiration for all this comes from a Tweet by David Deutsch, reproduced below.
The above image links to the original video, or just click here. As always: a wonderfully insightful talk. This is not a summary of the talk but rather my two favourite quotes (passages?). The talk contains important material on what an AGI must be and why we must not treat them differently to any other "race" of people. There is a wonderful part about how the feared existentially deadly "black balls" in urns can be turned into white balls - given the existence of people and when they create the requisite knowledge. The black balls are supposed to be problems we cannot solve in time - as though our inventions are random things we pluck from the unknown and cannot learn more about over time, let alone in time. I leave those matters aside for now.
Emphasis (in bold and/or underlined) is my own.
Quote 1 (From 13 mins 30 secs)
“Once we have a universal constructor, all construction, all repetitive labour, will be replaced by writing computer programs to control the universal constructor. And wealth will consist of our library of programs. The universal constructor can be programmed to self reproduce, so once you have one you soon have 2^n of them and it can be programmed to perform self maintenance too, all from scratch: starting with mining the raw materials – perhaps from the asteroid belt using solar energy or whatever. The program may be hard to write, but once it’s written and if you own the rights to those asteroids – you can sit back and watch your 2^n Teslas roll in, with zero additional effort. And no, we are not going to have a universal constructor apocalypse and be converted to grey goo. A universal constructor is just an appliance – it can’t think – it doesn’t know that its present job is to make 2^n Teslas and it doesn’t want anything. Unless of course you put an AGI program into it. Then it does become indeed potentially dangerous without limit. But that’s for the same reason that you are. Each of you is precisely one of those universal constructors endowed with an AGI program. Or I should say GI? It makes no difference.”
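The 2^n figure Deutsch mentions is just the arithmetic of doubling: if every constructor copies itself once per generation, the population doubles each time. A minimal sketch of that growth (my own illustration, not code from the talk):

```python
def constructors_after(generations: int) -> int:
    """Population of self-reproducing constructors after n generations,
    starting from one, assuming each copies itself once per generation."""
    count = 1
    for _ in range(generations):
        count *= 2  # every existing constructor builds one more
    return count

# After 30 generations a single constructor has become over a billion.
for n in (0, 1, 10, 30):
    print(n, constructors_after(n))
```

The point of the sketch is only that the programming effort is a one-off fixed cost, while the output grows exponentially with no further effort.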
Quote 2 at 17 mins exactly (this is after describing two kinds of dangers most people are concerned about - known dangers like deadly viruses without cures, for example).
“The third category of dangers are the ones to which most efforts should be devoted and yet they are the ones that are currently least feared because they are ones that are not yet known. Like in 1900 no one knew that smoking was dangerous. By the time the knowledge…that it was dangerous had been created decades later, cigarettes had killed hundreds of millions of people. Again – if that had been an existential danger, whom could we sue? So: how can we create the knowledge to protect ourselves from existential or near existential dangers that we do not know? How to address the risk that by the time we do know, we won’t have time enough to create the requisite knowledge? The answer is: by creating general purpose knowledge – deep and fundamental knowledge – as fast as possible – the more we know of the world, the faster we can create new knowledge about novel aspects of it that turn up and become urgent. This is important – I don’t think it’s widely appreciated – the survival of our species depends absolutely on progress in fundamental research in science and on the speed at which we make progress there. And here the key thing in the medium term is understanding the theory of universal constructors, so that we shall know in principle – in theory – how to program them to produce, say, a billion space ships in a hurry customised to deflect an approaching shard of neutronium* or 10 billion doses of a new vaccine in a hurry against a sudden and deadly disease. So that’s how we deal with the third category of the unknown. By rapid progress of every kind: especially the fundamental. The fourth category is at once even more dangerous and yet, in a sense less worrisome because we already have the knowledge -at least the theoretical knowledge to deal with it. This fourth category is not the unknown – the unknowable. 
It’s a bit paradoxical that the unknowable is less dangerous than the merely unknown but that’s because the only thing that is unknowable is the content of explanatory knowledge that hasn’t been created yet and so the only truly dangerous things in that sense in the universe are entities that create explanatory knowledge - us - people – AGIs too are people. Now, the knowledge of how to prevent people from being dangerous is very counter intuitive – it took our species many millennia to create it but now we do have that knowledge. The only way to prevent people from being dangerous is to make them free. Specifically it is the knowledge of liberal values, individual rights, the open society, the Enlightenment and so on. In such societies, the overwhelming majority of people regardless of their hardware characteristics – are decent. Perhaps there will always be individuals who aren’t: enemies of civilisation, people who take it into their head to program a universal constructor to convert everything in sight into paperclips and they may devote their creativity to doing that – but the great majority will devote – that is the great majority of the population of such a society will devote some of their creativity to thwarting that and they will win provided they keep creating knowledge fast in order to stay ahead of the bad guys."
*Note: neutronium is a kind of matter out of which neutron stars are made. It is composed entirely of neutrons and is thus exceedingly dense and would, for example, destroy a planet made of normal matter by causing severe gravitational effects on the planet (by, say, converting all of the planet itself into neutronium if it came in contact with the planet).
Years ago I produced a video response to a conversation between David Deutsch and Sam Harris on the topic of morality. It can be found here: https://www.youtube.com/watch?v=5kPSI6djlwE
Just to give him "right of reply" right up front, though Sam said he watched part of it and it was a "nice job", he disagreed that he was as much of a foundationalist as I was claiming. See here: https://twitter.com/SamHarrisOrg/status/1057867691573166080?s=20
Whatever the case, this video remains my most watched and liked one (by quite a stretch). At the time I argued that what I interpreted David Deutsch’s position to be (fundamentally, but excuse the pun) on morality was that there can be no moral foundations. This is simply consistent with David’s view of all knowledge: it is not built up from “firm” foundations to ever more lofty precepts. Knowledge, following Popper, is always conjectural so even the most cherished foundations can be questioned and are liable to error because we discovered all of morality and we are fallible. Morality is about moral explanations just as science is about scientific explanations. In neither case do we need to concern ourselves with some set of foundations that act like axioms from which we derive (i.e: “justify”) the rest of our knowledge. Instead the correct approach is that we encounter a problem (a conflict between our ideas) and then go about seeking solutions. We do this in a critical manner: a solution is proposed and we try to find fault with it. If we cannot and there are no other rivals, then that becomes our best explanation and our tentative purported solution to our problem situation. Where there is no such single solution we have to continue the process of conjecturing better ideas and setting them against our criticisms. This process is all objective: it is about what solves the problem and not (in the overwhelming majority of cases) what one thinks or feels about things. This is why morality is not built on feelings (like empathy or happiness or even “well being”) and nor is epistemology - knowledge - built on feelings of trust or certainty, hope or faith. In all cases it is about what “out there” in the world solves the problem. And that is a matter of fact in the objective world: either the problem is solved or it is not. I must also mention that there is an important link that David Deutsch has discovered that unites morality to epistemology. 
That link is that the only moral imperative is that we ought not to destroy the means of correcting errors. Error correction is what knowledge creation and epistemology is all about and destroying the means by which we correct errors and thus create knowledge is a choice we cannot make as from this only evils follow. See his book "The Beginning of Infinity" for more on that.
Keeping all this in mind, our present circumstances have brought into bright relief some of the feelings that people have been relying upon to arrive at this or that opinion on what to do to help others, or not help others as the case may be. I have been inspired this time, therefore, in large part by Yaron Brook, who speaks eloquently about morality (and many other things). If you are not aware of Yaron, you should begin at his YouTube channel and listen to what he has to say. He has a lot to say: and that is good, because he uses a few poorly understood explanations of the world to guide his thinking on some of the most pressing problems of our time. Because the explanations he relies upon are themselves so poorly understood (even though they are true), the conclusions he reaches are often counter-intuitive to those who have not heard them before. For some of us, though we understand where Yaron begins and can imagine where he’ll get to, the force of his conviction and the eloquence of his speech are inspirational. His channel is here: https://www.youtube.com/user/ybrook and he can be found on Twitter and Facebook.
What follows is what I am calling The Christian worldview of mainstream atheism and a version of it (without the above remarks) can be found in audio form here: https://soundcloud.com/brett-hall-653181617/christian-atheists
And, while I have you, my own podcast largely devoted to the work of David Deutsch can be found here: https://brettroberthall.podbean.com/
The Christian Worldview of Mainstream Atheism
What is an atheist? By definition nothing more than someone who rejects theism. It’s not a high bar. Strangely, perhaps, many atheists will say they “reject all religion”. See Ricky Gervais for that. But do they?
Let us first admit that many atheists will concede they will not throw the baby out with the bathwater. They want to retain rather much of what in religion is good. Sam Harris says there is a “kernel of truth” with respect to the spiritual insights Christians and other religious devotees are seeking. Douglas Murray will praise the tradition and even the ceremony to some extent. And I myself will argue that there is inexplicit knowledge in these cultural practices that we would do well to respect if not always adhere to. Religion can be a stabilizing force.
And yet rather many of us notice that there is very little bathwater and rather a lot of baby. In the case of prominent atheists - let us take Ricky Gervais - it is mere drips of bathwater on a bath sized baby. In particular Ricky will reject God (that’s the easy part) and angels and saints and all the other Gods too. But really - and I do mean really - this is the icing on the cake. Yes some religious people are deeply moved to take seriously the notion that God is answering prayers and has a plan for them. But rather all of them take the morality seriously.
What is Christian morality? It is altruism - self-sacrifice. Do good for others at cost to yourself. The more you sacrifice, the better a person you are. That is the example of Jesus. He gave up everything for everyone else (so it is said). “What greater gift can any man give than to lay down his life for another?”
But some may object: that is not CHRISTIAN morality - that is simply, morality. That is good virtuous stuff. A moral universal that no Christian can lay claim to owning. That is human morality.
And that, in a nutshell, is how deep the rot has set in. People will defend Christian morality as universal while claiming they are not Christian. They enact the ritualistic defense. “This is a universal moral code. Altruism is a virtue. Wealth is an evil we should be skeptical of. It is harder for a rich man to enter the kingdom of heaven than for a camel to pass through the eye of a needle.”
Yes, religions like Christianity were an improvement over warring tribes. If all the warring tribes come together under one common religious banner then at least some internecine violence can be quelled. What an advantage. But this is the first step, surely. This is just a first approximation. We had to get rid of the tribe. We had to get rid of violence between small groups. But the first solution was just “make a bigger group”.
Another solution loomed: tend not towards the group but the individual. Tribal hostility can end when there are no tribes at all. When we act instead as individuals motivated not by group identity but by our common individual drives and hopes and dreams.
Altruism - self sacrifice - is an evil. It says: do for the other and if you hurt, then you are even more virtuous. And that is the morality of socialism. Work for the other - for the collective. Pay heavy taxes to help the least of us. And wealth? Be skeptical of that. Give away your wealth. Be like Bill Gates: gain great wealth, earn scorn but then give it away to charities and earn acclaim. That is the moral thing. And no one needs to consider it is actually the Christian thing to do.
Gates helped billions with Microsoft and made the world an amazingly better place. It was extraordinary work, unprecedented in human history. Is his malaria work good? Of course - but it pales in comparison to what he did without primary regard for others. He feels some sense of guilt about his wealth - so he gives it away.
But what can morality be if not about the other? It is about yourself. It is about pursuing your own values. It is about not coercing others and promoting freedom for others. And it is about benevolence and kindness. Because together we can achieve more than we can achieve alone. We should aim to make the world a better place - and that takes working with others but it never entails sacrifice. It entails doing deals - finding common humanity with people of like mind and pursuing your interests when they align with those of others. Anti-altruism does not mean anti-kindness or anti-benevolence or anti-compassion or anti-cooperation. What it means is be kind and kindness will come to you. Be benevolent and benevolence will be shown to you. Cooperate and people will cooperate with you. What a beautiful win win situation that is.
But the Christian idea - the entrenched Christian and socialist idea - is zero sum. You must lose a little so others may win. You must sacrifice so others may rise. And the more you sacrifice, the better a person you are. This evil ideal has become a virtue in mainstream society only because of religious precepts. It had to be invented, and religion invented it. But there is a new morality - a modern morality. A morality of individual kindness without coercion. A compassion without sacrifice. A concern for all of humanity without losing sight of the centrality of you yourself in your own life. We must be rational. We can look upon our fellows now and feel pity or even sympathy. But we are not mere animals driven by primitive urges. We are different. We are thinkers - users of reason. The pity and sympathy we all feel signals a problem of some kind in the world. We want to help others so they too can solve problems and help us to help them, and the cycle continues and together we all rise. But if, as the Christians and socialists say, it is virtuous only when there is no thought for yourself, there is no cycle. There is a one-way street.
There is a zero sum. It’s win-lose.
It’s time you thought it through. Is it received wisdom on morality that drives you? Are you a Christian thinker even if you reject God and Jesus and all of the prophets and saints? Are you driven by emotion and skeptical of wealth? Then you remain trapped in a mindset, caged by religious orthodoxy. You may call it socialism. You may call it altruism. But that is the trick of religious memes. Forever repackaging themselves with a simple change of label. A new font, a colour change to the logo. But unwrap it and peek inside: and there’s Jesus.
Cast it aside. Look instead to your brothers and sisters as truly capable of reason as you are. As able to contribute as you do. Look on them with kindness and encourage them not to sacrifice themselves but to work to better themselves and to solve their problems so they can generate more wealth and go on to do greater things. To pursue their passions and be inspired by flying free of bad ideas about “giving up” for the other or for the “common good”. When we all work for ourselves we quickly find common cause, make friendships that are not coerced but built on bonds of mutual advantage. No one loses. Everyone wins. Not at any cost: ideally at no cost to yourself or anyone else.
“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our own necessities, but of their advantages” - Adam Smith
And so it is. We can all rise by reminding each other how much we can do for them. Not by sacrificing ourselves but by meeting our own necessities. Necessities to live, survive, thrive and genuinely feel alive.
The Kingdom of Heaven must resemble something like the worst depravities at the worst times of the most failed states: for, whatever it is like, if it is hard for the rich to enter, we may all do well to avoid it.
On a recent “Making Sense” podcast with Sam Harris, Dr. Nicholas Christakis called for the closing of schools across the United States. Dr. Christakis’s list of qualifications is extensive: he is a sociologist and physician, and the director of the “Human Nature” lab, researching the ways in which (and presumably the extent to which) evolution drives human behavior. He has an M.D. and a PhD (in sociology). Crucially, he has worked in epidemiology. At a time such as the corona crisis of 2020, he must have much to offer, so if he says “close the schools” we should listen: https://www.sciencemag.org/news/2020/03/does-closing-schools-slow-spread-novel-coronavirus
On another recent podcast, “The Joe Rogan Experience”, Dr Michael Osterholm called for schools NOT to be closed. See that podcast, or here: https://www.washingtonpost.com/education/2020/03/09/is-it-really-good-idea-close-schools-fight-coronavirus/ Dr Osterholm likewise, among his considerable qualifications, lists “epidemiologist”. Unlike Dr. Christakis, however, Dr. Osterholm lists little else among his interests. He is a specialist, not a generalist. He is, crucially, Director of the Center for Infectious Disease Research and Policy at the University of Minnesota. His life’s work seems to be devoted to infectious disease epidemics, and he is a consultant to the World Health Organisation on this.
Dr Amesh Adalja has been more prominent in the media than almost any other expert on COVID-19. He has appeared on The Yaron Brook Show, The Rubin Report, Making Sense, Fox News, CNN and many more besides. He consults with Johns Hopkins University and has devoted his life to pandemics and biosecurity. He is, again, a very narrow specialist. He has advised against closing schools, in a somewhat weaker sense than Dr Osterholm: https://www.livescience.com/should-schools-close-for-coronavirus.html (Other experts at that link agree with the stronger stance of Osterholm.)
In Australia, Dr. Norman Swan advises the Australian Broadcasting Corporation and has been prominent in the media. He is a physician and journalist who has appeared on a number of popular television programs including “The Biggest Loser” measuring contestants’ “bio age” and comparing it to their “calendar age”. He is across many areas of medicine. He is a generalist. He has repeatedly called on the government to “close the schools” and does not vacillate about it: https://www.abc.net.au/news/2020-03-15/dr-norman-swan-recommends-proactive-national-lockdown/12057956 or here: https://au.news.yahoo.com/health-expert-blasts-government-for-keeping-schools-open-093418822.html
The “close the schools argument” seems like common sense. After all: kept in a confined space, won’t a single infected student infect all the others? Wouldn’t it be safer to keep them home? “Why don’t they just” questions are aplenty in these times. So "Why don't they just" close the schools?
The argument against closing the schools is harder to make.
It is no sin to be a generalist. I call myself one. And in truth there is no “pure specialist”. But it does take all sorts, and some people are far more specialized than others; this is never more true than in medicine. I have no expertise whatsoever in epidemiology (nor indeed any kind of medicine), but if I can, perhaps, claim some modest expertise in anything at all, it might be expertise in experts. I make this claim having spent much time speaking with geologists and geophysicists, astronomers, cosmologists and astrophysicists, chemists and philosophers, theoretical physicists and experimental physicists, medicos of a couple of kinds, lawyers of other kinds, engineers and so on. And I’ve noticed over time certain kinds of blind spots as well as astounding levels of depth. There are some experts - among them a very good friend of mine, who won’t mind my saying so, a doctor of such exceedingly specialized skill that there are perhaps few in the world who can do what he does - who are very careful to say when they are out of their depth on areas of medicine outside their narrow field. In the area where he is an expert, he will speak with confidence and clear-sighted knowledge. He seems to know the danger of making medical pronouncements about things he covered once long ago in lectures at university, or perhaps as a general surgical registrar. He simply recognizes he may not be “up to date” with the latest on whatever medical trouble his friends decide to trouble him with on any particular day. His day-to-day work is about particular organs and particular troubles, and while no area of medicine is utterly disconnected from another, he knows there are others out there to defer to when the organ, mere centimeters away, isn’t the one he is known for treating.
But he is one kind of expert. There are others - and the medical community is not immune to this - who feel obliged to make public policy pronouncements when the airwaves are already thick with opinions from actual, highly specialized experts. Now, should such non-specialist “experts” be shut down? Of course not. But it makes things very difficult for the populace, and then for actual authorities (the government) to explain clearly the best policy given the best information. It does not take long, in my experience, for astronomers as a group to be very willing to give all kinds of advice (and it’s usually the same) when it comes to economics, public policy, climate policy, university administration and so on. There can be a kind of groupthink among these experts when it comes to areas outside their narrow field of day-to-day work. And, seemingly, the more specialized they become, the more their other opinions tend to conform to the “man-on-the-street” view. That’s not always a bad thing, but perhaps there is a kind of “creativity limit” where they have exhausted their creative thinking for the day whilst at work, and when it comes to other matters they fall back onto the more prominent memes circulating among their colleagues. It can certainly make the work day easier, and socializing with peers less fraught with friction, if one’s other ideas conform. There are sociological forces at play, of course. This phenomenon rarely has any serious effects - but in a time of crisis it may be deadly.
The word “expert” is a sliding scale. I trust none of them. But then I have taken Fox Mulder’s maxim “trust no one” to the core. An expert is someone whom I know (fallibly, as always) has gone through experiences and gained knowledge via a process of error correction through encounters with reality in the narrow domain within which they call themselves an expert. People like Dr Osterholm and Dr Adalja are the real deal. They are actual infectious disease experts who have worked on outbreaks like SARS-1, MERS and Ebola and tested their theories in the field with the WHO and the CDC and so on. They have not merely read the literature, or learned the theory, or kept up to date with the latest texts. They have been actively involved in situations that are the closest thing to what we face now - at the coal face on these kinds of things. The average M.D., “General Practitioner” or even PhD in epidemiology or (especially!) PhD in mathematical modelling of epidemics is not among the primary people for government (and the media, mind you) to consult. Not in the first instance. Secondary people: sure. Helpful in some general sense to critique the primary critiques. But they should seek to clarify, not add to the noise, and at a time like this, not publicly “think on their feet”. And they should be challenged whenever their pronouncements diverge from those of the primary experts. At best the secondary people should act as knowledgeable conduits of the primary people. If they think they have better ideas: wonderful. They should not be backward in coming forward. But they should likewise have first considered what those primary experts have said. The secondary people should have gone through a careful process of error correction alone, or with their close circle of colleagues, before going public with advice to government and others like “close the schools” when the top-tier specialist people are saying otherwise.
Not all people called experts are expert in the specific thing in question. Not all are highly specialized and have devoted their life to, essentially, one thing and had it tested in reality and not merely in some theoretical model. And no expert is an “authority” which is to say: none should ever have the power to shut down debate or direct people against their will. No one should ever do that.
However, in a democracy, we have elected officials – politicians – who we have decided can exercise such a power in a crisis. This power should never be delegated and should be used only in extremis. We are not yet “in extremis”. Times are urgent and the word “crisis” is apt. But this is no time to leave behind a sensible concern about which people we can best assume have the knowledge to guide us through.
It is rather like we are all on an Airbus A-380 and the announcement comes over the PA system from the chief steward “This is an emergency. There is a fire in an engine and (by remarkable coincidence) the 4 pilots have all fallen gravely ill. We must make an emergency landing. Are there any pilots on board?”
5 people step forward to help.
The first is a mathematician who has flown 10,000 hours on an A-380 flight simulator.
The second is a light aircraft pilot with 5000 hours experience in a Cessna.
The third is a Learjet pilot with 8000 hours experience.
The fourth (you can see where I am going with this) is an A-320 captain who is learning to fly the A-380 and has clocked up 1000 hours.
And the fifth is Captain Richard Champion de Crespigny - a 15000+ hours experience A-380 Qantas pilot being shuttled back to his home airport and who successfully landed QF-32 when one of the engines caught fire in 2010.
It is a wonderful thing the passengers are so fortunate that day to have such a wealth of people to advise them. But though experts according to some broad definition of the word, they are not all "equal" by any means. There is clearly a “best person for the job”.
Without the fifth person, I would happily take the advice of number 4. But given the fifth person, and given I know nothing about how to fly a jet of any kind, if there is a dispute between number 5 and any of the others (or even all of the others) I will defer to number 5. Even better if I can understand the debate and 5 sounds reasonable. This is no matter of consensus because the 400 passengers on board may well be serious adherents of the religion of the Spaghetti monster and may vote to turn off the engines and attempt a safe glide into the holy Spaghetti pond.
And we need not “obey” the expert here as an authority. Upon safely landing, should our number 5 A-380 Captain then proclaim, on the tarmac, that we must remain on board indefinitely and wait patiently for the fire brigade as flames begin to lick at the windows, I would disobey and help others open the emergency escape, for I know enough about aircraft and fuel tanks and fires to know that time is of the essence. I need no "expert" for that kind of thing. There are good reasons to defy an expert (see the actions of the Captain in the South Korean tragedy of the "Sewol" ferry for more on that kind of thing.) Experts can be wrong just like the rest of us, but unlike the rest of us they are more able to reliably and routinely convey the best available knowledge at any given time. They are not there as inerrant authorities and they should not be expected never to err. That too is a hazard of the “authority” moniker. It means that if the “authoritative expert” makes an error – as surely they will, like the rest of us – they should not be blamed unless they have done less than their best. For their best really will be THE best at the given time. That’s what it means to be an expert. They too can fail.
So should we close the schools or not? The striking thing about the split among the experts is that the more specialized the expert is in infectious disease outbreaks, the more likely they seem to advise “no”, with caveats and a willingness to change if new evidence is found. But with the generalists the trend seems to be to advise “close them now”, with few caveats. Those generalists risk little because, in general, they aren’t advising those making the calls.
But we should be there to back them to try again, and again, because at times like this, that’s our best bet.
I was driven to this post by running out of toilet paper and being unable to find any at the supermarkets near my house. I walked in the rain to the next suburb over, only to have to push past people filling whole trolleys with packets of 9 double-length rolls as shelf stackers restocked…
Have an opinion on the severity of this virus after listening to an expert. Like this guy. He is a scientist who specializes in viruses like this and advises the US government on outbreaks like this.
I think everyone who wants to venture strong opinions on this virus - including having strong opinions on whether to stockpile things - should either be an expert or watch someone who is, like this guy. But if you cannot do that much, here are some takeaways:
The death rate for corona (COVID-19) is way, way less than 1%.
Over 80% of cases are mild… many people won’t even realize their 2-day-long sniffle was COVID-19.
Its R0 value (how contagious it is = how many people an infected person will typically infect) is relatively low at 2.0 to 2.4. Measles, for example, has an R0 = 15.
Politicians shouldn’t comment on things they have no idea about. Eg: when Bernie Sanders criticized a pharmaceutical company some time ago, they stopped working on a vaccine for a related virus due to all the bad publicity… so we don’t have all the vaccines we might have had.
Wild animals are a key way these viruses spread. All coronaviruses originate in bats but then end up in some other animal and only then in people. Eg: MERS, a more severe coronavirus, was transmitted by camels. MERS had a 35% death rate but was far, far more difficult to spread human-to-human – though EASY to spread camel-to-human.
There is a “severity bias” with COVID-19. This means you only hear about the very small number of severe cases because they end up in hospital being tested. Most people who get it – you included – would probably never know you had it.
In 2009 the H1N1 virus was arguably worse: in the US alone, well over 200,000 hospitalizations and 12,000 deaths. This new one looks to be “mild to moderate” – but it won’t “go away”. Viruses are just a part of life. This coronavirus may end up “endemic”: we may just have to live with it for years.
The Australian government is overreacting. We cannot contain it. We just have to be resilient and help the rare people who get really sick. Politicians tend not to get punished for overreacting on these things. But they should be. China overreacted majorly, according to this expert, because the virus could never be contained; it now sets a bad example for other countries, and when the virus breaks through the containment people will think containment never works. It does… but not for this virus.
Masks are not needed for the general public.
This virus is especially not severe for children.
Wash your hands.
Modern healthcare systems really reduce fatality rates. Eg: Ebola was thought to have a 90% death rate… but simply giving people IV fluids in a hospital reduced that number to 20%.
The main effect on the economy is due to misinformed shutdowns by the government.
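The R0 comparison in the takeaways above is worth pausing on, because the gap between 2.2 and 15 compounds fast. Here is a toy sketch of that compounding (the function name and generation counts are my own illustration, not from the expert; real outbreaks saturate as people recover or gain immunity, so this naive geometric model only describes the earliest phase):

```python
# Illustrative only: naive generation-by-generation spread at a fixed R0.
# Real epidemics saturate (recovery, immunity, interventions), so this
# geometric model is only a caricature of an outbreak's earliest phase.

def cases_after_generations(r0: float, generations: int) -> int:
    """Cumulative infections seeded by one case after n generations,
    assuming every infected person infects exactly R0 others."""
    total = 1       # patient zero
    new_cases = 1.0
    for _ in range(generations):
        new_cases *= r0    # each case seeds R0 new cases
        total += new_cases
    return round(total)

# COVID-19's estimated R0 of ~2.2 vs measles at ~15, after 4 generations:
print(cases_after_generations(2.2, 4))  # dozens of cases
print(cases_after_generations(15, 4))   # tens of thousands
```

The point of the sketch is just that "relatively low" R0 is not a throwaway phrase: after a handful of transmission generations, measles-level contagiousness produces three orders of magnitude more cases than COVID-19-level contagiousness.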
“All evils are caused by insufficient knowledge” is what David Deutsch has called "The Principle of Optimism". A reason this is a principle of optimism is that it means whatever is going wrong - whatever the evil thing is - whatever we no longer want (or do want but don’t have) is just due to some lack of knowledge (in other words: ignorance) on our part. If we try to create knowledge, seek solutions and search for answers, then with effort - sometimes great effort - we can find them.
Logically this is simply equivalent to “problems are soluble”.
There is some confusion – following, I believe, deeply entrenched traditional religious memes – about how “evil” should imply intent. This too is a moral claim, of course, about how words should be used. I guess the idea that evil must entail intent comes from the notion of the “anger of the gods” or other beings: natural disasters were traditionally interpreted, in the pre-scientific era, as evidence of the gods’ anger. When we learned lightning was just static electricity sparking, we regarded it as having “no intent”. The gods' anger (or the anger of demons, or whatever) was not there, so how could lightning be an evil? Even avowed atheists inherit this way of speaking and thinking to some extent: evil can only be evil if there is intent. "...the line dividing good and evil cuts through the heart of every human being" wrote Solzhenitsyn. This can be interpreted in a number of ways. Here are two of them. (1) There are mystical-type forces (Satan versus Yahweh, say), or even just "psychological forces", pulling the soul (or mind) in one direction or the other. (2) People possess genuine knowledge (the good) and far more ignorance (sometimes bad), and the latter can cause real suffering. People can also sometimes know what just isn't so: that's a form of ignorance which can cause suffering of an even worse kind. The first, "force"-type way of thinking can be a useful framing, but it can also mislead one into thinking that all evil actions must be motivated by evil intent. They need not be, even if evil philosophies – dogmas of all kinds – can act very much like psychological forces. But there too a person is as much a victim as an active agent. Their intents are shaped very much by ideas they learned at mother's knee and for which they have only partial responsibility, because they are only dimly consciously aware of them.
Many ideas are unconscious or inexplicit: and if one doesn't know one even has an idea which is directing one's behavior to what extent is one responsible for that idea?
It is often just rank ignorance that causes a person to commit an evil act. People can intend great good – have the best of intentions – and yet be so profoundly ignorant and mistaken that they cause the worst kinds of suffering. It's been well documented that certain kinds of suicide bombers apparently feel nothing but positive emotions: joy, ecstasy and elation. Even if these emotions aren't actually positive in any ultimate sense, the INTENTIONS of these people are good* – they’ll send their “victims” to heaven: eternal paradise where they can eventually be reunited with everyone they love. So do such people have evil intents? Clearly not. They are motivated, on some accounts, by deep love, as perverse as this may seem. (See here for that: https://samharris.org/islam-and-the-misuses-of-ecstasy/) But it’s evil nonetheless. Their intentions don't matter to the evil of the matter at hand.
Intention does of course make an important difference in a legal sense (though I disagree with some of what the legal system has to say about this: for example that "mental illness" is a defense because the intentions were different - but this is beside the point for now). But there is a clear difference between accidentally knocking someone into traffic and deliberately pushing them. There, intent is everything. In either case a civilized legal system shows compassion, but in the former it might very well result in no gaol time at all, while in the latter one should probably be locked up until a very good explanation is known about whether one is predicted to do that sort of thing again.
I recall quite vividly having a thought myself around age 8, in Catholic Church, hearing about all the ways we might end up in hell and how salvation came through baptism. I reasoned myself into what I thought was the best possible thing that could be done for the newly baptized: swift death. Indeed what would God think, I wondered, if someone took on an "Angel of Death" kind of position and went around quickly murdering anyone newly baptized so that they went straight to heaven? And what if that person's thoughts were "even if I get sent to hell, I feel it worth it because I have saved so many souls" – wouldn't God look favorably on such a serial killer? Wouldn't that be almost like Jesus, or even better: the sacrifice of oneself for so many others? Why don't well-meaning people just murder babies right after baptism? Why don't loving parents? Are they selfish? After all, babies are necessarily innocent and would go straight to heaven once the original sin had been removed. I couldn't find a logical loophole. I still cannot… except that I've relinquished much of that religious mindset. But the point is: the intention can be of the greatest good, though terrible suffering (evil!) follows. And why? Because it's ignorance that makes the difference here: a serious lack of knowledge about reality. There is no supernatural heaven and no God to reward such an Angel of Death.
But if one remains unconsciously in a religious mindset, one thinks “evil” necessarily has to be about intention. One might say a person has evil intent and only people can be evil and so on. In reality, let’s just observe that the typical “evil genocidal dictator” seems to think they too are doing good purifying the world. They have, in their mind, “good intentions”. Of course we see their behavior as abominable. I think it more parsimonious to just say: problems can be an evil. A hazard. A real threat to life, liberty and the pursuit or actualization of happiness.
But if you insist “Only people can be evil” very well. *Shrug*. The terminology doesn't matter to the Principle of Optimism which runs deeper than whether we use the word "evil" or something else. Lightning that strikes an innocent child is, we must admit, a bad thing. What’s a bad thing? A problem - in particular a problem that causes some kind of suffering rather than one which elicits joy.
A bad thing? An evil thing. It doesn’t really matter what words you use. I like David’s formulation that “evils are due to insufficient knowledge” – I think it’s quite right, and it follows a lineage of usage of the word that tries to capture something of its fundamental relevance to morality, and so illustrates a deep connection between what might hitherto have been thought the separate domains of morality and epistemology. But if some people would rather we just say “bad stuff is caused by insufficient knowledge”, or “problems we’d rather not have are caused by insufficient knowledge”, or even "suffering is caused by insufficient knowledge", so be it. These are just logically equivalent ways of saying the same thing.
All bad things, from murderous intentions to murder, from traffic jams to jamming fingers in doors, from earthquakes to being hit in the head with rakes, from falling down stairs to attacks by bears and solar flares: all are caused by our lack of knowledge. If we knew how to prevent people from having murderous intentions, we should prevent them; if we could find the knowledge needed to predict and stop earthquakes, or mitigate their destruction to zero, we would and should. It all means the same thing. I’ll continue to call these things “evils”. Of course an earthquake has no intent as such – it’s not motivated, twisting a moustache and concocting plans about how to hurt lots of people. It literally feels nothing (panpsychists aside). But then apparently nor does a psychopath feel anything – or rather anything but joy (though I’m not convinced).
And once more this highlights “other” kinds of approaches to morality and epistemology that call themselves “objective” but in fact fall back into a subjectivist “what’s going on in your mind?” and “how does a person feel?” way of thinking. That “objectivism” is at heart subjectivist for that reason.
Actual objective morality and epistemology says: feelings are not the criterion. Private thoughts are not the criterion. What matters is what goes on out there in the real (ontologically objective) world. Are problems being solved or exacerbated? Problems being solved: that’s a good. Problems being exacerbated: that’s an evil. And why would problems be exacerbated? Because someone somewhere (perhaps everyone) is lacking sufficient knowledge. Recently in Australia many lives and homes were lost in bushfires. One reason is that well-meaning people did not want so-called "back burning" to occur (the preemptive burning of forest and undergrowth around people's homes). Well-meaning people didn't want the environment hurt; others didn't want people hurt by the smoke from the back burning. No one intended, and no one really foresaw, the December–January bushfires around Australia. Those bushfires were exacerbated by insufficient knowledge. And the death and destruction was a real evil. Yet no one intended evil (modulo arsonists – but again, many of those were children ignorant about the possibilities, and others even more ignorant than that). Evil isn't a parochial property of individual minds (though it can be that) – it's just a problem that causes suffering. And natural disasters, on this view, are an evil to be solved.
Or if you want they're just "bad things". I don't mind. :)
*When I say above "the INTENTIONS of these people are good" I am merely conceding the way the words "intention" and "good" are often used. Namely: if one feels one is intending to do good...then one really is intending to do good. But there is a philosophical subtlety here: one can intend to do good and yet still have bad intentions. This is the difference between a "subjectivist morality" and an "objectivist morality". On the former thesis (subjectivist) things are good to the extent people think they are. It might be yourself in your own life or some kind of consensus (a greatest good argument). On any subjectivist view: it's about feelings and/or personal psychological states including states sometimes referred to as beliefs. So if you really truly believe (or feel) that you're trying to do good...then you are in some way. But on the objectivist view, this is all mistaken. It doesn't matter how you feel: it matters what happens in the world by the measure of: are problems being solved or not? Now on that view your intention does not matter a jot. Your intention can be utterly mistaken. Moreover because morality isn't about personal psychological states we also do not condemn a person for that personal psychological state that was mistaken. We simply correct them. Error is the natural, near ubiquitous state of things and all of us are "equal in our infinite ignorance" as Popper said. So a person who intends to do good (our proverbial suicide bomber thinking they are sending those murdered souls to heaven) has an objectively evil intent. Their own feelings on the matter don't matter. Murdering innocents destroys many means of error correction (namely people) and that halts progress in some ways and slows it in many ways and creates all kinds of terrible problems while solving none. That is an objective state of affairs having nothing whatsoever to do with feelings and beliefs and other private psychological states. 
We can talk about suffering as a problem - even a painful one and the solution of it without ever worrying about receptors in the brain or, in abstract moral terms, being concerned about how it feels or why it's unpleasant. We just recognize: it's a problem that needs solving. And compassion for others and fast progress in this domain depends on us not getting hung up about who feels what exactly and why. The application of criticism to creative solutions about the problems that harm us most depends on us soberly, rationally applying reason. Everything else is either going to slow us down or cause us to lose focus on finding the solution.
After listening to the first half of the “Donald Hoffman with The Harrises” podcast I wrote down a bunch of thoughts here. I found little to disagree with, because so much sounded like something between mainstream ancient philosophy (we don't have direct access to ontological reality) and the work of David Deutsch in "The Fabric of Reality". So in the second half I was looking forward to hearing something controversial. It did not disappoint. What I want to say at the outset is that although I ultimately disagree in places on points of epistemology and science, and even on whether Donald's theory constitutes science at all, I think his overall approach is interesting and there’s no easy way to refute what he is saying. But this is just to say: almost any theory of consciousness is not easy to refute. Panpsychism is not easy to refute. As we have no theory of consciousness, nor even a good working definition of what it is, all approaches are more or less welcome at this point. So I want to say upfront: I think this is useful work. It’s worth trying. Almost anything is worth trying. That said, let’s get into trying to explain what he is saying and what criticisms one might conjure in response. Also: I actually read a couple of Donald’s papers to supplement his podcast. At times he seemed to deviate (especially under questioning from Annaka) from what he has said in published material. I’m going to stick with his spoken words rather than try to argue with “the two Donalds” as I see them (spoken and written Donald). For what it's worth I read http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf and also link.springer.com/content/pdf/10.1007%2FBF03379572.pdf and may comment on those in more detail at some other time.
Donald wants to find “a mathematically precise theory of consciousness” – and by this he means something like looking at all the ways consciousnesses interact (note the plural there: we are conscious of course, but it’s not clear “how far down the phylogenetic tree” Donald sees consciousness as running. At times he even speaks as though he agrees with panpsychism, and Sam basically seemed to put him in the panpsychist box at times – I tend to agree). Whatever the case, he wants a way of mathematizing the relationships between individual instances of consciousness. He talked about, for example, big data and how we might use that to analyse how humans interact online and everywhere else. If he could analyse these interactions or model them in some way, then he could have some sort of mathematical description of what human consciousnesses are doing. This was an interesting approach to my mind because he was contrasting it with the “usual” way of doing things which, as he explained, is basically to look “in” matter for consciousness. So most people begin with neurons and try to work their way to consciousness. They start with the physical and try to get to the mental. Donald said he was trying to take the opposite approach. He’s doing this, he says, because he’s motivated to find a “dynamics” of consciousness. But for this he needs a mathematically precise definition of consciousness, and he says he has one. Now I should say: definitions are one thing – people have tried defining something like “the electron” again and again and it’s no easy task. And, along with Popper, I'd say definitions are possibly misleading and always fallible anyway. What we need are explanations. So right away Donald is sort of in an instrumentalist camp. We will return to this. Whatever his "mathematically precise" definition is, we never actually hear even a rough-in-English version of it.
Whatever the case – at the 1 hour and 14 minute point in the podcast I hear the first remarks I can really begin to raise my eyebrows at – although not disagree with out of hand. What he claims is that his notional theory of consciousness – his “conjecture” one might say – is that consciousness is the most fundamental thing in the universe. So it is more fundamental than quantum field theory and general relativity and evolution by natural selection and indeed everything else we basically know of science. So his theory, if correct, must be able to generate quantum theory, general relativity and evolution by natural selection and everything else. And indeed he'd be right that the latter follows from a theory purporting to be the most fundamental of all. He says at one point “All of these scientific theories are wonderful tools but they have only been the science of our interface, not science of objective reality.” On this view even space-time is part of our “user interface” that he defined in the previous hour.
Now here is where I may part company with him. What is the role of science then? And explanatory knowledge generally? If we are trapped inside this user interface – a sort of Platonic Cave – then science becomes about us: a parochial arm of psychology. It's about what we are doing and what our brains (or minds, I'm not clear which) are doing. But I'm not sure of his thoughts on the relationship between brain and mind (see below). Even general relativity, on this view, is mainly about us – and not a separate spacetime. It seems on Hoffman’s view spacetime is not “out there” but rather “in us” in some way – as the manifestation of an interface with objective reality, or an interface between conscious creatures, or something. Now I am of the view: general relativity is a theory of stuff out there – it describes a really existing spacetime. Could it be the case that general relativity gets refuted? Not only do I think this could be the case, I think it must be the case. In some ultimate sense it has to be false because (1) it disagrees with quantum theory, so there must be a deeper theory than both, and (2) as a matter of principle all theories are improvable – we cannot have a final theory. If general relativity is the final word we must ask: WHY? And if there is no answer to that question we can find, progress stops and we are doomed, because some problems would be insoluble not because a physical law says so but because our capacity to understand the universe is bounded by a brain that is unable to grok something about physical laws. This would mean our mind is not universal. (As an aside, I admit that in some places – namely here: http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf – Hoffman straight up asserts that the mind is not what the brain does (see page 11). I am unconvinced, and he never grapples with the universality of computation anywhere – not in the podcast and not in the two papers of his that I read.)
This is a deep problem for his thesis, so far as I can see it. And his quick dismissal of actual, literal, realist quantum theory is probably also disqualifying. But then, few physicists take quantum theory as being literally true. We’ll come back to this also.
The idea that spacetime is part of us, or of “consciousness’ user interface”, seems to me to be the mistake of thinking that a scientific theory is at best a fiction – perhaps a useful model – but not really connected with reality. This is an epistemological claim. A realist says: no – although ultimately false, scientific theories are not merely fictional-though-useful models. They are accounts of what is actually there. There is a connection, therefore, to ontology. Hoffman seems to be discounting this. Even if some consciousness theory is deeper than all other scientific theories, this does not mean those other scientific theories have no basis in objective reality. Consider by analogy any other demonstrably false theory. I’ll go to my favorite example: Newtonian gravity. It says that there exists a force between particles that varies inversely as the square of the distance between them. Now that theory has been shown by experiment to be false. But is it utterly false? A complete fiction? No. There really is a way that particles influence each other in space (so that's correct), but it is via the spacetime between them. Einstein’s general relativity replaces Newton’s theory but it does not say of it: utter fiction. The motion of things in space does indeed approximately obey the inverse square law. It’s not a random guess. Approximations are approximations to something, and they cannot be approximations to other complete fictions. They have to be getting at something if the predictions being made are reliable to some extent (better than random guessing).
Whatever the case: Hoffman argues that because his theory of consciousness means consciousness is more fundamental than space and time that it means consciousness itself is outside space and time. Well…that’s rather neat. I don’t think there is the slightest bit of evidence refuting the claim that consciousness is IN space and time, but there we have it for now. We need an experiment (and more precise theories too) in order to decide between the two claims: consciousness is inside space time vs consciousness is outside space time.
Annaka asked a question about “The Fundamental Nature of Reality”, and this comes up more than once. Hoffman responds at this point that his theory of consciousness would indeed answer to that label. So, ultimately, Hoffman believes in an end to science: he thinks there can be a final theory and that he is on the way to finding it. It won’t explain “everything” – he concedes that – but within the realm of fundamental physics he would have a final theory. It would be non-improvable (he doesn’t say this – I’d like to ask him – but his language is telling: “final”, “fundamental nature of reality” and so on). So while he doesn’t think quantum theory or general relativity or natural selection can be the final word (quite right), he does think his theory could be.
But once we have that theory, why can we not ask: why isn’t it some other theory? Like if someone came up with an alternative – why must it be ruled out a priori? Will his theory explain why all other theories are not possible? And if so, will it then explain why his mathematics is infallible? Why he cannot possibly have made a mistake and so on? These claims to “absolute final theories” all run into these questions. The alternative is: however deep your theory goes, that is no guarantee you’ve reached the bottom. There may be no bottom. There may just be better and deeper places to delve.
Hoffman speaks of his theory of consciousness as describing consciousness as a “field” not unlike the fields in a physical theory. I suppose he might think – though he didn’t say this – that particular instances – your mind and mine – are excitations of that field rather like the way electrons and photons are excitations of the quantum field. I’ll need to read more about this to find out but that could be a neat idea. Annaka rightly observed that this means this would be a field outside fields – so a field outside spacetime. But Hoffman says he is using the word field in a different sense to that in quantum field theory but admits the conscious field will be outside spacetime (see the 1 hour and 18 minute mark).
Soon after Sam interjects with a question about panpsychism. Incidentally Sam says a thing he often has: consciousness is the one thing that cannot be an illusion (following Descartes). Hoffman is no fan of panpsychism and rejects the label because he says unlike the panpsychists he is not starting with the material world as an assumption. So Sam later suggests he is an idealist. Hoffman also rejects that label. On the "cannot be an illusion" thing, I used to think the same. I was talked out of it by David Deutsch. My own way of putting this now is: what's not an illusion? Consciousness? What about that string of letters? Does it infallibly label what you think it does? Is that an illusion? If it could possibly be, then what exactly is not an illusion? And given we know that this moment actually is being experienced in the future in some sense (because neurons take time to fire), then consciousness is already a complicated thing that could be illusory. But I digress.
Hoffman makes some interesting remarks to further explain his “user interface theory”. He says: when you look in the mirror you see skin, hair, teeth, a face and so on, and yet you know there are emotions, sensations, hopes and aspirations there. So the face is like a portal – an icon, so to speak – which gives you access to consciousness. So too with other people. Now I quite like that. But he then goes on to say that a cat is an icon for consciousness too, although it’s “dimmer” (his word) – he thinks this as he can tell his cat likes some food but not others. Hoffman then goes even further: a rock or a glass of water is an icon of consciousness. At one moment he rejects the idea that the rock or water are themselves conscious – instead he says the consciousness is “behind” those icons in some way, and some of those consciousnesses are greater and lesser than ours. This is where he lost me. He seems to be saying that what we see as rocks or glasses of water and so on are the means and ways in which we conscious entities interact. At least that’s my steelman version of what he is getting at. After all, were there no physical objects out there, what point would there be in consciousness and what would we have to interact about?
Sam rightly suggests that this is all panpsychism but it’s in virtual reality.
Hoffman dismisses the quantum multiverse for poor, sadly incoherent reasons revealing a level of ignorance about quantum theory. He says, for example, that it “does not explain the measurement problem”. I do not have the time here and now to explain entirely why that’s wrong but here is a brief sketch:
The so-called “measurement problem” is the question as to why before a measurement is taken in quantum theory, the mathematical formalism describes a system (say a single electron passing through a slit) as taking all physically possible paths. Upon measuring it on the other side of the slit (the observation) we see only ONE path having been taken. But the very point of the multiverse is to take seriously all those paths described by the formalism as equally real before, during and after the measurement. They all exist. But you only ever see one because you only occupy the one universe (in a sense, speaking loosely). But quantum mechanics describes many many universes. This is really not that mysterious. It’s rather like how even Kepler’s laws describe an orbit as consisting of many many points in space…and yet Earth is only ever observed at some point in time to be at one of them. The others really exist though – but we’ll never observe the Earth at all of them. Sure: we can take photos, perhaps when it’s somewhere on January 1st and then the 2nd and so on. But we can do the same for the electron – repeating the experiment and noticing it really does take all the possible paths. Whatever the case, Hoffman fundamentally – deeply misunderstands what quantum theory says and how the multiverse solves these misconceptions people have. (Look forward to an upcoming ToKCast for this in just two episodes when we go through “The Multiverse” chapter of “The Beginning of Infinity”…or if you can’t wait – read it yourself now!). For anyone playing along I've written a number of short articles on The Multiverse like this one.
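Schematically (a standard Everett-style sketch in my words, not notation used in the podcast), the point is:

```latex
% Before measurement: the formalism assigns an amplitude a_i to every path.
\[ |\psi\rangle = \sum_i a_i\,|\mathrm{path}_i\rangle \]
% Measurement does not pick out one path; unitary evolution entangles the
% observer with each of them:
\[ |\Psi_{\mathrm{after}}\rangle = \sum_i a_i\,|\mathrm{path}_i\rangle
   \otimes |\mathrm{observer\ sees\ }i\rangle \]
% Every term persists. Each observer-instance sees exactly one outcome,
% which is why any single observer only ever records ONE path.
```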
Hoffman is thus also confused about the relationship between quantum theory and probability. I recommend https://www.youtube.com/watch?v=wfzSE4Hoxbc which, in a strongly competitive field of amazing talks by David Deutsch, is right near the top for me. The long and short of it is: randomness is always just subjective. In the multiverse everything that can happen really does happen. So it happens with probability = 1. Probability was invented to explain games of chance but has been repurposed – perhaps even perverted – for all sorts of things it perhaps shouldn't be. Now in the case of the multiverse some things happen more often (with so-called “greater measure”) than others, but happen they do. And doing probability on infinite sets doesn’t make sense because we can never “do infinite probability experiments” to check whether our theory of probability is actually correct. If the theory is supposed to be about the physical world we need to be able to test it against the physical world. So probability theory runs into this deep problem of being untestable. And if it's not supposed to be science but rather a branch of pure mathematics, then it doesn't apply to the real physical world. Whatever the case: Hoffman says that quantum theory simply does give us objective randomness, which is a terrible mistake and a gross misunderstanding of the theory. But these misconceptions are all intimately tied together and I think a conversation between him and David Deutsch on this point would be fascinating. But for now, if you do read this, Donald – please watch that video and read David’s book “The Beginning of Infinity”. I say this not because I want to refute your entire thesis but rather – like Annaka – I think there could be something there which might be improved by fixing up these quibbles I have. Moving on…
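As a toy illustration of “measure” (the amplitude values below are made up purely for illustration; this is my sketch, not anyone's published code): every branch occurs, and the squared amplitude gives each branch's weight – which is all that an observer inside a branch experiences as “probability”.

```python
import math

# Hypothetical amplitudes for two branches of a measurement
# (values chosen only for illustration).
amplitudes = [1 / math.sqrt(3), math.sqrt(2 / 3)]

# On the Everettian view both branches occur -- each "with probability 1".
# The squared amplitude |a|^2 is the branch's measure: its relative weight,
# not an objective chance of it happening versus not happening.
measures = [abs(a) ** 2 for a in amplitudes]

print(measures)       # approximately [0.333, 0.667]
print(sum(measures))  # the measures over all branches sum to unity
```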
Free Will: Hoffman broadly agrees with Sam about free will, but at the same time Hoffman uses the word “agent” with rather wild abandon in places. He is tripped up on this by Annaka and it’s not entirely clear what he means by “agent”. He does say he has a mathematical description of what “choice” is and how choice must arise from unconscious factors, and hence these choices cannot possibly be real (free) choices. He goes so far as to say (on his theory): you cannot be conscious of making a choice because the mathematical formalism says so. But I think most of us refute this by personal experience. I am conscious of making a choice and no matter what your mathematics seems to say: I'm choosing to finish this sentence.
This brings me to my most major gripe from the podcast and Hoffman’s approach:
Hoffman very heavily emphasizes the mathematics of his theory whilst deemphasizing experimental testing. He insists upon the formalism. It’s an interesting “tell”, by which I mean: many of us have bemoaned so-called “scientism” – the application of scientific methods to problems that are not science (purely philosophical problems, like the very question: are there philosophical problems?). Scientism is roughly the stance that no knowledge is “valid”, or some such, except scientific knowledge. Now I know this isn’t a word – and I tend to hate neologisms (a point I've made over and again) – but I’ve often accused some of “mathematism”: the claim that mathematical theories have some special status, or that without some formalism a theory has less status. I think this is a mistake. Now I’m not (yet!) accusing Hoffman of making this mistake necessarily – he might think the mathematics just arises naturally. But the issue I have is that a scientific theory is typically able to be explained in natural language at least somewhat well, to some crude resolution perhaps. Even string theory is about...well, strings – which can be rather simply described using words. I concede the pioneers of quantum theory often failed at this. But even they were able to say something in English. Hoffman doesn’t say much about what consciousness is exactly in natural language (i.e. simple English) – he just says it’s “mathematically precise”. Now this may be all well and good to some extent. For example: we should take something like the Schrödinger wave equation seriously. And that equation – the equation that describes (for example) the motion of an electron over time as having a wave-type shape – tells us that the electron really does occupy many possible places simultaneously and can take many possible paths simultaneously. It is taking this equation – the mathematics – seriously that leads us inexorably to the multiverse (among other things).
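For reference, the equation gestured at here is the (standard, textbook) Schrödinger equation for a single particle of mass m in a potential V:

```latex
\[ i\hbar\,\frac{\partial \Psi(\mathbf{x},t)}{\partial t}
   = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\Psi(\mathbf{x},t)
     + V(\mathbf{x})\,\Psi(\mathbf{x},t) \]
% \Psi assigns an amplitude to every position at once -- the "wave" aspect
% which, taken seriously, describes the electron's many simultaneous instances.
```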
I hasten to add here: the electron itself is not a wave: it is a thing that can occupy a “sharp” position in space – but the wave aspect means that it has many (initially fungible) instances that can take all the different possible paths. Now the difference here is that there exist experiments in quantum theory. Many many experiments and observations. We don’t start with the formalism: we start with certain problems prompted by strange observations. In the case of quantum theory some observations are: patterns generated by certain interference experiments. So then we seek an explanation – a solution to our problem. And THEN we seek to formalize all that. This is science – this is reason. It seems to me that Hoffman has no problematic observations he’s really trying to explain. I mean: he has consciousness, but it’s not an observation of the form: if we see that, then my theory is refuted (or not refuted). Instead he seems to be defining into existence something he calls consciousness and then doing some mathematics. But we don’t merely define electrons (for example) into existence – we see their actual marks on photographic plates and so on. Experiments matter, so we absolutely need some experiments to check Hoffman’s thesis. But what would these look like? I cannot even imagine. And crucially we need so-called “crucial experiments” (see here for more about that): an experiment that decides between at least two theories about what’s going on. If the result goes one way, one is refuted and the other theory is not. But without a good working definition of consciousness how can we even begin to know how to test Hoffman’s theory? Test? Test against what? People fundamentally misunderstand the nature of science on this point, which is why David Deutsch’s original paper on all this is so important. (And again, if you want the crib notes on that, see here.)
So Hoffman uses the phrase “mathematically precise” theory/definition over and again, as though if he could just get this robust, self-consistent mathematical theory he will have explained consciousness and other things. But we must recall: Newtonian gravity is “a mathematically precise theory of gravity”. It is also demonstrably false (which, by the way, is a strength of the theory: it makes predictions we can observe in the real world, and when we fail to observe what is predicted while general relativity is shown to be accurate, we have refuted Newton). So “mathematically precise” does not mean true. Indeed mathematicians make up mathematics all the time that has nothing to do with reality. Famously, Hardy in his "A Mathematician's Apology" celebrated doing this. Like so many pure mathematicians, he regarded the subject as more an art than anything else.
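A concrete instance of the Newton-vs-Einstein point: general relativity predicts an extra perihelion advance per orbit that Newton's (equally “mathematically precise”) theory lacks. The standard result, with G the gravitational constant, M the Sun's mass, a the orbit's semi-major axis and e its eccentricity, is:

```latex
\[ \Delta\varphi = \frac{6\pi G M}{c^{2}\,a\,(1 - e^{2})} \]
% For Mercury this amounts to roughly 43 arc-seconds per century --
% observed, and unaccounted for by Newtonian gravity.
```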
Now I hasten to add here that Hoffman is no crank (so far as I can tell!) because he readily admits that because he makes the strong claim that consciousness is more fundamental than quantum theory, general relativity and so on, that if his theory cannot produce quantum theory, general relativity and so on as “limiting cases” then his theory is absolutely false. So that’s something at least.
Hoffman invokes Godel’s incompleteness theorems (always a bit of a red flag to me) but not in a too egregious way. He concedes, with prompting from the Harrises, that taking LSD or mushrooms might help him – Hoffman says he has never so much as smoked a cigarette. He says he has spoken to friends who are “experienced psychonauts”, and what they report back – certain visualisations and the feeling of being “outside spacetime” – is in very good keeping with his own theory. But of course: that would be some kind of empiricism: knowledge coming from the external world…wherever that world might be.
The final question from Annaka is perhaps the best one as it reveals something about the thought processes of everyone in the conversation, so I’m just going to transcribe the question and the answer in full and make some remarks afterwards. Italics and boldface are my emphasis.
Annaka: “Doesn’t it seem that where we’re left with all of this is that even if you’re able to make great progress with the math and with the theory and that you’re able to come to some point of feeling that you’re confident that this is correct or likely to be correct, what would constitute proof? It seems to me to be outside the realm of scientific experiment. When we’re talking about consciousness you could tell me there’s some theory that says that there is consciousness in the atoms of the sofa I am sitting on but how would we be able to confirm it just seems to me it’s impossible to confirm, so I’m just wondering if there’s some sense in which this can result in some type of scientific experiment or what would result in proof?”
Donald: Right. That’s a critical question for a scientist and the bottom line for me is that if my theory of consciousness when I project it back – so I have to have a theory of consciousness and its dynamics that is absolutely precise. I have to have a mathematically precise mathematical function that projects it into our spacetime interface because that’s where we’re going to get any data for our experiments. And then the first thing – the first test of my theory will be – if the dynamics when it’s projected into spacetime does not give me back all of the consequences of evolution by natural selection, general relativity, quantum field theory…if I cannot do that, I’m wrong. I’m flat out – I’m wrong. So already all the sciences we have right now are going to be an acid test for this deeper theory.
Annaka: Except that you’re starting with this base assumption that you’re calling this foundational math – you’re saying that it represents consciousness but I’m just wondering if there’s any way that that can be confirmed or will that always be an assumption?
Donald: So…what I will be able to do is to see if I’m wrong. Right? So if there are things that if I cannot do them, the theory is absolutely wrong. What I cannot do is prove that I’m right. But that’s not unique to my theory. No scientist can ever prove that they’re right. The best we can do is get evidence that either disconfirms our theory or reduces our credence in the theory or that in some sense increases our credence in the theory. But it’s sort of an elementary point in the philosophy of science that no scientist can ever prove their theory. And even falsification in the strict Popper sense – you may not strictly be able to falsify a theory – you may, because there are so many auxiliary hypotheses that go into a theory that you don’t know which one is the faulty one. But, my theory can be tested as much as any scientific theory can be* and one test that I would take as absolutely critical is if I can’t get back evolution by natural selection and say quantum field theory when I project into our spacetime interface I would take that as clear evidence I’ve got the wrong theory of consciousness.
My comments: I’m going to leave aside all of the anti-Popperian stuff there, more or less: lots of talk about “prove” and “confirm” (anti-fallibilist notions of science, or confusion between purely deductive systems like mathematics and explanatory sciences like physics) and “confidence” (reducing epistemology to feelings) and even “credence” – the Bayesian mistake (I explain some of that here). As to “cannot strictly falsify” – this is the Duhem-Quine thesis and is all very well. And we cannot prove it true. Indeed.
But this claim that the test of “getting back” quantum field theory is an experimental test is a little dubious. I mean, he’s strictly correct that if he makes the claim that everything else is emergent and his theory is fundamental, then if he cannot produce quantum field theory he’s wrong. But quantum field theory rests not just upon the fact that classical mechanics is a limiting case of sorts, but rather upon many many real world observations only explicable by quantum theory. So can his theory *be tested as much as any scientific theory can be*? I don't think so. Consider the case of quantum theory alone. It is testable against: the photoelectric effect, Compton scattering, radioactive decay, interference experiments, quantum computation, etc, etc. I mean, a deep theory seeks to solve many discrete problems and is tested against novel observations and experiments. At least typically. Even evolution by natural selection solves problems about homologous structures, the similarity of DNA, embryos, the fossil record, Galapagos finches, etc, etc, etc. And general relativity: neutron stars and black holes, the precession of Mercury’s orbit, Eddington’s observations of starlight during an eclipse, the GPS system, etc, etc. But Hoffman’s theory – so far as I can tell – does nothing like this. So it's not as testable as any other scientific theory. Not by a long shot. What are its PREDICTIONS? It defines into existence something he labels consciousness – but he has not told us in the entire podcast what consciousness is in simple terms, or what we might observe out there to notice whether it's on the right track or problematic because it's (say) postulating disembodied consciousnesses which direct the motion of clouds (or some such).
So without a simple definition, and without an explanation of WHY that definition is correct beyond: here is some mathematical formalism from which we can generate quantum field theory, general relativity, natural selection and whatever else – this seems more like a mathematical trick than actual science. What predictions does it make BEYOND just: here is a theory that has as limiting cases all other theories? That’s certainly interesting…but is it consciousness?
General relativity explains what gravity is: the bending of spacetime by mass/energy. And then it predicts things like black holes. Which are observed! And which predicts the bending of light during an eclipse – which is observed. And sure it “gives us back” Newtonian gravity as a limiting case…but that is almost beside the point.
So for Hoffman’s theory to fit this scheme I expect not only for his theory of consciousness to “give us back” quantum field theory (and the rest) – it has to tell us what consciousness is exactly (as general relativity tells us what gravity is) AND it has to predict things we hitherto do not know about, like black holes, allowing us to test the theory by making precise predictions about what we should observe. And if the response is: but that would be about “the interface” and not “objective reality”, then we have a theory immune from experimental criticism. And that ceases to be science.
But this does not disqualify it because it could be very interesting metaphysics and the beginnings of an actual research project.
I have to read more of his material.
A number of people have recommended the work of Donald Hoffman to me. I am not very familiar with him – I have not read any of his books nor listened to his talks (like his TED talk). I think I will soon. What I have done is listen to the first 1 hour and 7 minutes of his interview with Sam Harris.
I was expecting to encounter something other than what I did – but I’m not sure what. Long story short: I did not find anything he said particularly controversial (I emphasize this as someone who has read Popper and Deutsch). I imagine what he says would be controversial to mainstream thinking about the relationship between mind, our mental models and objective reality – but linguistic quibbles aside, I found what he said to be refreshingly familiar. In particular, what he said could almost have been conjured by anyone who has read Chapter 5 of "The Fabric of Reality" by David Deutsch. It was that close...some minor linguistic quibbles aside, and perhaps some hints of relativism.
So, some notes on the conversation, roughly in chronological order in which they occurred in the podcast:
Hoffman says he is reacting against his colleagues (in psychology, I presume) who he says regard evolution as having shaped us to “see truths about the world”. In other words to see the actual, final truth in a sense – or objective reality as it is. This is all well and good. In Popperian terms: he objects to empiricism (the idea we derive knowledge about reality from our senses) – and quite right too.
Hoffman gives an account of his “user interface” idea – which, it seems to me on a cursory understanding of what he said in this podcast, is perfectly harmonious with the idea that we do not have direct access to reality. Indeed. We will come back to this later with a discussion of virtual reality. This is in line also with what Plato, Descartes and the Wachowskis (in The Matrix and various other movies) have told us over the years. What we tend to experience isn't reality exactly, and we could be terribly deceived. Thinking we're actually in a Matrix is, of course, a step far too far. But the point is: our senses can deceive us and may not always (or even typically) give us reliable knowledge of objective reality.
Annaka Harris points out during the discussion that “we don’t know what light is” and “we don’t know the fundamental nature of reality.” I only have a linguistic quibble on the first point: we do know – so long as one takes on board the Deutsch/Popper notion of what knowledge and “to know” is. It means: to have a fallible explanation of. It does not mean “grok the final theory”. So we know what light is – but we don’t know what light ultimately is. We never shall. All theories must be improvable because we don’t have direct access to reality. But we can gain fallible knowledge about light (and everything else) to some extent. As for not knowing the fundamental nature of reality: yes, correct. And uncontroversial so far as that goes for a critical rationalist/fallibilist. For more about what "to know" means, see here for a brief explanation.
Hoffman says that “belief in science is not a helpful attitude” – which is to say believing scientific theories is not a helpful attitude. Wonderful. If he has not read Deutsch or Popper, then he has independently converged on that epistemological worldview to a large extent. Hoffman regards scientific theories as “the best tools we have so far”. A minor quibble on the use of “tool”: tool conjures up a device for just solving certain problems. Now it depends on one’s emphasis here: if the only purpose of science is to predict (a tool for prediction) – I disagree. That is one thing science can do. But science gives us some approximation to reality as well. So I regard scientific theories as tools that tell us something about objective reality – they give us some account of what is really there. Hoffman seems to more or less agree with this in some later remarks.
At 41 mins in, Hoffman invokes “the virtual reality metaphor”. He talks about putting on a virtual reality headset and seeing race cars in a computer game. Obviously the cars don’t actually exist. Now he says this is basically the state we are always in. Yes – exactly. I guess Plato would not disagree (the shadows on the cave wall are not the objects in themselves – and we are trapped in the cave, the VR simulation). Now in “The Fabric of Reality” chapter 5 is titled “Virtual Reality” and it is a long thesis about precisely this point, among other things. This "virtual reality" explanation is no mere metaphor either: we really are simulating the external world in our minds using sense data. David writes on page 120 in that chapter, “So it is not just science – reasoning about the physical world – that involves virtual reality. All reasoning, all thinking, all external experience are forms of virtual reality.” I actually think David is making a stronger claim even than Hoffman. David published his book in 1997 – that seems to predate anything of Hoffman’s on the topic by over a decade (I can see the user interface theory was published in around 2010: https://link.springer.com/article/10.1007%2FBF03379572) and of course Plato wrote about his cave in around 400BC – so these kinds of ideas, or aspects of them, have been refined and rediscovered or reformulated again and again.
Hoffman admits “something” is there in reality but he says he just doesn’t know exactly what. The word “exactly” does a lot of work there. The sentence is true with it, and false without it. Annaka says “We do not understand the deeper reality” (the reality deeper than the one more fundamental than what quantum theory says) – and of course this too is correct and uncontroversial: we do not know what we do not yet know (in this case, what the successor to quantum theory will say…and what the successor to the successor will say and so on).
David engaged with at least some of Hoffman’s ideas here: https://twitter.com/stealthytooth/status/723555137478905856?s=20
And that language Hoffman uses which seems to deny the existence of objective reality (or parts of it – they mention the moon and a glass of water on the table) he tightens up – is corrected, I should say, by Annaka Harris. It seems Hoffman steps back from the claim that there is no objective reality, or that the moon or glasses of water in particular don't have an objective reality. So he has a habit of sliding into a sort of soft-relativism or something…yet when pressed seems to concede realism. At one point Annaka rightly interjects when Hoffman uses the word "miracle" to describe "premises of a scientific theory". Hoffman seems to needlessly confuse the reader or listener with terms that make things seem more controversial than they are. A premise need not be regarded as a "miracle" (some mystical violation of physical law) just because it cannot be derived from any known theory. Hoffman admits he should just use the word "premise" or "axiom" instead. It is this kind of style that, it seems to me, may make some of his ideas sound more controversial or counterintuitive than they really are.
Overall, I liked what he had to say, with some minor quibbles over his use of language – especially that which touches upon anti-fallibilist thinking (but this is true of almost anyone who hasn't taken Popper on board). So, so far, what he says seems roughly in line with what David wrote in Chapter 5 of “The Fabric of Reality”. But I stopped the podcast just as they were beginning a discussion of free will and consciousness. I guess what comes in the next hour and a half of the podcast could prompt something else...but so far I am nodding in broad agreement and didn't once roll my eyes.