I have a tendency to avoid hyped books. Sapiens and its sequel, Homo Deus, were definitely part of this category, having been praised by presidents and thought leaders (whatever that means). But an article about the meditative practice of the author, who's also a gay Jewish historian, piqued my interest. So… I finally read both of them.
Sapiens, which attempts to summarise all of history in 400 pages, exceeded my expectations, although I think it will be more interesting for people who don't describe themselves as history buffs. The book condenses our presence on Earth into 3 big revolutions: cognitive, agricultural and scientific. The follow-up, Homo Deus, deals with the future, and I found it much more mentally stimulating than the first book. It surprised me many times but also challenged me to debate the author again and again. It's very rare that this happens with a book, so a post was in order. Before we begin I wanna clarify that the following is my attempt to put down some of my struggles with the author's conclusions rather than a simple review. Because of that I will ignore most of the wonderful and insightful ideas that are sprinkled throughout the book and focus on areas of disagreement.
There are stories & there are stories
Homo Deus starts where Sapiens left off: the 21st century. Harari debuts with an optimistic bang by claiming we are on the cusp of eradicating famine, disease and war. The author follows a familiar line of thinking, one popularised by Steven Pinker's The Better Angels of Our Nature.
While famine and disease could potentially be eradicated through scientific advancements, the eradication of war needs different (and better) arguments, as the science we used to cure diseases can also be used to blow up cities. Harari seems to be a big believer in deterrence as the ultimate defence strategy (if we all have atomic weapons, or their 21st-century equivalent, then what's the point of war?). He also believes that conquest has little value nowadays. For example, Russia can benefit tremendously by invading a country like Ukraine, because it profits directly from Ukraine's oil and gas reserves. But this is the case only because Russia's industrial economy is still trapped in the 20th century. What would Russia get if it conquered Silicon Valley, the author asks? All the value there is trapped in the minds of the engineers and codified in algorithms hosted on servers across the globe. As Harari notes, there is no silicon in Silicon Valley. I would like to share the author's enthusiasm, but while wars do have deep economic motives, people don't fight for macro-economic indicators. They fight for deeply held values, even if some of those values can be criticised and proven false in hindsight.
These individual mental delusions (and as Harari would know from his Buddhist practice, we all live trapped inside false narratives created by our minds) were, until recently, less impactful on the grand scale of history. Some beliefs might have been empirically false (like the ancient Egyptian belief that the world is a stage for the fight between Horus and Seth) but they lasted because of their psychological practicality. Sapiens makes a very good case that we have evolved our religions over thousands of years, telling ourselves better and better myths. Stories that conflict with human experience, however, have very little shelf life. Jediism (the Star Wars-inspired Jedi religion) may delude a few geeks into thinking they can control objects with their minds, but we don't expect this belief to last very long. But, and this is a HUGE BUT, even a fundamentally flawed ethical system, like Jediism, can escape the evolutionary grip. All it takes is a crazy person with a big enough gun.
Like a lot of other intellectuals, Yuval Noah Harari doesn't seem to take these myths seriously. Ironically, he notices the fundamental role they played in our upbringing, but, maybe because of his attachment to Buddhist philosophy, he downplays their influence on the individual. There are thinkers, like Sam Harris or Nassim Nicholas Taleb, who practice more caution and don't surrender vigilance to this naive narrative.
The death of humanism
Before dealing with the future, Harari recaps the philosophical revolutions of the last centuries. He mainly focuses on Enlightenment thinkers and the placement of the individual in the centre of our ethical framework. A few hundred years ago we postulated the sanctity of each human and this allowed for extremely beneficial developments. We are now governed by democratic systems, our economy operates in free markets and capitalism has eradicated poverty and increased the comfort of billions.
But it's all based on a lie, Harari observes. There is no self acting as an independent decision maker. And the idea of free will, which sits at the foundation of our society, is equally false. Harari probably reached these conclusions through meditation, but he backs his claims with science. And he is right. For example, Gazzaniga's split-brain experiments show that there is no indivisible "us" in control of our body. Furthermore, Harari notices, there is no material distinction between cognition and biology. To that end, our individual existence is only the result of billions and billions of events put in motion with the Big Bang. We are acting in a big cosmic play, rather than sitting in the director's chair of our pathetic existence. So far so good. I was never a believer in free will, but I found the discussion to lack any practical implications. Free will may be an illusion, but since we can't even fathom modelling the structure of the universe, it is safer to treat the illusion seriously.
Harari disagrees. He separates consciousness from intelligence and proposes that the two are on different evolutionary paths. Intelligence, the author implies, will be taken over by machines. It goes without saying that the author thinks it's only a matter of time until artificial intelligence leaves us humans behind, and that we would willingly surrender more and more of our consciousness to benefit from the pleasure generated by artificial intelligence systems. He looks beyond the AI applications that exist today and offers more sophisticated examples. Say we have to choose the person we will marry and find ourselves trapped in deep, long internal dialogues struggling to make the best decision. We now seem to think that this sort of momentous fork in the road is too complex and too personal to be turned into a sequence of steps that could be executed by a computer. But what would the situation be a few years from now, when a Google-like system, with access to all the data about us, could make a very precise recommendation, one guaranteed to generate more happiness? Wouldn't we all surrender our imperfect way of thinking to the uber-intelligent overlords?
The book, and its main argument, takes a sudden dystopian turn with this realisation. Harari predicts a future in which:
- the rich will augment and upgrade themselves, becoming immortal gods, while the poor will be trapped in a cycle that resembles our current dreary existence.
- a new religion will appear, one that takes the Enlightenment's postulation of the individual to its natural conclusion. If the self does not exist, then the only thing that remains is data. Humans, and life in general, are nothing but collections of algorithms, some functioning better than others.
I find both of these predictions lacking. Let's take them one by one:
Rich Gods and Poor Mortals
Harari has a very utilitarian (almost nihilistic) narrative of the world. It's something I would expect from a deep meditator, a person trying to mitigate suffering by severing our ties with the desire for worldly goods. While he chases personal enlightenment in Buddhist temples, Harari is not so generous with his fellow human beings. He (arrogantly?) thinks that most people are trapped in cycles of chasing pleasure and avoiding pain, and are thus ruled by a functional value system. It is true, Harari notices, that in the past capitalism was a force for good, as it distributed the benefits of scientific discovery better than anything else. Antibiotics, access to energy and so on have huge benefits for billions of people. But that period of growth is reaching its end, Harari thinks. He postulates that all the previous development was fundamentally caused by the greed of the world's billionaires, as it was profitable to have healthier people working in the factories. But with automation taking more and more of the jobs, the profit motive would cease to exist, and so the greedy capitalists would use their wealth to escape death and suffering while the poor remain mortal and miserable.
I find this argument to be both misinformed and unrefined at the same time. Although he cites Hayek in certain parts of his book, his definition of value is shallow at best. I wrote about this in my counter-argument to the AI-driven apocalypse scenario. Harari's premise, one that is never explicitly stated, seems to be the same Marxist idea (although tightly wrapped in intellectual-sounding meta-philosophy): that the means of production are solely or mainly responsible for the economic output of a service. As I wrote a few months ago:
We are complex beings and our needs, as consumers, are complicated as well.
Functional needs represent just a small part of the characteristics that we humans appreciate when making a purchasing decision. When asked to justify a recent acquisition, we may point to discrete functionalities, but in most cases these are just after-the-fact rationalizations.
Our human limitations may keep us from being able to define our value-judgement algorithm, but there is still hope for the product person. HBR proposes that universal building blocks of value do exist, and identifies 30 of these "elements of value". The elements fall into four categories: functional, emotional, life changing, and social impact. Some elements are more inwardly focused, primarily addressing consumers' personal needs.
Dataism and the death of SELF
Let's deal with his second prophecy: that we will renounce our religious attachment to the individual and reform our moral system to value data instead of the human being. Intelligence, the author thinks, only needs data. As in the previous section, I challenge Harari's premises:
- That intelligence exists as a one-dimensional metric
- That intelligence can be separated from consciousness
He seems to start from these premises because he has a very Platonic interpretation of intelligence. Intelligence, as I understand it from my reading of his book, is described as an absolute decision-making algorithm: some people have a better algorithm than others, but fundamentally they are all imperfect, because our processor works with interpretations of data (emotions) and our access to data is incomplete. Thus, Plato's philosopher king will most likely be a very advanced computer, making better and faster decisions, free from the interference of biologically constrained humans.
But is intelligence one thing? Harari has no problem adhering to a modular theory of the mind and obliterating the idea of an indivisible self, but he is not as inquisitive with intelligence. If the self is an umbrella term for a collection of mini-consciousnesses, shouldn't we at least explore the idea of a multitude of intelligences? Kevin Kelly advocates for exactly this: intelligence, Kelly notes, is not a single dimension, so "smarter than humans" is a meaningless concept. Most technical people imagine intelligence the way Nick Bostrom does in his book Superintelligence: as a literal, single-dimension, linear graph of increasing amplitude.
Our brains run many different modes: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, long-term memory and so on. As with the self, treating intelligence as single-dimensional seems false, and it leads Harari to questionable conclusions.
What about intelligence existing in a different dimension from our consciousness? Can intelligence operate without purpose, which is deeply linked to our (faulty) interpretation of self? Harari thinks it can. He gets there by reducing existence to data and postulating algorithms that don't need humans' fuzzy value systems. As an example, he asks how a computer would choose between a Beethoven composition and a Justin Bieber hit. His solution is to reduce both songs to bits, making it easy for a computer to recommend Beethoven because his music is more complex and thus contains more bits than Bieber's silly pop repetitions. I find this oversimplification quite laughable.
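To make the "reduce songs to bits" idea concrete, here is a toy sketch of it, using compressed size as a crude proxy for complexity. The two byte strings are hypothetical stand-ins I made up, not real audio; the point is only to show the mechanism Harari is gesturing at:

```python
import zlib

def complexity(data: bytes) -> int:
    # Crude proxy for "information content": the size of the
    # compressed stream. Repetitive input compresses away almost
    # entirely; varied input keeps most of its bits.
    return len(zlib.compress(data, level=9))

# Hypothetical stand-ins for the two songs (same length, 3600 bytes).
pop_hook = b"baby baby baby oh " * 200                          # repetitive
varied = bytes((37 * i + i * i) % 256 for i in range(3600))     # varied

print(complexity(pop_hook) < complexity(varied))
```

The repetitive string scores far lower, so the metric "works" here. But by the same metric pure random static would out-score any Beethoven symphony, since noise is incompressible, which is exactly why bit-counting is a poor stand-in for musical value.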
I don't know if intelligence can be detached from consciousness and, to be honest, I cannot even postulate how that would work, if it is possible at all. But I am always amazed when people (like Harari) take it upon themselves to suggest an answer. I am not convinced by their argument, but I respect their courage.
You may think I am critical of the book, so I want to say it once again: I enjoyed Harari's books tremendously. I found myself amazed by many of his insights and arguments and I would recommend both books to all of you. But writing things down is one of the best tools in my arsenal for achieving clarity, and the fact that Sapiens and Homo Deus made me put down pages of philosophical argumentation is a good testament to their value.