The Origins of Reason

What does it mean, to be rational? This is the question that Hugo Mercier and Dan Sperber ask in their revolutionary book, The Enigma of Reason. This book was a complete revelation to me. It shattered some of my most deeply held beliefs about intelligence, and its implications for our understanding of human nature are profound. Here I summarize some of its main ideas, weaving in my own thoughts along the way.

How reason evolved

Many of our most complex biological abilities, such as vision, also evolved independently in other species. Yet only humans have the gift of reason: of all the species on this planet, only we can calculate projectile trajectories, build aqueducts, and put a man on the moon. The authors ask: “If reason is such a superpower, why should it, unlike vision, only have evolved in a single species?” A possible answer is that human reason evolved as an adaptation to a very special ecological niche, one that only humans inhabited.

Over the last 30 years, psychologists have argued that human reason is deeply flawed: behavioral experiments have repeatedly shown that our minds do a rather poor job at being “rational.” It turns out that human beings (you and I included) are systematically biased in most of our decisions. But this raises the question: why do we have such biases in the first place? And how can we assess how bad they really are? This is something that the existing academic dogma fails to properly address. It is paradoxical that intelligent beings should systematically fail to be rational.

The problem has to do with our understanding of what reason is, and how it operates. The common view is that reason is “a superior means to think better on one’s own.” But the rationality that evolved in humans is actually not a static quality that can easily be attributed to an individual’s cognitive abilities. Rationality rather refers to our ability to engage in a collective process, in dynamic group interactions over time. Many people believe that they are “rational,” but really, they are not. We evolved to be rational as a group, as a tribe, as a small society — but not as individuals.

In order to understand this, we must pay attention to scale, and distinguish how reason operates at the individual level and at the group level. What we call “reason” can be traced to a neurological module, a part of our brains that evolved to serve two purposes: (1) to produce arguments to justify ourselves and convince others, and (2) to evaluate not so much our own thoughts as the reasons that others produce to justify themselves and convince us.

In other words, our brains evolved for self-justification and to criticize others. This is the only thing that the cognitive faculty we call reason does very well. Almost all empirical research shows that, at the individual level, we are incredibly bad at critically evaluating our own ideas and arguments in isolation, by ourselves.

According to Mercier and Sperber, “humans differ from other animals not only in their hyperdeveloped cognitive capacities but also, and crucially, in how and how much they cooperate. They cooperate not only with kin but also with strangers; not only in here-and-now ventures but also in the pursuit of long-term goals; not only in [various] forms of joint action, but also in setting up new forms of cooperation.”

Such complex forms of cooperation create unique problems of trust, which our reason became adapted to solve. Evaluating the arguments of others allows us to decide who to trust, and who to follow. This in turn explains why our species developed a brain designed to handle incredible amounts of social information and complex interactions. “To become competent adults, we each had to learn a lot from others. Our skills and our knowledge owe less to individual experience than to social transmission.” However, there is also a downside: “benefits we get from communication go together with a vulnerability to misinformation.”

Human societies favor a brain that spends considerable energy assessing how credible people around us really are. But the resulting brain, the cognitive hardware that we are using today, is both biased and lazy: “biased because it overwhelmingly finds justifications and arguments that support the reasoner’s point of view, lazy because reason makes little effort to assess the quality of the justifications and arguments it produces.”

This becomes obvious when people start reasoning on their own. We usually “start with a strong opinion, and the reasons that come to [our] mind tend all to support this opinion.” We are usually not aware of our tendency to rationalize. In contrast, when we are part of a group, ideas and arguments are subject to evaluation from our peers. Reason evolved to operate in such a social environment. Its benefits become most apparent at the collective level, where its outcome is to produce an equilibrium of socially accepted arguments.

People believe that great ideas are born from talented minds thinking in isolation. But the reality is that contributions to science always depend on prior interactions with others. Take the example of Isaac Newton: it turns out that this scientific “genius” was a strong believer in alchemy. He spent thirty years looking for the philosopher’s stone to turn base metals into gold. He kept this mostly to himself. But in public, he developed a theory of physical laws that would revolutionize physics and engineering. The difference is that the latter was critically reviewed by his peers at the Royal Society. Great ideas are born from great deliberation.

The nomadic Raute people of Nepal (Photo: Jan Møller Hansen)

Logical and intuitive inference

You might think that the logical coherence of your argument is enough to convince others, that your interlocutors, being logical like you, must necessarily be swayed by it. But humans did not evolve to be humble philosophers. And as most students of philosophy learn, the fact that an argument is logically valid does not mean that its conclusion is true.

Logic never stopped human beings from having extremely different opinions; if it played such a decisive role, there would not be such a divergence of political ideologies. “What aggravates us is the sense that these people, who do not share our beliefs, do not make a proper use of the reason we assume they have. How can they fail to understand what seems so obvious to us?” But this is a naive way of thinking — and one that showcases the bias we all have in favor of our own view, which we may call the myside bias.

“When people reason on moral, social, political, or philosophical questions, they rarely if ever come to universal agreement. They may each think that there is a true answer to these general questions, an answer that every competent reasoner should recognize. They may think that people who disagree with them are, if not in bad faith, then irrational. But how rational is it to think that only you and the people who agree with you are rational? Much more plausible is the conclusion that reasoning, however good, however rational, does not reliably secure convergence of ideas.”

When we take a closer look at logic, and at how we actually use it in our everyday lives, we see a different picture. The main role of logic in reasoning is rhetorical: logic helps simplify and schematize intuitive arguments.

The most common type of reasoning — both in everyday and scholarly contexts — is what is known as conditional inference, typically expressed with “if… then…” constructions. There are four types of conditional inference, which we have known since the days of Aristotle, illustrated by Mercier and Sperber in the following schema:
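In outline (a standard rendering of the four forms):

Modus ponens (valid): if P, then Q; P; therefore, Q.
Modus tollens (valid): if P, then Q; not Q; therefore, not P.
Affirming the consequent (fallacy): if P, then Q; Q; therefore, P.
Denying the antecedent (fallacy): if P, then Q; not P; therefore, not Q.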

As the authors explain, such reasoning involves a major premise of the form “if P, then Q.” The first part of such statements, introduced by “if,” is called the antecedent of the conditional, while the second part, introduced by “then,” is called the consequent. In order to draw a useful inference, we need a second, minor premise, which consists either in the affirmation of the antecedent (modus ponens) or in the denial of the consequent (modus tollens). These are the two logically valid types of conditional inference; the other two, affirming the consequent and denying the antecedent, are fallacies. (If it rains, the street gets wet: from rain we can validly conclude a wet street, and from a dry street we can validly conclude that it has not rained. But a wet street, by itself, does not prove rain.)

What empirical evidence shows, through countless experiments, is that on average only two-thirds of people draw the valid modus tollens inference, and half of people commit the two fallacies. In fact, we don’t even really use conditional inference arguments in their proper sequence. “Often, when you argue, you start by stating the conclusion you want your audience to accept […] and then you give reasons that support this conclusion.” We can also observe that “some reasons may seem good enough to explain but not good enough to justify. We can accept an explanation and, at the same time, be critical of the reasons it invokes.”

The world of formal classical logic does not accurately reflect the messy and chaotic world in which we live. For almost every argument that we make, there are usually exceptions: no argument is true in an absolute universal sense. What really matters to us in real world interactions is how our arguments relate to empirical facts in a specific context. We evolved to argue in the here and now, the tangible and the practical, not in an abstract world of perfect ideas.

When it comes to formal logic, the “emergence of the theory of probabilities since the seventeenth century […] have rendered the Aristotelian model of inference obsolete.” Even if we accept that a premise is true, we may also believe that there are exceptions: “birds fly” is true enough, and yet penguins do not. This means that the conclusion derived from such a premise inherits its precariousness: it might hold most of the time, but it is no definitive proof. And as the philosopher Paul Grice has shown, words can be used to indicate a meaning that is narrower or looser than their linguistically codified sense, which implies that an argument can only be interpreted and understood according to the context in which it is formulated.

Like other animals, “we cannot perceive the future but only infer it. Like them, we base much of our everyday behavior on expectations that we arrive at unreflectively.” Our brains make inferences all the time, but they mostly do so through intuitive inferences, not formal logic. These “inferences involved in comprehension are done as if effortlessly and without any awareness of the process.”

Most of the time, even during social interactions, our brain is on autopilot. It evolved to rationalize and give meaning to our actions after the fact. “People commonly present themselves as having considered reasons and been guided by them in the process of reaching a belief or a decision that, in fact, was arrived at intuitively. The error we all make is to falsely assume that we have direct knowledge of what happens in our mind.” We don’t.

What makes intuitive inference possible is the existence of a world of “dependable regularities. Some regularities, like the laws of physics, are quite general. Others, like the bell-food regularity [to which Pavlov’s dog is exposed,] are quite transient and local. It is these regularities that allow us, nonhuman and human animals, to make sense of our sensory stimulation and of past information we have stored. It allows us, most importantly, to form expectations on what may happen next and to act appropriately. No regularities, no inference. No inference, no action.” Our brain is the result of cognitive adaptations to the existence and relevance of regularities, which it has become capable of exploiting.

This becomes apparent in our capacity for mindreading. “We attribute mental representations to one another all the time. We are often aware of what people around us think, and even of what they think we think. Such thoughts about the thoughts of others come to us quite naturally. There is no evidence, on the other hand, that most animals […] attribute mental states to others.” Humans, by contrast, start attributing false beliefs to others at the age of four.

For Mercier and Sperber, “what makes agents rational isn’t a general disposition to think and act rationally, but a variety of inferential mechanisms with different inferential specializations. These mechanisms have all been shaped by a selective pressure for efficiency.”

Reahu burial ceremony in the Amazon (Photo: Sebastião Salgado)

Why reason works

A peculiar aspect of human society is that individuals are subject to a social pressure to commit to their beliefs. “By invoking reasons, we take personal responsibility for our opinions and actions.” We repeatedly have to justify why we adopted our attitude and our behavior. We “encourage others to expect [our] future behavior to be guided by similar reasons” and to hold us accountable if it is not.

This commitment is one of the things that allows trust to emerge in a community of interacting agents. “The ability to produce and evaluate reasons has not evolved in order to improve psychological insight but as a tool for defending or criticizing thoughts and actions, for expressing commitments, and for creating mutual expectations.”

Human communication happens in an environment where we interact not only with close kin, but also with competitors and strangers. This is why lying and deception are a standard part of our behavioral repertoire, and children start practicing this art at a relatively early age. In an environment where deception is a constant risk, two factors facilitate cooperation: social norms and reputation. People care deeply about their social status, and reputation plays an enormous role in moderating our behavior. Such factors contribute to cheating being, on average, more costly than profitable.

This creates a rather special kind of society, unique to humans. “On the one hand, even the most sincere people cannot be trusted to always be truthful. On the other hand, people who have no qualms lying still end up telling the truth rather than lying on most occasions, not out of honesty but because it is in their interest to do so.”

The best strategy to adopt in such an ambiguous environment is not obvious. If we all systematically stopped listening to liars, this would potentially deprive us of valuable information, as these people may be reliable about some topics. Conversely, people who have been reliable in the past may still seek to deceive us. “To benefit as much as possible from communication while minimizing the risk of being deceived requires filtering information in sophisticated ways.”

The filtering solution lies in epistemic vigilance — the ability to constantly readjust our trust. The authors point out that “there is no failsafe formula to calibrate trust exactly, but there are many relevant factors that may be exploited to try to do so.” These factors are of two kinds: one has to do with whom we should believe, the other with what we should believe.

The authors explain that “whatever their source, absurdities are unlikely to be literally believed and truisms unlikely to be disbelieved. Most information, however, falls between these two extremes; it is neither obviously true nor obviously false. What makes most claims more or less believable is how well they fit with what we already believe […] Vigilance toward the source and vigilance toward the content may point in different directions. If we trust a person who makes a claim that contradicts our beliefs, then some belief revision is unavoidable: if we accept the claim, we must revise the beliefs it contradicts. If we reject the claim, we must revise our belief that the source is trustworthy.”
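A toy calculation of my own (not the authors’) makes this tension concrete. Suppose we give a claim only a 10 percent prior chance of being true, but we consider the person asserting it reliable 90 percent of the time, whatever the topic. Once she asserts the claim, those two attitudes can no longer coexist intact: updating on her testimony leaves us at

0.9 × 0.1 / (0.9 × 0.1 + 0.1 × 0.9) = 0.5,

an even fifty-fifty. Our credence in the claim has risen fivefold, and at the same time we have discounted the word of a source we took to be highly reliable. Vigilance toward the source and vigilance toward the content have each given some ground.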

Epistemic vigilance creates a bottleneck with regard to the information that we are ready to accept. This is where argumentation and the use of reasons can help convince others, and thus open up the social flow of information, when people would otherwise not accept it on the sole basis of trust. Reason did not simply evolve to enhance the individual’s capacity for survival. It evolved to facilitate social interactions, in a way that confers survival benefits on the group.

The fact that individuals generally act without being aware of their own cognitive limitations can sometimes create problems. Take this example, which is typical of hierarchical organizations: “People in a position of power [usually] expect what they say to be accepted just because they say it. They are often wrong in this expectation: they may have the power to force compliance but not to force conviction […] As a communicator addressing a vigilant audience, your chances of being believed may be increased by making an honest display of the very coherence your audience will anyhow be checking.”

However, while rationalization and confirmation bias can create problems at an individual level, one could argue that, most of the time, these biases result in a beneficial collective outcome. “In an adversarial trial, the two battling parties are locked in a zero-sum game: one side’s win is the other side’s loss. While this highlights the utility of the myside bias, it might also unnecessarily tie it to competitive contexts. In fact, even when people have a common stake in finding a good solution and are therefore engaged in a positive-sum game, having a myside bias may still be the best way to proceed.”

In order to illustrate this, the authors draw up the following thought experiment:

Imagine two engineers who have to come up with the best design for a bridge. Whichever design is chosen, they will supervise the construction together — all they want is to build a good bridge. Ella favors a suspension bridge, Dick a cantilever bridge. One way to proceed would be for each of them to exhaustively look at the pros and cons of both options, weigh them, and rate them. They would then have to average their ratings — no discussion needed, but a lot of research.

Alternatively, they can each build a case for their favored option. Ella would look for the pros of the suspension bridge and the cons of the cantilever; Dick would do the opposite. They would then debate which option is best, listening to and evaluating each other’s arguments. To the extent that it is easier to evaluate arguments presented to you than to find them yourself, this option means less work for the same result.
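To put rough numbers on it (mine, purely for illustration): if working up the full case for one option costs an engineer one unit of effort, and evaluating a case presented to you costs, say, a quarter of a unit, then the first procedure costs each engineer two units, while the second costs only one and a quarter. So long as evaluating arguments is cheaper than producing them, the division of labor wins.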

Again, it is not individual rationality that matters, but rather how we apply our cognitive skills as a group. In this sense, the myside bias can actually result in a collectively beneficial outcome.

There is also a pragmatic dimension to reason as it evolved in humans. “Reason evolved to be used not in courts but in informal contexts. Not only are the stakes of an everyday discussion much lower, but its course is also very different. A discussion is interactive: instead of two long, elaborate pleas, people exchange many short arguments. This back-and-forth makes it possible to reach good arguments without having to work so hard.”

The traits of argument production typically seen as flaws become elegant ways to divide cognitive labor.

When we observe discussions in an office space, for example, we see that “the solution people adopt — starting out with what seems to be the best option and, if necessary, refining it with the help of feedback — is the most economical. For communicators, being lazy — using the shortest form likely to be understood — is being smart.”

There is an asymmetry between how people produce reasons and how they evaluate the reasons of others: we are relatively lax about the quality of our own, and much more demanding of everyone else’s.

This gives us an entirely new perspective on how reason operates in practice. “The most difficult task, finding good reasons, is made easier by the myside bias and by sensible laziness. The myside bias makes reasoners focus on just one side of the issue, rather than having to figure out on their own how to adopt everyone’s perspective. Laziness lets reason stop looking for better reasons when it has found an acceptable one. The interlocutor, if not convinced, will look for a counterargument, helping the speaker produce more pointed reasons. By using bias and laziness to its advantage, the exchange of reasons offers an elegant, cost-effective way to solve a disagreement.”

The implications for everyday life are significant. Imagine that you are debating someone. Many people believe that one just has to be rational to win an argument. But most people are very unlikely to change their mind when they are arguing with you alone — especially if you do not have any perceived authority.

Even if you had knowledge of some absolute truth, it would likely not matter. People are biased in favor of their own opinions, which they do not give up easily. However, you can still have an impact on what the group decides. When debating, do not address your opponent: rather, speak to the audience. What matters, ultimately, is how your argument compares to that of your opponent, in the eyes of others. The group is the ultimate judge. This is why it is nearly always better to argue in public: it imposes additional pressure on both you and your opponent, and the resulting outcome will likely be more rigorous (although there are certain situations where a private one-to-one discussion is better).

There is also another important conclusion to be drawn from this: no one is born with a natural inclination to be humble. Insofar as humility is a virtue, it must be learned and cultivated. Since humility does not seem to be part of our natural inclinations, we can safely assume that perfect humility can never be attained. At the same time, it is hard to see how wisdom can be achieved without having a certain degree of humility. Wisdom requires that we constantly strive to overcome our own nature. Perhaps this can only be achieved by interacting with others.
