The idea that man is a fundamentally rational being goes back a long way. Descartes believed that our capacity for thinking and reasoning would allow us to discover absolute truths. Kant’s motto for the Enlightenment era, sapere aude (‘dare to be wise’), assumes that men have an inherent capacity for reason, and that they need only have the courage to unleash it in order to become mature and enlightened men.
Over the last fifty years, an ever-increasing volume of scientific literature has started revealing a number of intrinsic cognitive biases in the human mind, thereby drawing a much more nuanced picture of human faculties.
This very weakness is one of the central themes of Joseph Heath’s book, Enlightenment 2.0, which analyzes the increasingly extreme and manipulative nature of North American politics. While the book encompasses many subjects, I will here summarize and expand on one particular aspect of Heath’s work, namely how to deal with our limited rationality.
The first part of this essay deals with examples of specific cognitive limitations which bound our rationality. The second deals with the “fixes” we use to deal with them. The remaining parts reflect on the implications of Heath’s analysis for organizational design.
The Old Mind and the New
Having to handle a lot of different tasks easily creates an optimization problem. How much time should be allocated to each task? In what order should the tasks be performed? Unfortunately, the way our brains try to solve this problem is far from ideal.
Multitasking – the ability to run several processes simultaneously – is actually an illusion. We can only pay attention to one thing at a time. People who consider themselves good at multitasking think that they are performing multiple tasks simultaneously, but in reality they are not. They are simply shifting their attention from one thing to another.
As Heath notes, there is some evidence that self-described multitaskers are simply bad at concentrating: they multitask because they are easily distracted, and as a result they tend to perform worse on all tasks than those with a more “plodding” style.
Psychologists generally agree that attention is allocated through a system of competition between different stimuli. Psychologist Timothy Wilson estimates that at any given time, our brains are receiving 11 million discrete bits of information per second, of which no more than 40 can be consciously processed. Thus our brain has no choice but to filter and focus on an extremely limited set of data.
Similarly, a computer does only one thing at a time; it just alternates very quickly between tasks, using a scheduler to allocate CPU time among them. Unlike the computer’s CPU, however, our mind has two important handicaps: first, it has only limited powers of pre-emption, and second, it is not good at reassigning priorities to tasks.
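The scheduler analogy can be made concrete with a minimal sketch. Everything here is illustrative assumption rather than anything from Heath: a single “attention” slot, numeric priorities, and tasks that run one time slice before control returns to the scheduler.

```python
import heapq

def run(tasks, slices):
    """Minimal priority scheduler: one 'attention' slot, highest-priority
    ready task runs for one time slice, then control returns to the
    scheduler. tasks is a list of (priority, name, remaining_slices);
    a lower priority number means more urgent. Returns the order in
    which task names received attention."""
    heap = list(tasks)
    heapq.heapify(heap)  # tuples sort by priority first
    order = []
    for _ in range(slices):
        if not heap:
            break
        priority, name, remaining = heapq.heappop(heap)
        order.append(name)  # this task holds attention for one slice
        if remaining > 1:
            heapq.heappush(heap, (priority, name, remaining - 1))
    return order
```

Note that pre-emption in this sketch happens only at slice boundaries, which loosely mirrors the mind’s limited powers of pre-emption described above: a running “task” cannot be interrupted mid-slice, and a low-priority itch only gets through once higher-priority work pauses.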
Suppose that you are quietly sitting and reading a book. Suddenly you feel an itch and find yourself scratching and shifting around. Then you start feeling drowsy. These are all manifestations of low-priority tasks breaking through consciousness and capturing attention.
Surprisingly, no task has the capacity to automatically override others. Intense pain, for example, competes on an equal level with other types of stimuli. Soldiers who suffer serious injuries in combat will not necessarily notice the pain until things have calmed down. It is quite amazing what people will overlook if their attention is drawn to something else.
Yet once we do notice pain, it is quite difficult to ignore. This is because we do not have the capacity to reassign priority to cognitive tasks which have captured our attention. We are prone to a weakness that psychiatrists call unwanted ideation.
We are often unable to stop thinking about a particular idea, whether it is an irritating comment made by a colleague, an anxiety related to someone we care about, or a sexual image or fantasy. Straightforward inhibition of such thought processes is quite difficult.
Despite these limitations, our capacity for conscious logical thought is real. We just first have to make sure we can properly exploit this capacity to its fullest extent. I discuss how this is achieved in the following section.
The sixteenth-century philosopher Michel de Montaigne is known for having acted on the knowledge that his mind was inherently flawed. He constantly remarked that his memory was defective. He kept informing his servants of his ideas as they came to him, so that they could remind him of them later.
Having witnessed a variety of extreme behaviours during a time of civil war, he also believed that a sensitivity to human reality and psychology often offered a better approach to most social situations, more so than relying on absolute principles.
As Montaigne probably would have agreed, it is difficult for men to directly control all their thoughts. Because of this, we need kludges.
This is a term commonly used by engineers, technicians, and computer programmers to describe a solution to a problem that gets something to work without really fixing the problem. Programmers often resort to kludges when trying to debug software. Joseph Heath provides a perfect example:
Suppose, for instance, you write some [code] that takes a number as an input, performs a complicated calculation, and then spits out another number as output. Everything is working fine, except that for some reason, whenever it receives the number 37 as input, the subroutine does something strange and produces the wrong answer. After spending hours staring at the code, trying to figure out why it’s not working properly, you give up trying to fix it. Instead, you just figure out manually what the answer should be when the input is 37. Suppose it’s 234. You then add a line of code at the beginning that says something like the following: take the input number and feed it to the subroutine, unless that input happens to be 37, in which case just send back 234 as the answer.
This is a kludge. It works, but it is a massively inelegant solution, in the sense that the underlying problem is still there. If someone were to overlook your fix when working on your code, it could create a whole new set of problems.
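Heath’s example translates almost directly into code. In this sketch the function names and the placeholder calculation are mine; only the numbers 37 and 234 come from the passage:

```python
def complicated_subroutine(n):
    """Stand-in for Heath's buggy subroutine: assume it returns the
    right answer for every input except 37, where some unexplained
    bug produces a wrong result."""
    if n == 37:
        return -1  # the mysterious wrong answer
    return n * 2 + 160  # placeholder for the 'complicated calculation'

def calculate(n):
    # The kludge: intercept the one known-bad input and hard-code the
    # answer that was worked out by hand. The underlying bug in
    # complicated_subroutine is never fixed, just routed around.
    if n == 37:
        return 234
    return complicated_subroutine(n)
```

The special case at the top of `calculate` is exactly the fragility Heath is pointing at: the program now behaves correctly, but anyone who later modifies the code without noticing the hard-coded branch is working with a false picture of how it computes its answers.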
Our intelligence is made possible thanks to an enormous collection of kludges. We effectively rely on all sorts of “fixes” and “scaffolding” in order to bypass the inherent imperfections of our minds and increase our cognitive powers.
For instance, most of us are not very good at mental arithmetic because our (untrained) memory has a lot of difficulty simply retaining the different partial products of a multiplication – we forget them before we even have the chance to properly add them up.
So we use a pen and some paper, allowing us to record information somewhere outside our brain, while we focus our attention on the priorities. The pen and paper are man-made structures, or “scaffolding,” that help enhance our cognitive powers. Our modern world is literally filled with millions of these – pencils, letters and numbers, post-it notes, sketches, calculators, computers, internet search engines, and most importantly: other people.
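The pen-and-paper trick can be mimicked in code as a sketch of the scaffolding idea (the function and its structure are my illustration, not anything from the text): instead of holding the partial products in working memory, each one is written down – here, appended to a list – and only summed at the end.

```python
def long_multiply(a, b):
    """Schoolbook long multiplication: compute one partial product per
    digit of b, record it externally (the list plays the role of the
    paper), then add everything up in a separate step."""
    partials = []
    for place, digit in enumerate(reversed(str(b))):
        # Each partial product is 'written down' instead of being
        # held in memory while the next one is computed.
        partials.append(a * int(digit) * 10 ** place)
    return partials, sum(partials)
```

The division of labor is the point: the brain only ever handles one small step at a time, while the external record – paper, list, spreadsheet – holds everything else.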
Take another example: unlike body temperature, which we can maintain on our own, concentration is not something we can maintain under all circumstances. Our brain simply does not have the tools for this, and yet concentration is absolutely essential to the task of reasoning.
When we have difficulty concentrating, we may resort to different kinds of kludges. One is to make the task itself more exciting, but the standard solution is to reduce noise, that is, to manipulate the environment so as to make everything else less exciting. The first thing you could do is isolate yourself. You may also benefit from a familiar environment: nothing new or interesting to capture your attention.
As psychologist Gary Marcus notes, biological evolution tends to favor genes associated with short-term gains. This is why our brain is not a fine-tuned organ, but rather “a clumsy, cobbled-together contraption”. Rationality is not what our brains were intended for; in fact, much evidence suggests that human thinking is merely a by-product of our evolutionary process: our intelligence emerged only as a consequence of cognitive adaptations to increased social interaction between members of our species.
Hence we are not the fully rational and autonomous individuals that modern society would like us to think that we are. We are only rational insofar as we are (or have been) exposed to artifacts and institutions, or kludges, that allow us to amplify our aptitude for logical thought and critical thinking.
As Heath explains, “the genius of the human mind lies not in its onboard computational power, but rather in its ability to colonize elements of its environment, transforming them into working parts of its cognitive system.”
The case for tinkering
Institutions are themselves the product of fixes that have accumulated over time to create the organizations that we know today. The British political system, for instance, may be seen as the sum of a series of both small and large institutional amendments that go as far back as the Magna Carta. This is in a way what Edmund Burke argued following the French Revolution.
Burke is often described as one of the founders of modern conservatism. However, unlike petty conservatives who favor tradition because of a personal fondness for old values, Burke made a particularly interesting point, which is that the radical “destruction and rebuilding” of entire systems from scratch is risky because the new system will not benefit from the accumulated wisdom of the old one.
This is a very important insight. The fact that we do not know why some institution exists does not mean that it has no reason for being there. Yet this dangerous inference is frequently made, both in government and business.
Imagine the following: a group of consultants is hired by a newspaper to revamp its content. They step into the newspaper’s offices and realize that editors and journalists are all working in isolated offices and cubicles. They determine that the newspaper’s employees would benefit from working in an open-space environment as this would give a boost to collaboration and creative content. The changes are implemented.
The newspaper’s managers then notice something odd. Although some staff seem to be doing better than before, most employees are negatively affected by the changes. The reason lies in something that the consultants overlooked. While open offices may work quite well in marketing and advertising firms, journalists and columnists need an environment in which they can concentrate on their individual articles.
What we can conclude from this is that we should not underestimate the wisdom embedded in existing institutions. In this sense, and in this sense only, conservatives got it right. The bigger and more radical the change, the greater the risk of overlooking certain aspects of the big picture.
Such risks may sometimes be justified, particularly in periods of crisis, when organizations may need to be overhauled in order to survive and prosper.
The rationalistic mistake, however, is to believe that planned, top-down, radical systemic change can solve any problem. In many cases it simply will not, either because the system is too complex to understand, or because the people at the top act on the basis of limited rationality, or both. This is why gradual tinkering or experimentation may sometimes be a much better way of dealing with organizational problems.
The benefits of slowing down
Many people, including New York Times columnist Thomas Friedman, see globalization as a phenomenon characterized by increased competition between countries. People who hold such beliefs will tend to say that developing countries are literally stealing jobs from developed countries. In order to avoid economic failure, governments must figure out strategies to cope with this global competition.
While this view would seem to make sense at first sight, it is mistaken because it assumes that the relation between trading nations is similar to the relation between competing firms.
The fact is that trade between nations with radically different wage levels generates no downward pressure on wages in the richer countries. The demonstration was first made by David Ricardo in 1817. His theory of comparative advantage showed that international trade is nearly always mutually beneficial, even if one trading country can produce all goods at lower labor cost.
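Ricardo’s result can be checked with a toy computation using his classic two-country, two-good illustration (the labor-hour numbers below are the standard textbook figures, not numbers from Heath’s book). Portugal is absolutely more productive at both goods, yet world output of both goods rises when each country specializes according to comparative advantage.

```python
# Labor hours needed to produce one unit of each good. Portugal
# needs fewer hours for both goods: an absolute advantage in
# everything.
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

# Autarky: each country produces 1 unit of each good on its own,
# spending 220 hours (England) and 170 hours (Portugal).
autarky = {"cloth": 2.0, "wine": 2.0}

# Trade: with the same total labor, England specializes in cloth
# (where its relative disadvantage is smallest) and Portugal in wine.
specialized = {
    "cloth": 220 / hours["England"]["cloth"],   # 2.2 units
    "wine":  170 / hours["Portugal"]["wine"],   # 2.125 units
}

# Both goods become more plentiful, so trade can leave both
# countries better off despite the productivity gap.
gains = {good: specialized[good] - autarky[good] for good in autarky}
```

The surplus (0.2 units of cloth, 0.125 units of wine) is what the two countries can split through trade, which is why neither side is “losing” even though their wage levels differ radically.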
The fact that an automobile-parts factory in Detroit has been relocated to China does not imply that trade with China is detrimental to the United States. It simply means that China has a comparative advantage in automobile-parts manufacturing. The fact that automobile workers have difficulties finding new jobs is a serious concern, but not one that is representative of all trade with China.
Hence policy makers should not necessarily be concerned about trade relations in general, but rather about the specific social consequences of declining industries. People may prefer to preserve traditional lifestyles rather than increase overall economic welfare; they may genuinely want that third-generation shoemaker to remain in business. This is an entirely legitimate preference, and constitutes a strong argument in favor of selective protectionism.
The point remains: saying that a country is losing in international trade is plain wrong. But this can seem counter-intuitive. In principle, people could go read a book or watch a video and try to figure it out, but very few ever do. And as Paul Krugman observes, it is much easier to teach it to “docile students” than to teach it to an adult who “already has an opinion about the subject.”
The problem, as Heath notes, is that adults in an everyday social setting are practically unteachable. Ricardo’s argument requires abstract thinking and decontextualization. In most social situations, this would require a heroic amount of mental inhibition and self-control.
Imagine, says Heath, that you have to explain Ricardo at a dinner party. It is impossible to do so without breaking several social conventions. First, “it would take several minutes of talking, for which you would be considered a bore.” Second, there will “always be someone who interrupts, typically to raise a premature objection, make a joke, or raise an issue that is tangential to the central line of argument.”
Students are docile because they are constrained by specific classroom rules that favor the learning process. They are not allowed to interrupt. They have to raise their hand to ask a question, and even then, the teacher may simply say, “not now, wait until I have finished explaining this point.” This favors the learning process because developing a sustained argument requires a crucial ingredient: time.
Heath argues that much of what is described as “authoritarian old-fashioned educational practices” – the teacher’s control of the classroom, the organized desks in rows, the reading assignments, problems sets, deadlines, exams and the assignment of grades – can all be seen as “external scaffolding designed to enhance our own deficiencies of self-control when it comes to concentration, planning and goal attainment.” These institutions essentially force students to slow down, concentrate, and learn.
Managers are not so different insofar as they are prone to the same human flaws. This is why the learning organization can equally benefit from slowing down in order to treat initial individual responses with caution and think things through more carefully.
The critical issue is knowing when, where, and how to slow down. Strategic planning and brainstorming sessions, for example, may benefit quantitatively from a greater allocation of time, but perhaps only up to a certain point – because of diminishing returns and opportunity costs.
They may also benefit qualitatively from rules designed to enhance the emergence and exchange of ideas. This can be achieved by iterating between individual ‘thinking time’ and ‘group discussion,’ by setting clear rules on how to deliberate in meetings and assemblies, by designating moderators to give focus to discussions, and by integrating physical tools to facilitate and stimulate thinking.
Fast and efficient decision-making
According to economist Richard Thaler and legal scholar Cass Sunstein, the picture that emerges from all the known cases of limited human rationality “is one of busy people trying to cope in a complex world in which they cannot afford to think deeply about every choice they have to make.” And so people easily make irrational choices.
This affliction is not without its fixes. We remain able to change our environment by creating institutions that effectively trick our brains into making reasonable decisions – that is, decisions which do not go against our own long-term preferences, and to which most of us could agree.
Corporations have a notable capacity to create work environments that encourage employees to perform their tasks in the most efficient way possible. They do this by allowing them to internalize efficient routines and heuristics, and by implementing what could be called an appropriate choice architecture, a term that I borrow from Thaler and Sunstein.
Heuristics are much like guidelines or rules of thumb. For example, I tend to stick to a personal rule according to which 80% of my time should be spent on short-term priorities, while 20% should consist of improving my tools and my work processes for long-term benefit.
This is what a heuristic is: an internalized rule for reacting to common situations, in this case the issue of how to allocate my work time. There are many such 80/20 rules out there. Heuristics do not work in every case, but they work most of the time. My time-allocation heuristic reflects the reality of the startup I work for, and it works: I do not have to continually reconsider or strategize about the time I spend.
Meanwhile, the purpose of choice architecture is to facilitate rational decision-making by agents that are limited in their ability to do so. People do not have the capacity to carefully think through every decision they make, so structures must be developed to frame and support fast decision-making. Here are a few examples of how this could be achieved in an organizational setting:
Reduce information overload. Information must continually be filtered and synthesized as it is shared and communicated throughout the organization. Efficient managers and employees have neither the capacity nor the time to process everything that goes on everywhere. They will use whatever information they have to make the best out of any given situation. In order to avoid faulty decisions made on the run, people need limited but reliable information. This usually goes for all levels of management.
Facilitate decision-making over time. Individual biases in favor of short-term gains are also present inside organizations. For instance, how does a manager make sure he does not overspend on some immediate promotional opportunity? Such errors can be mitigated by systematically drawing attention to the future outcomes of decisions and by emphasizing second-best options.
Create reminders and checklists. Confirmation bias, anchoring, and loss aversion are particularly hard to overcome, as we are continually affected by these biases – even when we consciously try to avoid them. While it is impossible to entirely remove their effect, it might be a good thing to keep a few checklists in reach for important decisions.
Make sure new ideas are exposed to critical appraisal. In one of his most recent books on groupthink, Sunstein observes that while positive, enthusiastic, happy people will tend to attract the most attention and transmit their contagious optimism to others, groups may benefit from maintaining the presence of some more anxious and critical people, as these are the most likely to cut through groupthink psychology (precisely because they say what they fear).
Truth be told, there are a great many cognitive biases to which our everyday decision-making can fall victim. It would be a herculean task to counteract all of them, but it certainly is possible to design an organization that minimizes the impact of the most serious ones.