Decision-making is, or seems to be, a very natural process: we weigh different options, evaluate their likely pros and cons, assess the potential risks and then judge the best course of action. This seems easy and familiar to anyone whose role calls for quality decision-making. However, it becomes far trickier under constraints characterised by uncertainty, complexity and, quite simply, human fatigue.
Decision-making, the cognitive processes behind it and their limitations have been studied in a number of disciplines, including psychology, economics, cognitive ergonomics and systems engineering. The quest for supercomputers and the advent of AI and machine learning promise future pathways to better decision-making under extreme conditions, with minimal error and the lowest levels of risk.
This has been a growing field of research in aviation, medicine and management science. Yet many of us, in everyday life and even in business, still trust our gut feeling when making decisions. This is not necessarily wrong, and we are often correct, but mistakes and errors are also the fruit of such ‘naturalistic’ processes, and, what’s worse, we overlook them!
Managers make decisions every day, but there are a few blind spots that require attention. Such subjects have been raised from time to time in a range of publications, from popular articles in the Harvard Business Review to peer-reviewed journals. These blind spots trigger our decisional distortions and very often lead us in the wrong direction. Of course, a short article like this will never do justice to such a vast and still-progressing field, but I will try to highlight the main issues.
We learn by a process of association and generalisation. Learning is a natural process in every human being, and it has a specific evolutionary function: that of helping us adapt to the world around us. Aided by other cognitive processes like memory, perception and attention, we learn to associate items and elements in the world as we feed them into our long-term memory.
For instance, a table is associated with a chair, a fork with a knife, and a clown with laughter. These associations (which, by the way, form the basis of machine learning) can run into the millions, plus, of course, their permutations. For example, we know what a door is, and every object that has the characteristics we associate with a ‘door’ is a door to us.
This generalisation process frees us from having to learn anew every time we meet the specific characteristics that, put together, reflect the object ‘door’, and it eventually distils our relations to these objects into well-defined ‘decision rules’. For example, “I walk through a door” and “I eat with a fork” are simple rules we all have stored in our memory system. In some ways this process makes life easier and more enjoyable, because we do not need to tax our cognitive processes every time we encounter new but familiar objects, experiences and situations; on the other hand, we assume ‘simplicity’ and judge every instance using the same set of rules because of their seeming familiarity.
These rules help us get around easily and, very often, are quite correct in helping us make the best choice. We call these heuristics. A heuristic is a mental shortcut that allows people to solve problems and make judgements quickly and efficiently. Heuristics have been studied for many years and have attracted academic and practical interest, including that of three Nobel Prize winners (Herbert Simon in 1978, and Daniel Kahneman and Vernon Smith in 2002; Kahneman also dedicated his prize to the late Amos Tversky, with whom he developed Prospect Theory in behavioural economics).
Consider a simple problem: ask someone which of the following strings of letters is more probably random, (BBBBBBBBBB) or (BBABAABAAB). Most people will judge the second string as more random than the first. The truth is that both strings are equally likely to be randomly generated, because every letter is independent of the previous one. However, the human brain treats ‘perceived difference’ (for instance, A, then B, then A again) as more random than the same letter repeated throughout.
Put differently, we assume that a random string should look varied, and that a uniform string can hardly be random. We may have learnt this through association, and we therefore infer randomness from variety. This is a heuristic.
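The arithmetic behind this can be checked with a short sketch (written for this article, assuming each letter is an independent, fair draw from the two-letter alphabet {A, B}): every specific ten-letter string then has exactly the same probability, uniform or not.

```python
from fractions import Fraction

def string_probability(s: str, alphabet_size: int = 2) -> Fraction:
    """Probability of one specific string when each letter is drawn
    independently and uniformly from an alphabet of the given size."""
    return Fraction(1, alphabet_size) ** len(s)

# The 'obviously non-random' string and the 'random-looking' one
# from the example are exactly equally probable.
p_uniform = string_probability("BBBBBBBBBB")
p_mixed = string_probability("BBABAABAAB")

print(p_uniform)              # 1/1024
print(p_mixed)                # 1/1024
print(p_uniform == p_mixed)   # True
```

Each string is one outcome out of 2^10 = 1,024 equally likely sequences; the felt difference between them is supplied entirely by the heuristic, not by the probabilities.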
While in this specific example the consequence of our assumption is no big deal, such assumptions have often been the cause of human error and, in some cases, carry fatal consequences. This is not to say that heuristics are bad or good; they are intrinsically natural, and some studies indicate they are surprisingly accurate when we are dealing with unknown territories or unfamiliar events. Yet it is also a scientific fact that heuristics can sometimes ‘distort’ our judgements and lead us astray.
This is why, in high-reliability work contexts where risk is heightened by the complexity and uncertainty of the operating environment (e.g., an operating theatre or an aircraft cockpit), established systems and explicit decision rules have to be adhered to strictly, so that assumptions are not made and errors do not manifest themselves.
A number of heuristics have been identified; here we restrict ourselves to just four. The ‘availability’ heuristic means that people judge the likelihood or cause of an event by how readily instances of it come to mind. Say, if Peter tells me “good morning” every single day, then Peter must be a nice guy.
The ‘representativeness’ heuristic refers to the fact that people look for traits in an individual that correspond to previously formed stereotypes of a specific category. For example, we may assume tall people to be smarter than shorter people.
The ‘confirmation’ heuristic means that we are selective about which data we use when testing personal hypotheses, and we therefore conclude that the association between the single cause and effect under consideration is stronger than it really is.
And the last heuristic, the ‘affect’ heuristic, corresponds to a judgement we make following an emotional evaluation. You can certainly think of many episodes when you have used one or more of these heuristics, such as during interviews, when appraising people or judging performance, and in many other instances.
Heuristics are best conceived of as mental algorithms ‘engineered’ to help us make an evaluation and hence a judgement. Because we are limited information processors by nature (we are aware of only part of what we know, less sure about what we know we don’t know, and completely blind to what is unknown to us), there is a great risk that our judgement is distorted. This distortion, coupled with other human and environmental constraints, leads to biases.
Biases limit the quality of our decisions and very often throw us off track, taking us down the wrong path. Research has investigated a number of (cognitive) biases, which a recent Harvard Business Review article calls traps. We mention four focal ones out of literally dozens.
The first is called ‘anchoring’: the first thing we see or hear feels the most important. While first impressions count, they are not always correct. We often fail to see the bigger picture, or, because of shortcuts and assumptions, become lazy at exploring it, only to realise later that we made a mistake!
The second is called ‘sunk cost’: the tendency to make decisions that justify the past. The past is determined by a number of instances configured in a specific pattern, each with its own conditional probability. While the future may feature similar instances, there is no guarantee that they will fall into the same configuration and shape. This tendency, closely related to the ‘escalation of commitment’, should motivate us to examine every situation, even a familiar-looking one, with a fresh outlook and a critical eye.
Another bias is the ‘confirming-evidence’ bias, whereby we seek information that supports our existing point of view. There are many reasons why we do this; one is that new evidence requires laborious processing, and it may bruise our ego when we have to face the fact that the truth differs from what we have always believed.
The last bias is ‘framing’: the way a problem is stated influences what we decide. Saying that a drink is ‘80 per cent sugar free’ rather than ‘20 per cent sugar’ is really and truly saying the same thing, but the first statement is more likely to trigger a sense of health appeal than the second.
Other biases include the ‘illusion of control’, ‘illusory correlations’, ‘hindsight bias’, ‘conservatism’ and the ‘social desirability effect’, amongst others. The important takeaway is that biases are a natural offshoot of our natural decision-making process, and we are likely to be wrong from time to time unless we become slightly more critical and thoughtful about the way we use evidence in decision-making.
One international scholar and friend of mine in business strategy once told me that “what you see is not always what you see”. This holds true for the phenomenon of cognitive biases in business decision-making.
This is why training and skilful development in the management of data and evidence will become critical to the successful implementation of decisions in complex and uncertain environments. We need to learn how to capture unseen patterns and fuse them into our decision-making. This is where human limits and technological capabilities meet; while this is a promising field, it requires attention in the way it is implemented and managed. But this is certainly the future of high-quality decision-making in complex business systems.