Here’s US Secretary of Defense Donald Rumsfeld talking about the famous “unknown unknowns”, in the context of the Iraq war. But there’s one category he misses, and it is the most important of all.
“There are known knowns, the things we know we know, and known unknowns, the things we know we don’t know. And then there are the unknown unknowns, the things we don’t know we don’t know…”
But there are also the unknown knowns, the things we don’t know we know.
By now your brain may be well on its way to being fried, but an example will make it crystal clear. The reason the Iraq war turned into such a tragedy is not because of what Rumsfeld and Bush didn’t know they didn’t know, but what they didn’t know they knew. Or, to put it simply, the assumptions they “knew” but never stated, so that they never examined them. The big unknown known of the Iraq war was “once the Iraqi people have been freed from the horrible Saddam Hussein, they will joyfully embrace a Western-style liberal democracy.”
Now when you actually say this you see the improbability of it. And even a cursory familiarity with history would tell you that this isn’t what usually happens when people are freed by force from tyranny. But the assumption was never stated, so it was allowed to pass without challenge. In the words of Pierre Bourdieu, it was one of those things which “go without saying, because they come without saying.”
Or, in the much neater formulation of Mark Twain, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
It can be positively dangerous to be too much of a subject matter expert. The following comes from a body of work originally developed by the CIA as guidance for analysts.
Consider the following simple experiment. Read the text in the three triangles below.
Did you notice that in each case the article is repeated? If so, you have exceptional powers of observation, were lucky, have seen a similar exercise before, or are a professional proof-reader. Mostly, we see what we expect to see. In fact, the more fluent you are in English, the less likely you are to see the repetitions – and the more likely you are to unconsciously “correct” the “faulty” text.
As a more relevant example, consider the case of the two Germanies at the end of the 1980s. The Soviet Union was collapsing and had announced that it would not interfere in the affairs of its satellites, and unrest was growing in East Germany. The Berlin Wall was about to fall. German reunification was becoming more and more probable. Yet it was often the German specialists in the foreign ministries and intelligence services who had the most difficulty seeing this. They needed considerable prodding from their more generalist supervisors to accept the significance of current events.
This by the way is an alternative angle on the problem of disruptive innovation described by Christensen in his book The Innovator’s Dilemma. Many established businesses suffer because they miss or resist new products. Christensen ascribes this to either financial or organisational factors – the new products are less profitable than the old, or the most powerful people in the organisation are those in charge of the established businesses; they have no desire to cede power or influence to those bringing forward the new. The German example suggests another reason – they simply cannot see what is happening.
In a world which is changing fundamentally, deep domain knowledge on its own can be dangerous. It predisposes you to see what you have always seen, and blinds you to the new. This is not to deprecate it altogether, but to say that it is probably necessary but certainly not sufficient. It needs to be complemented by someone who has skills in complex and ambiguous situations in general, not knowledge of your situation in particular.
Try this experiment. For six months, ban the use of the word “culture” in your organisation. (The only exception is if you are a pharmaceutical company, in which case you may still have cultures, so long as they live in Petri dishes.)
Instead of “culture” say “the collection of habits and beliefs that determine how we do things around here.” This is not a controversial redefinition, but it has remarkable results. How do you feel when someone says “we need to change the culture”? It seems like a huge task, like chipping away at an enormous granite monolith.
Where would you even start? But if you say “we need to change habits and beliefs,” that invites the question “all of them, or just some?” The answer, of course, is just some. Progress already! We have broken the task down into more manageable parts. And we know how to change habits and beliefs.
The other reason for avoiding “culture” is that the word covers a multitude of different things, some of them deserving of respect, others not. As a diverse society we put a great deal of importance on understanding different national, ethnic and religious traditions and, quite rightly, respecting them. Organisational “cultures”, on the other hand, are more instrumental. They help the organisation achieve its goals or they do not. They may be dysfunctional, or simply outdated.
An organisation which historically worked in a stable and predictable environment may have developed a habit of risk aversion which serves it well so long as the environment remains stable and predictable. If it doesn’t, then that risk aversion can become dangerous. General Motors pre-bankruptcy was notorious for its belief that it knew everything and had nothing to learn from anyone. Asking someone to take more sensible risks or listen more to customers is not the same thing as offering the Rabbi a bacon sandwich. At its most degenerate, “that’s not in our culture” is simply a way of giving spurious authority to “we don’t want to do that.”
Words have power. They determine what we think, what we notice, and what we believe right or possible and thus what we do. It may seem strange that a simple change of vocabulary could make such a difference, but give it a try.
Without lifting your pen from the paper, draw 4 straight lines which go through each of the nine dots exactly once.
When you first came across the problem you were probably foxed, at least for a while. Then you either noticed, or were told, the unspoken assumption that was holding you back. You assumed that the lines had to stay inside the nine dots. Once you realised that this was nowhere stated in the problem, the answer became simple:
Literally, the answer comes from going “outside the box.”
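If you like, you can even check the geometry mechanically. The short Python sketch below is my own addition, not part of the original puzzle: the coordinates and the particular pen path are one of several valid “outside the box” solutions, with two turning points lying beyond the 3×3 grid.

```python
from itertools import product

def on_segment(p, a, b):
    # True if point p lies on the segment a-b:
    # collinear (zero cross product) and inside the bounding box.
    (px, py), (ax, ay), (bx, by) = p, a, b
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    if cross != 0:
        return False
    return min(ax, bx) <= px <= max(ax, bx) and min(ay, by) <= py <= max(ay, by)

# The nine dots on a 3x3 grid.
dots = list(product(range(3), repeat=2))

# Pen stops for one classic solution: two stops, (0, 3) and (3, 0),
# lie outside the grid -- the "outside the box" step.
stops = [(2, 2), (0, 0), (0, 3), (3, 0), (1, 0)]
segments = list(zip(stops, stops[1:]))  # four straight lines

assert len(segments) == 4
assert all(any(on_segment(d, a, b) for a, b in segments) for d in dots)
print("all nine dots covered by 4 lines")
```

Run it and the assertions pass: four straight lines, drawn without lifting the pen, cover all nine dots – but only because two of them leave the square.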
So far, so familiar. But… there is a simpler solution. You can in fact join all the dots with only 3 lines. Give it some thought. The solution is here.