Do we have a tendency to look at too many small risks and not just one big one?
Two things I often observe going wrong with risks.
There is a natural tendency to do the easy, least risky things first. What we should do instead is address the most risky things first, because they often hide the most new information.
There is also a natural tendency to bundle risks: thinking that if we need to do all of this anyway, we might as well do it all at once. We overlook that when we add complexity, we multiply risk.
In effect we tend to bundle a lot of small risks into an inflated ‘let’s do this in the first release’ concept. Not an unlikely outcome in a risk-averse, stage-gate culture.
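The multiplication effect can be made concrete with a small back-of-the-envelope calculation. The 0.9 per-item success probability below is an assumed example value, not a figure from the text; the point is only that a release requiring every bundled item to work succeeds with the product of their individual chances.

```python
# Illustrative sketch: why bundling multiplies risk.
# Assumption: each bundled item succeeds independently with probability 0.9
# (an example value chosen for illustration).

def bundled_success(per_item_success: float, n_items: int) -> float:
    """Probability that a release succeeds when ALL n independent
    items must work -- the individual chances multiply."""
    return per_item_success ** n_items

p = 0.9  # assumed chance that any one item goes right
for n in (1, 3, 5, 10):
    print(f"{n} items bundled -> {bundled_success(p, n):.2f} chance of success")
# 1 item:  0.90
# 10 items bundled: roughly 0.35 -- the 'do it all in one release' plan
# has become more likely to fail than to succeed.
```

Ten individually modest risks, bundled, turn a 90% bet into a coin flip you would not take.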
Implicit in this is also a drive to find and log all risks, when we know from experience that it’s a few big risks we are not yet aware of that end up knocking us over. Rather than logging small risks we know, we should search for big ones we don’t know 🙂
Much can be achieved by splitting risks as far as you can, dealing with them one by one, and starting with the most important ones.
What do we need to know first? What options do we have? How can we set up experiments to find out what the most viable options are?
Tackling the biggest risks first is also likely to surface, sooner, other big risks we did not know about 🙂