Define ‘Risk’…

This should be an easy one: just point at ISO 31000 and its definition of risk as ‘the effect of uncertainty on objectives’. But that same easy definition also raises more questions than it answers, e.g.,

  • How to define [hence | and] classify effects,
  • How to define [hence | and] classify uncertainty (a biggie …!),
  • How to define [hence | and] classify objectives,
  • How to establish measurement of effects,
  • How to establish measurement of uncertainty,
  • How to establish measurement of objectives

that all have an impact on, and are impacted by, the definition. Hopefully, I don’t have to elucidate ‘define hence classify’, ‘define and classify’ or ‘establish measurement’ regarding effects, uncertainties or objectives. I’ve been at the subject before (here and in many posts since), so much that it hurts; me too. But still, many won’t listen and remain stuck in their proven (sic) mistaken belief that the World we’re dealing with can be caught in models to ‘predict’ the future, and/or remain stuck in by-now-approaching-hilarious classifications like Basel II–IV’s… or in the slowly but steadily outdating classical information-security mantra of CIA: those three classes of objectives don’t cut it anymore.
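To make that concrete, here is a minimal, purely hypothetical sketch (my toy, not ISO 31000’s and not anyone’s standard): the moment you try to encode ‘the effect of uncertainty on objectives’ as data, every bullet above returns as a design decision you have to take a stand on. All class names, enum members and fields below are invented for illustration.

```python
# A toy encoding of ISO 31000's "effect of uncertainty on objectives".
# Every choice here answers one of the six questions above -- usually badly.
from dataclasses import dataclass
from enum import Enum

class Effect(Enum):          # define [hence | and] classify effects: which classes?
    LOSS = "loss"
    GAIN = "gain"            # ISO 31000 allows upside; many registers silently drop it

class Objective(Enum):       # define [hence | and] classify objectives: CIA won't suffice
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    # ...whatever class is missing here is, by construction, invisible to the analysis

@dataclass
class Risk:
    objective: Objective
    effect: Effect
    uncertainty: float       # measurement of uncertainty: probability? interval? belief?
    magnitude: float         # measurement of effects: in what unit, on what scale?

# Even this toy forces answers to all six bullets before a single record exists.
r = Risk(Objective.AVAILABILITY, Effect.LOSS, uncertainty=0.2, magnitude=50_000.0)
print(r)
```

Whatever you leave out of those enums and scales is, by construction, invisible to every analysis built on top of them; which is rather the point of the questions above.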

For the more advanced reader (approx. 90% by now, hopefully), the question remains: How to define and classify uncertainty, effect(s!) and objectives ..? Standard classifications have all had their stab at it, but failed on the fuzzy nature of those phenomena. Some leaned to the Uncertainty side, trying foremost to classify threats. Some leaned to the Effects side, with their vulnerabilities-first approach via the Impacts classification. Some even had Objectives in mind when pondering the downside potential of loss-of-upside potential, including the scouring for opportunities to any (0–100%) degree. And then, there’s the abovementioned surefire laugh over ‘Event’-driven analysis… yes, consistency, completeness and orthogonality remain essential.
But above all, none captured the time-fluctuating confluence of causes, effects, impacts, … [what have you], which all have such unanalysable structure. That is due to their continuous nature, contrasted with the discrete nature so often, and so falsely, assumed. [If you don’t get the fundamental difference between discrete and continuous phenomena, go study core math in depth, length and breadth. Which is helpful against a great many ills of the mind…] And due to the enormously-more-than-three-body problem of interactions [the link is about grand business, not the petty risk-analysis kind, but the link therein is valid for the above, too].
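As a hedged, purely illustrative sketch of that discrete-versus-continuous point (all numbers and distributional choices below are my assumptions, not anything from a standard or from this post): compare a discrete ‘likelihood bucket × impact bucket’ estimate against a continuous simulation of the very same assumed loss process.

```python
# Minimal sketch, not a recommended method: it only shows how a discrete
# "likelihood x impact" score and a continuous simulation of the same
# assumed process can tell very different stories. All parameters are invented.
import numpy as np

rng = np.random.default_rng(42)

# Assumed continuous loss process: Poisson event counts, lognormal severities.
lam = 3.0                      # expected events per year (assumption)
mu, sigma = 10.0, 1.2          # lognormal parameters for a single-event loss (assumption)
years = 50_000

annual_losses = np.array([
    rng.lognormal(mu, sigma, rng.poisson(lam)).sum() for _ in range(years)
])

# Discrete "heat map" view: one likelihood bucket times one impact bucket.
bucket_frequency = 3           # "likely" bucket, events per year
bucket_impact = np.exp(mu)     # "medium" bucket, roughly the median single loss
heatmap_estimate = bucket_frequency * bucket_impact

print(f"heat-map annual estimate  : {heatmap_estimate:,.0f}")
print(f"simulated mean annual loss: {annual_losses.mean():,.0f}")
print(f"simulated 99th percentile : {np.percentile(annual_losses, 99):,.0f}")
```

Under these toy assumptions the single-bucket figure undershoots the simulated mean and says nothing whatsoever about the tail, which is exactly the fluctuation the buckets throw away.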
Modeling in order to understand may work, but only to understand the exaggeratedly dumbed-down model, the conclusions of which, if taken normatively, are (in this case there is such a thing as absolute) certain not to apply or work, so why bother. Oh, maybe you may bother anyway, to get a feel for your inadequacy. [Note: I don’t feign to be above that. But I don’t allow you to assume you are, as that is both a theoretical and a practical logical error.]

Yes, yes, I know; there very probably is no One Classification Fits All, then. But we may dream, and strive for it, may we not ..? And at least be very, very clear about it: it being the approach we do take, and what it might potentially (with a probability above zero but certainly far off 100%) achieve. Aren’t GUTs, like the Standard Model or the hyperdimensional string theories, the stuff that dreams are made of, too ..?
As always, your suggestions, please. And:
[Photo: DSC_0643]
[Just wait till Etna Says Boom. Or don’t.]
