On qualitative calculations


[Guess the country. Wrong.]

Some time ago, I hinted that maybe some combination of fuzzy logic and wavelet-like mathematics might deliver tools for qualitative risk management calculations.
Now is the time to delve a little into the subject. If possible, somewhat methodologically.

But hey, you know me; that will not work too perfectly, as perfection is boring. I’ll just kick it off with a discussion on scales, and throw in a handful of Ramsey-Lewis work:
There’s the nominal scale, where one can only tally up the categories or bins one would want to place one’s observations in. Presumably, we’d have observations of just one quality (‘aspect’) of the population of observed items [either ‘real’, if there is such a thing, or abstract..! Oh how many problems we face here already]. Between categories one can at most establish equality or inequality; no ordering, no adding or subtracting, no ‘size’-comparing.
There’s the ordinal scale, too. Here we have some form of ranking of the categories; they can be ordered. Categories are either dichotomous (placing an observation in one category excludes placing it in another) or non-dichotomous (category membership is non-exclusive: ‘completely agree’ implies ‘mostly agree’). At least we can compare, by order, ‘larger’ and ‘smaller’, or by precedence, but not much more; no calculation.
On to the interval scale, which has degrees of difference, like (everyday) temperature and calendar dates. With their arbitrary zero we can’t talk sensibly about ratios: 10 °C is not twice as warm as 5 °C. But we can add and subtract, just not multiply and divide.
This, by the way, is the ‘first’ scale considered to be quantitative; the ones above are qualitative..!
Next is the ratio scale, which has all of the above plus a true zero, hence full calculation can be done.
Trick question for you: Where would the binary scale go …?
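To make the hierarchy concrete: a minimal sketch (Python, with hypothetical names) of the four levels and the operations each one meaningfully supports. Note how information can only be degraded downwards, from ratio towards nominal, never enriched upwards for free:

```python
# A minimal sketch (hypothetical names) of the levels of measurement and the
# operations each scale supports. Quantitative calculation only starts at the
# interval level; ratio adds meaningful ratios thanks to a true zero.
from enum import Enum

class Scale(Enum):
    NOMINAL = 1   # categories only: same / not the same, counting per bin
    ORDINAL = 2   # adds ordering: larger / smaller, rank, median
    INTERVAL = 3  # adds differences: add / subtract (arbitrary zero)
    RATIO = 4     # adds a true zero: multiply / divide, ratios

PERMITTED = {
    Scale.NOMINAL:  {"equality", "count"},
    Scale.ORDINAL:  {"equality", "count", "order"},
    Scale.INTERVAL: {"equality", "count", "order", "add", "subtract"},
    Scale.RATIO:    {"equality", "count", "order", "add", "subtract",
                     "multiply", "divide"},
}

def allowed(scale: Scale, operation: str) -> bool:
    """Return True if the operation is meaningful on the given scale."""
    return operation in PERMITTED[scale]

# 10 °C is not 'twice as warm' as 5 °C: ratios need a ratio scale.
assert allowed(Scale.INTERVAL, "subtract")
assert not allowed(Scale.INTERVAL, "divide")
assert not allowed(Scale.ORDINAL, "add")
```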

Just to mention Cohen’s kappa for agreement among raters on qualitative scores; it may, or should, come back later. And to mention all sorts of ‘Big’ Data analysis conclusions, with all the monstrosities of mathematical error in them, squared with a lack of understanding of the above (in particular, that information degrades in the direction from ratio to nominal; it cannot go the other way around without adding relatively arbitrary information..!)
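Since Cohen’s kappa may come back later anyway: a minimal, self-contained sketch of it for two raters scoring the same items on a qualitative scale; the labels and data below are purely hypothetical:

```python
# A minimal sketch of Cohen's kappa for agreement between two raters scoring
# the same items on a nominal/ordinal scale (e.g. Low / Medium / High risk).
# Hypothetical data; collections.Counter does the bookkeeping.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a = Counter(rater_a)
    marg_b = Counter(rater_b)
    chance = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (observed - chance) / (1 - chance)

a = ["High", "Medium", "High", "Low", "Medium", "High", "Low", "Medium"]
b = ["High", "Medium", "Medium", "Low", "Medium", "High", "Low", "High"]
print(round(cohens_kappa(a, b), 2))  # ~0.62: substantial, not perfect, agreement
```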

Now then, fuzzy logic. It takes an interval scale or ‘better’, and stacks a membership function (probability-like) on top to arrive at quantitative non-dichotomy. Next, work through cause-effect trees [hey, that’s a new element; I’ll come to it shortly] where some cause both happens with some probability and doesn’t happen with another probability at the same time, which propagates through OR- and AND-gates to create effects with weird probabilities / weird aspects of probability, and on through the chain / feedback loops and all. If we could assign probabilities to less-than-interval scales, we would … oh, we do that already, in fault trees, etc.
Except that we don’t. We do not, I repeat, do not, build real fault trees in general business risk management. We do the fresh-into-kindergarten version of it only! No, you don’t. And also, you don’t assign proper probabilities. You screw them up; apologies for the words.
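For reference, the ‘real’ version is not rocket science at its core. A minimal sketch of propagating point probabilities through AND/OR gates, with hypothetical events and numbers, assuming independence (which real trees, with their common causes and feedback loops, rarely grant):

```python
# A minimal sketch of how probabilities propagate through a fault tree's
# AND / OR gates, assuming independent basic events (hypothetical numbers,
# hypothetical tree; real trees need common-cause and feedback handling).

def gate_and(*probs):
    """AND gate: the effect needs all causes; probabilities multiply."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def gate_or(*probs):
    """OR gate: the effect needs any cause; 1 minus 'none of them happen'."""
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical: data breach = (phishing succeeds AND no MFA) OR unpatched server
p_phishing, p_no_mfa, p_unpatched = 0.30, 0.40, 0.05
p_breach = gate_or(gate_and(p_phishing, p_no_mfa), p_unpatched)
print(round(p_breach, 3))  # 0.164
```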

So, we need combinations. We need the flexibility to work with qualitative scales when (not if) we can do no better, and with quantitative scales wherever we can. Being careful about the boundaries ..! Maybe that is (the) key.

[Interlude: wavelets, we of course use on ratio scales, as a proxy for fuzzy logic in continuous mathematics (?)]

Why would this be so difficult ..? Because we have so little data ..? That’s solvable, by using small internal crowd-sourced measurements, with Cohen’s kappa (et al.), as mentioned, to check that the raters actually agree.
Because we have no fault trees? Yes, indeed, that is your fault: trees are essential to analyse the situations anyway. Acknowledging the difficulties of getting to any form of completeness, including the feedback loops and the numerous time-shifts (Fourier would even allow ahead-of-time feedbacks …! through negative frequencies …). To say nothing of consistency difficulties; one could even state that any worthwhile fault tree, i.e. any one that includes sufficient complexity to resemble reality in a modelling way (i.e. leaving out unnecessary detail (only!)), will have inconsistencies included, or it’s not truly representative … ☺

Hm, I’m starting to run in circles. But:
• Haven’t seen fault trees in risk management, lately. We need them;
• Let’s apply ‘fuzzy logic’ calculations to fault trees. We can (a minimal sketch follows below);
• When we use less-than-ratio scales, let’s be clear about the consequences. Let’s never overstate the ‘mathematical rigour’ (quod non) of our work, as we will, with certainty, be held to account for the errors that causes.
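Putting the last two bullets together: a minimal sketch that replaces the point probabilities by triangular fuzzy numbers (low, most plausible, high) and pushes them through the same hypothetical tree as above. All names and figures are illustrative assumptions, and independence is still assumed; the gain is only that the result now advertises its own imprecision instead of pretending to a single exact number:

```python
# A minimal sketch of 'fuzzy' fault-tree arithmetic: each basic-event
# probability is a triangular fuzzy number (low, mode, high) instead of a
# false-precision point estimate. Because the AND / OR formulas are monotone
# in each argument, combining the three components separately gives the
# correct bounds and modal value (independence assumed; the shapes in between
# are approximate). Names and numbers are hypothetical.
from typing import NamedTuple

class Fuzzy(NamedTuple):
    low: float   # optimistic bound
    mode: float  # most plausible value
    high: float  # pessimistic bound

def f_and(a: Fuzzy, b: Fuzzy) -> Fuzzy:
    return Fuzzy(a.low * b.low, a.mode * b.mode, a.high * b.high)

def f_or(a: Fuzzy, b: Fuzzy) -> Fuzzy:
    return Fuzzy(1 - (1 - a.low) * (1 - b.low),
                 1 - (1 - a.mode) * (1 - b.mode),
                 1 - (1 - a.high) * (1 - b.high))

# Hypothetical: breach = (phishing AND no MFA) OR unpatched server,
# now with honest 'somewhere between' estimates instead of single numbers.
phishing  = Fuzzy(0.20, 0.30, 0.50)
no_mfa    = Fuzzy(0.30, 0.40, 0.60)
unpatched = Fuzzy(0.02, 0.05, 0.10)
breach = f_or(f_and(phishing, no_mfa), unpatched)
print([round(x, 3) for x in breach])  # [0.079, 0.164, 0.37]
```

The componentwise trick is legitimate here only because AND and OR are monotone in each input; anything fancier (feedback loops, dependent events) needs proper α-cut interval arithmetic, and that is exactly where the boundary-keeping of the last bullet earns its keep.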
