At least, you can have your PIA

Privacy Impact Assessments are treated far too much as a given in (new European regulations’) privacy-anything these days. Yes, PIAs are a critical step, on the very critical path towards compliance in substance. Because when they aren’t done well, or aren’t done with any true attention and intention at all, your compliance effort will fail, if not formally then in practice – with equally serious, break-your-business, high-probability risks.

First, this:
[Heaps upon Sea again indeed]

The point being: PIAs should be done with an actual interest in protecting the privacy of stakeholders. When done less than whole-heartedly, the results have hardly any value, because that would demonstrate one doesn’t understand the ethical imperatives of privacy protection in the first place. From which it would follow that all required (other) policies and measures would be half-hearted, ill-focused, and sloppily implemented ‘as well’. Which isn’t as much of a stretch of reasoning as you may have thought on first reading this…

And then, a great many organisations don’t even start with PIAs; they just jump in at all angles and steps. With PIAs still being required, they are then half-heartedly carried out somewhere during or after the fact, while all the rest is implemented on assumptions that will not be met.

To which I would add: in the above, ‘you’ refers to the ones in control (“governance”, to use that insult) at organisations that would have to be compliant. Not you, the advisors/consultants, internal (in 2nd and 3rd LoDs) or external, who push organisations. [Don’t! Just tell, record, and after the disaster ‘told you so’ them. There’s no use at all kicking this dead horse.]
But oh well, why am I writing this? Why am I hinting at ethics in your governance? That’s an oxymoron at your organisation – do you claim to have the one or the other?

Feel free to contact if you’d like to remedy at least this part of your Privacy non-compliance…

4th of July, a message from the US of A

On controls and their systemic ineffectiveness per se. As written about a lot in the past year on this site, the PCAOB now finally seems to be finding out how things have been ever since SOx… in [simple block quote copy from this post by James R. (Jim) Peterson]:

The PCAOB Asks the Auditors an Unanswerable Question: Do Company Controls “Work”?

“Measure twice – cut once.”
— Quality control maxim of carpenters and woodworkers

If there can be a fifty-million-euro laughingstock, it must be Guillaume Pepy, the poor head of the SNCF, the French railway system, who was obliged on May 21, 2014, to fess up to the problem with its € 15 billion order for 1860 new trains—the discovery after their fabrication that the upgraded models were a few critical centimeters too wide to pass through many of the country’s train platforms.

Owing evidently to unchecked reliance on the width specifications for recent installations, rather than actual measurement of the thirteen hundred older and narrower platforms, the error is under contrite remediation through the nation-wide task of grinding down the old platform edges.

That would be the good news – the bad being that since the nasty and thankless fix is doubtless falling to the great cohort of under-utilized public workers who so burden the sickly French economy, correction of the SNCF’s buffoonish error will do nothing by way of new job creation to reduce the nation’s grinding rate of unemployment.

The whole fiasco raises the compelling question for performance quality evaluation and control – “How can you hope to improve, if you’re unable to tell whether you’re good or not?”

This very question is being reprised in Washington, where the American audit regulator, the Public Company Accounting Oversight Board, is grilling the auditors of large public companies over their obligations to assess the internal financial reporting controls of their audit clients.

As quoted on May 20 in a speech to Compliance Week 2014, PCAOB member Jay Hanson – while conceding that the audit firms have made progress in identifying and testing client controls — pressed a remaining issue: how well the auditors “assess whether the control operated at a level of precision that would detect a material misstatement…. Effectively, the question is ‘does the control work?’ That’s a tough question to answer.”

So framed, the question is more than “tough.” It is fundamentally unanswerable – presenting an existential problem and, unless revised, having potential for on-going regulatory mischief if enforced in those terms by the agency staff.

That’s because whether a control actually “works” or not can only be referable to the past, and cannot speak to future conditions that may well be different. That is, no matter how effectively fit for purpose any control may have appeared, over any length of time, any assertion about its future function is at best contingent: perhaps owing as much to luck as to design — simply not being designed for evolved future conditions — or perhaps not yet having incurred the systemic stresses that would defeat it.

Examples are both legion and unsettling:

  • The safety measures on the Titanic were thought to represent both the best of marine engineering and full compliance with all applicable regulations, right up to the iceberg encounter.
  • A recovering alcoholic or a dieter may be observably controlled, under disciplined compliance with the meeting schedule of AA or WeightWatchers – but the observation is always subject to a possible shock or temptation that would hurl him off the wagon, however long his ride.
  • The blithe users of the Value-At-Risk models, for the portfolios of collateralized sub-prime mortgage derivatives that fueled the financial spiral of 2007-2008, scorned the notion of dysfunctional controls – nowhere better displayed than by the feckless Chuck Prince of Citibank, who said in July 2007 that, “As long as the music is playing, you’ve got to get up and dance… We’re still dancing.”
  • Most recently, nothing in the intensity of the risk management oversight and reams of box-ticking at Bank of America proved satisfactory to prevent the capital requirement mis-calculation in April 2014 that inflicted a regulatory shortfall of $ 4 billion.

Hanson is in a position to continue his record of seeking improved thinking at the PCAOB — quite rightly calling out his own agency, for example, on the ambiguous and unhelpful nature of its definition of “audit failure.”

One challenge for Hanson and his PCAOB colleagues on the measurement of control effectiveness, then, would be the mis-leading temptation to rely on “input” measures to reach a conclusion on effectiveness:

  • To the contrary, claimed success in crime-fighting is not validated by the number of additional police officers deployed to the streets.
  • Nor is air travel safety appropriately measured by the number of passengers screened or pen-knives confiscated.
  • Neither will any number of auditor observations of past company performance support a conclusive determination that a given control system will be robust under future conditions.

So while Hanson credits the audit firms – “They’ve all made good progress in identifying the problem” — he goes too far with the chastisement that “closing the loop on it is something many firms are struggling with.”

Well they would struggle – because they’re not dealing with a “loop.” Instead it’s an endless road to an unknown future. Realistic re-calibration is in order of the extent to which the auditors can point the way.

And … there you go, for today’s sake:
[Watching (us against) you …]

On qualitative calculations


[Guess the country. Wrong.]

Some time ago, I hinted that maybe some combination of fuzzy logic and wavelet-like mathematics might deliver tools for qualitative risk management calculations.
Now is the time to delve a little into the subject. If possible, somewhat methodologically.

But hey, you know me; that will not work too perfectly, as perfection is boring. I’ll just kick it off with a discussion of scales, and throw in a handful of Ramsey-Lewis work:
There’s the nominal scale. Where one can only sum up the categories or bins one would want to place one’s observations in. presumably, we’d have observations of just one quality (‘aspect’) of the population of observed items [either ‘real’, if there is such a thing, or abstract..! Oh how many problems we face here already] One cannot establish (in)equality between categories, nor add or substract, nor ‘size-‘compare.
There’s the ordinal scale, too. Here, we have some form of ranking of categories, they can be ordered. Categories are either dichotomous (an observation put into one category excludes the observation be also put into another), or non-dichotomous (category membership is non-exclusive. ‘Completely agree’ supposes ‘mostly agree’). At least we can compare, by order, between ‘larger’ and ‘smaller’ or by precedence, but not much more; no calculation.
On to the interval scale. Which has degrees of difference, like (common) temperature and dates. With their arbitrary zero we can’t talk sensibly about ratios. 10°C is not twice as warm as 5°C … But we can add and substract, just not multiply and divide.
This, by the way, is the ‘first’ scale considered to be quantitative, the above are qualitative..!
Next is the ratio scale, that has all of the above and a definitive zero, hence full calculation can be done.
Trick question for you: Where would the binary scale go …?
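To make the operations-per-scale point tangible, here’s a minimal sketch in Python (the little truth table is just my summary of the above, not any standard library):

```python
# Which operations are meaningful on which scale of measurement.
SCALES = {
    # scale:     (equality, ordering, add/subtract, multiply/divide)
    "nominal":   (True,  False, False, False),
    "ordinal":   (True,  True,  False, False),
    "interval":  (True,  True,  True,  False),
    "ratio":     (True,  True,  True,  True),
}

OPS = ("equality", "ordering", "add_subtract", "multiply_divide")

def allowed(scale: str, op: str) -> bool:
    """True if the operation is meaningful on the given scale."""
    return SCALES[scale][OPS.index(op)]

# Interval scale: differences are meaningful, ratios are not.
assert allowed("interval", "add_subtract")         # 10°C - 5°C makes sense
assert not allowed("interval", "multiply_divide")  # 'twice as warm' does not
# The qualitative/quantitative boundary lies between ordinal and interval:
assert not allowed("ordinal", "add_subtract")
```

Any risk-scoring arithmetic that multiplies ordinal 1–5 scores silently pretends the bottom two rows of that table don’t exist.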

Just to mention Cohen’s kappa for qualitative score agreement among raters; it may, or should, come back later. And to mention all sorts of ‘Big’ Data analysis conclusions, with all the monstrosities of mathematical errors in them, squared with a lack of understanding of the above (in particular, that information degrades in the direction from ratio to nominal, and can impossibly go the other way without adding relatively arbitrary information..!)
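Since Cohen’s kappa will come back later: it’s simple enough to compute. A sketch, with invented rater data – kappa is (p_o − p_e) / (1 − p_e), observed agreement corrected for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n)            # by chance
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Two raters scoring the same ten risks on an ordinal H/M/L scale (made up):
a = ["H", "H", "M", "L", "M", "H", "L", "L", "M", "H"]
b = ["H", "M", "M", "L", "M", "H", "L", "M", "M", "H"]
print(round(cohens_kappa(a, b), 2))  # 0.70 -- 1.0 = perfect, 0 = chance-level
```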

Now then, fuzzy logic. It takes an interval scale or ‘better’, and stacks a probability function on top to arrive at quantitative non-dichotomy. Next, work through cause-effect trees [hey, that’s a new element – I’ll come to it shortly] where some cause both happens with some probability and doesn’t happen with another probability at the same time, which propagates through OR- and AND-gates to create effects with weird probabilities / weird aspects of probability, and on through the chain / feedback loops and all. If we could assign probabilities to less-than-interval scales, we would … oh, we do that already, in fault trees, etc.
Except that we don’t. We do not, I repeat do not, build real fault trees in general business risk management. We do the fresh-into-kindergarten version of it only! NO you don’t. And also, you don’t assign proper probabilities. You screw them up; apologies for the words.
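For the gate propagation itself, a minimal sketch – assuming independent basic events, which is precisely the assumption that real meshes of cause and effect, with their feedback loops, will violate; all probabilities below are invented:

```python
from math import prod

def and_gate(*p):
    """Effect needs all causes: product of (assumed independent) probabilities."""
    return prod(p)

def or_gate(*p):
    """Effect needs at least one cause: complement of none occurring."""
    return 1 - prod(1 - x for x in p)

# Hypothetical top event: breach = (phishing OR stolen laptop) AND weak crypto
p_breach = and_gate(or_gate(0.30, 0.05), 0.40)
print(round(p_breach, 3))  # 0.134
```

A fuzzy variant would replace these point probabilities with membership functions over an interval scale and propagate those through the same gates – roughly the combination argued for here.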

So, we need combinations. We need the flexibility to work with qualitative scales when (not if) we can do no better, and with quantitative scales wherever we can. Being careful about the boundaries ..! Maybe that is (the) key.

[Interlude: wavelets we of course use on ratio scales, as a proxy for fuzzy logic in continuous mathematics (?)]

Why would this be so difficult ..? Because we have such limited data ..? That’s solvable, by using small internal crowd-sourced measurements, with Cohen’s kappa (et al.) as mentioned.
Because we have no fault trees? Yes, indeed, that is your fault – trees are essential to analyse the situations anyway. Acknowledging the difficulties of getting to any form of completeness, including the feedback loops and numerous time-shifts (Fourier would have ahead-of-time feedbacks …! through negative frequencies…). Not to mention consistency difficulties; one could even state that any worthwhile fault tree, i.e., any one that includes sufficient complexity to resemble reality in a modelling way (i.e., leaving out unnecessary detail (only!)), will have inconsistencies included, or it’s not truly representative… ☺

Hm, I start to run in circles. But:
• Haven’t seen fault trees in risk management lately. We need them;
• Let’s apply ‘fuzzy logic’ calculations to fault trees. We can;
• When we use less-than-ratio scales, let’s be clear about the consequences. Let’s never overstate the ‘mathematical rigour’ (quod non) of our work, as we will be dragged to account for the errors that causes, with certainty.

Fuzzy risk language


[Antwerp. Seriously.]

In some previous post, I posited that we should move from quantitative (quod non) to qualitative or even intuitive risk management.
And how that may be difficult. ‘cause it is.
As an intermediary step, I propose to build a better language with which to communicate, discuss and calculate (sic) with qualitative risk management.

Because I see a place for a combination of fuzzy logic and wavelet theory, including neural network signal combination functions.
As my time is limited, this time of year, would anyone have pointers to what’s already out there in papers, practical applications, etc..? That could kickstart the discussion. And I’ll return with more, better, more extensive, more thought out stuff on the subject later.

The 15.5 risk

‘Your 15.5 risk is of no interest at all! I have a 15.6 risk!’ ‘Hm, I only have a 13.1 …’
Seriously.
You know you’re doing that. But will you admit it, and learn, and move to something better ..?


[Hi, DC!]

There’s a lot wrong in risk management today. I mean, not only can one still rant about the ‘three lines of defense’ (quod non), as I do regularly on this blog, but one can also dive into the details of how risks are managed, if not when, and find a lot of systemic error and, in particular, non-thinking all around.

Let’s start with one core element: weigh(t)ing and comparison of risks. With my guesses, based on decades of experience and science/literature:
Do you include all risks, or just the tiny fraction that your mind can get a hold of? My guess is: The latter. So you miss the vast majority of the risk universe and will be grossly incomplete.
Do you include upside potentials (actions unthought of, and uncontrolled/unsmashed by measures) too? My guess is: No, again you’re incomplete, but also you’re so biased I can’t trust you anymore.
Do you use High-Medium-Low for impacts? My guess is: Yes. Or you use 1-5 scales or so, maybe (sic) even with sort-of indicator thresholds or brackets to determine what goes where. But you don’t realise that impacts can vary, very much so, and over time, too. Averages will not do in subsequent calculations or other analysis! You must have (continuous!) impact functions of time and chance. If they’re hard to establish (I’d say: impossible, given the scarcity of data!), that’s your bad.
Do you use High-Medium-Low for probability (frequency expectations)? My guess is: Yes. Or you use 1-5 scales or so, maybe (sic) even with sort-of indicator thresholds or brackets to determine what goes where. But you don’t realise that probabilities can vary, very much so, and over time, too. Averages will not do in subsequent calculations or other analysis! You must have (continuous!) probability functions of time and impact. If they’re hard to establish (I’d say: impossible, given the scarcity of data!), that’s your bad.
Do you know the difference between statistics and chance calculus? I guess not. Hah, and yet you still abuse both ..? Do you know the difference between discrete and continuous mathematics (functions)? If not, you’ll make errors all around. How would you arrive at a 15.5 score when all choices are discrete 1-5 …?
And if you noticed the duality of impact functions of probability, and probability functions of impact: you’re welcome. And if you noticed that on top of all this, you should also calculate (sic) the cost (impact) of pre-emptive, detective, corrective etc. measures, and the chances of their partial or full (in)effectiveness, in a mesh of cause and effect: you’re welcome again.
Do you use Impact × Chance to establish the severity of risks? Guessed so. But unless you take the whole continuous (!) two-dimensional landscape of every risk into account, you’re gonna fail with certainty – see the sketch just below.
Do you compare relative risks by their combined scores? Yeah, that indeed was the whole purpose of your exercise. But you failed already on so many points that the results are both literally and figuratively ridiculous.
And you continue by considering a ‘15.5’ risk to be worse or higher than some ‘15.4’ risk….
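The sketch referred to above: a small Monte Carlo comparison (all numbers invented) of two risks that would land on the same discrete Impact × Chance score, yet behave completely differently once impact is a distribution instead of an average:

```python
import random

random.seed(42)

def simulate(p_event, impact_sampler, trials=100_000):
    """One-year Monte Carlo: the event either hits (sampled impact) or not."""
    losses = sorted(impact_sampler() if random.random() < p_event else 0.0
                    for _ in range(trials))
    mean = sum(losses) / trials
    p99 = losses[int(0.99 * trials)]  # 99th-percentile annual loss
    return round(mean), round(p99)

# Risk A: frequent, modest, narrowly spread impact.
# Risk B: rare but heavy-tailed -- the same *average* annual loss.
print(simulate(0.50, lambda: random.gauss(100, 10)))        # ~(50, 120)
print(simulate(0.05, lambda: random.expovariate(1 / 1000))) # ~(50, 1600)
```

Identical ‘expected loss’, wildly different tails; a single averaged score cannot tell these two apart.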

And you don’t consider the enormous mesh of causes and effects (just one by one, or per single event only), with all sorts of feedback and feedforward loops, and the mesh of ‘preventative’, detective and corrective mitigating measures in between, all with their distinct cost(impact!)s, mutual reliance, reinforcements and other influences, all with their inefficiency and ineffectiveness (sic) levels – in percentages? In numbers of incident elements caught and missed?

We may continue. But it’ll lead to more of the same; you’re fooling yourself, and fooling decision makers. I didn’t know that was in your job description. What do you think would happen if the decision makers found out?
And oh yes, they will! You lead them astray so much that they will find (you) out, about plainly wrong negative-impact-times-frequency totals in Write-Offs, and when (not if) they dig deeper, they’ll find quite a lot of unnecessary, inefficient and ineffective Risk Mitigation Measures Overhead Cost.

Is there another way? Yes of course.
But it’s not easy. It takes the European (vis-à-vis the wrongly dubbed Anglo-Saxon) approach, where the focus is not on data but on qualitative scenarios. As with data, these can be had externally, or internally from experience and insight. As with data, external inputs can be of doubtful relevance and fit. As with data, internal input may (in the case of data: will!) be (much) too limited to work with. And yes, by going through the motions to determine some risk on all four areas (external vs. internal, data vs. scenario) and finding some gross common denominator, one can get a balanced view of things. But it’ll be balanced over four erroneous outcomes; way to go!
If the outcomes are understood at all. Value At Risk being the case in point; it would better be called Amount of Company Value Not Being Lost At Some Random Probability. Or so, depending on your working definition and working understanding of VaR…
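Which is easy to make concrete – VaR is nothing but a quantile of a (here: invented) loss distribution, a threshold not exceeded with some chosen probability, silent about how bad the tail beyond it gets:

```python
import random

random.seed(7)

def value_at_risk(losses, confidence=0.99):
    """The loss level exceeded with probability 1 - confidence.
    A threshold, not a worst case: it says nothing about the tail beyond."""
    ordered = sorted(losses)
    return ordered[int(confidence * len(ordered)) - 1]

# Hypothetical heavy-tailed one-year losses:
losses = [random.expovariate(1 / 100) for _ in range(100_000)]
print(round(value_at_risk(losses, 0.99)))
# ~460: 'we will not lose more than ~460, 99 years in 100' -- i.e., the
# Amount of Company Value Not Being Lost At Some (chosen) Probability.
```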

The only solution seems to be to stop using a quantitative approach and switch to a radically qualitative approach. This may be awkward, but quantities are just so much too weak to describe reality that using them is a fraud flying in the face of it.
And indeed, we don’t know how to do organisation-wide qualitative risk analysis and management, let alone how to do it at meso and macro levels, let alone how to communicate, understand and argue about one risk versus the next. But we have nothing else that can work; we must. And it may fit better with the way humans, the human brains, work, with all their psychological ‘flaws’ (quod non!) in the management of risks. Kahneman, remember? Well, maybe aligning with what our brains have gotten used to handling over the aeons, from the savannah to our latter-day deserts of cubicle offices, may be the best way to go. And why not? Do you really want to argue that today’s offices differ from hunter-gatherer tribes battling the elements, predators and prey, and other tribes?

So, qualitative management of risk it is. Any takers?

Maverisk / Étoiles du Nord