Managing Fortuna’s Risks

On how Risk Management is self-defeating… or just something one has to do while Life has other plans for you(r organization).

First, the usual picture:
[Shadow (play) of Dudok, Hilversum again of course]

First, the Defeating part. This, we hardly ever discuss at length, fully enough, but when we do, it’s so obvious you understand why: because RM is about cost avoidance, even if only opportunity cost avoidance. Which makes you the Cassandra, the Boy Who Cried Wolf of the courtiers (sic). It’s just not interesting, not entrepreneurial enough. [Even if that would be pearls before swine…]

Second, this is why it’s so hard to sell (no quotes, just outright sell, for consultancy or budget bucks) the idea of RM to executives – they only see the cost you are. They don’t see themselves as delivering anything if, if, if only they had integrated RM into their daily ‘governance’ (liar! that’s just management!) / management. They don’t want to do anything. They just don’t see the point of doing something that can’t be shown to be effective, even if, for once, others see no harm in it (though those others will not even care to flag the kindergarten-level window dressing that’s going on with the RM subject; too silly, that, to call out).

Third, this happens not only with ‘boardroom’ RM (~consultancy/advisory); it is well-established at lower ranks, too: all the way from the mundane IT security, Information Security, Information Risk Management, and Operational Risk Management (where the vast majority of organisations don’t achieve anything tangible anymore), including the wing positions of, e.g., Credit and Market Risk (which are in fact, with the caveats mentioned, the same as the previous!), to Enterprise Risk Management altogether.

Fourth, we tie in Queuillism. The Do Nothing part, almost as in Keynesianism, where, in the latter, future-mishap prevention should be arranged during the years when government intervention isn’t required as such. As in Joseph’s seven fat years versus the seven years of famine (Genesis 41). How does this reflect on RM? Would it not just be BCM in its widest, enterprise-wide sense? Isn’t that what ‘management’ is about, again..? Just sanding off the rough edges and, for the rest, giving room to the actual stars, the employees at all levels, to let them bring out their best – so exponentially much more than you can achieve by mere, petty command & control. You raise KPIs, I rest my case about your incompetence. And this goes for governments even more. Just enormously expensive busywork.

Fifth, and finally, the fourth trope points in the direction of Machiavelli’s Fortuna. Which was also covered by Montaigne, of course. It doesn’t matter what you do; in the end, Life has other plans for you. Sobering, eh? Oh, but again, you can shave off the rough edges for yourself, too. Just don’t think that, in the end, it will matter much beyond some comfort to you. Catharsis, and move on.

OK, since you held out so long:
[The same, from another angle]

Stop bLeading, start PLeading

There seems to be a wave of ‘management’/‘executive’ (quoting for the laughability of those words, as in this) advice on how to Lead. Mostly, on how to push underlings into following better, i.e., more, towards your goals.
Which is wrong in many ways:

  • Your goals are ulterior, or very inferior. If you lead in such a pushing way, your team members don’t give a r.ts a.. about your goals, objectives, strategy, or whatever, and even less, much less, about you. They’ll be in it only not to get punished by being fired too early, ’cause they know they’ll be fired anyway. And, by manner of speaking (if you’d take it as a pointer, you’d miss my point), they’ll phone the WBC with the details of your funeral.
  • If you whip, you will get whipped. More, harder, longer than an eye for an eye [Interesting side note: That iconic reference was originally only about business transactions, nothing physical, to get even but no more than even].
  • You don’t push; at most, you pull a bit in the right direction. At best, you plead for the ulterior motives, the lofty goals out there. That have nothing to do with you. You are redundant, replaceable, and guess what – more often than not, you will be replaced on the verge of achieving your own slave’s targets, because someone else will reap your benefits. But then again, if your goal is really ulterior and you honestly mean it, you don’t care! Don’t care about all the effort, and the gains being spread so wide around you. You do care about that ulterior goal being reached, not about any other reward, let alone the mammon from hell.
  • If you push, the grader (like in this) will only dig itself in.
  • If you push, all your words will be called out for their emptiness before they’re even out of your mouth. ‘Supporting creativity / out-of-the-box thinking’..? People will know which side their bread is buttered on.
  • And more.

Which all will make you a deliriously stressed-out fool.
One that will have done irreparable damage to many around you. For which you should pay every day for the rest of your life. You make them bleed, you will bleed.

Why not go out in the effective direction, by Leading …? Leading towards ulterior goals because they’re worth it. Not by whipping, bleeding, but by pleading (mostly with empty hands..!) others into rightly-oriented action.

[Edited to add: OK, right today, as this pre-scheduled post appears, this, to the same effect…]

OK, I’ll leave you now with an upbeat picture:
[Clearly, a bridge. Cala near Hoofddorp]

Security accountability: We’re off

Remember the Vasco, i.e., Diginotar, certificate breach scandal..? For the many that don’t read Dutch easily enough, the gist of this court decision is that the previous owners of Diginotar are accountable for the damages to Vasco following the breach, since the previous Diginotar owners hadn’t secured their systems well enough.

There’s a lot to be said here.

  • E.g., that the security lapses could have been known. Due diligence …? Well, the PwC reports were all green traffic lights, at the procedures-on-paper level. But a couple of years before the take-over, a third party (ITSec, which I know for their good work [disclaimer: I have no business relations with them]) had already notified Diginotar about shop-floor-level deficiencies. Those remained uncorrected.
     
  • Add to that that the previous owners themselves actually started the legal claims, because a major part of their sale proceeds was still held in escrow, and they wanted the money. Vasco, logically, filed a counterclaim, and won.
     
  • Also, the auditors that had time and time again ‘assured’ the security of the scheme (and don’t get me started about limiting the scope of such assurance through scope vagueness or the fine print!) haven’t felt too much backlash. Yet, hopefully. Though recently, the same firm announced an initiative towards a new and, one can guess, proprietary security standard. Right.

So, are we finally seeing accountability breaking through..? I already posted something on the Target Cxx stepdown for similar security lapse(s). Now this one. The trickle’s there; let the deluge follow. That’ll teach ’em! And of course, it’ll generate a humongous market for backlog bug remediation, from the software levels up through controls to governance levels…
Even if that would stifle innovation for a while. Would that be a bad thing, having only the real improvements breaking through and not the junk ones..?

OK then, now for a picture:
[Monteriggione security was effective until it wasn’t, then was abandoned as a control approach… they did, why not all of us today?]

4th of July, a message from the US of A

On controls and their systemic ineffectiveness per se. As written about a lot in the past year on this site, the PCAOB now finally seems to be finding out how things have been ever since SOx… in [a simple block-quote copy from this post by James R. (Jim) Peterson]:

The PCAOB Asks the Auditors an Unanswerable Question: Do Company Controls “Work”?

“Measure twice – cut once.”
— Quality control maxim of carpenters and woodworkers

If there can be a fifty-million-euro laughingstock, it must be Guillaume Pepy, the poor head of the SNCF, the French railway system, who was obliged on May 21, 2014, to fess up to the problem with its € 15 billion order for 1860 new trains—the discovery after their fabrication that the upgraded models were a few critical centimeters too wide to pass through many of the country’s train platforms.

Owing evidently to unchecked reliance on the width specifications for recent installations, rather than actual measurement of the thirteen hundred older and narrower platforms, the error is under contrite remediation through the nation-wide task of grinding down the old platform edges.

That would be the good news – the bad being that since the nasty and thankless fix is doubtless falling to the great cohort of under-utilized public workers who so burden the sickly French economy, correction of the SNCF’s buffoonish error will do nothing by way of new job creation to reduce the nation’s grinding rate of unemployment.

The whole fiasco raises the compelling question for performance quality evaluation and control – “How can you hope to improve, if you’re unable to tell whether you’re good or not?”

This very question is being reprised in Washington, where the American audit regulator, the Public Company Accounting Oversight Board, is grilling the auditors of large public companies over their obligations to assess the internal financial reporting controls of their audit clients.

As quoted on May 20 in a speech to Compliance Week 2014, PCAOB member Jay Hanson – while conceding that the audit firms have made progress in identifying and testing client controls — pressed a remaining issue: how well the auditors “assess whether the control operated at a level of precision that would detect a material misstatement…. Effectively, the question is ‘does the control work?’ That’s a tough question to answer.”

So framed, the question is more than “tough.” It is fundamentally unanswerable – presenting an existential problem and, unless revised, having potential for on-going regulatory mischief if enforced in those terms by the agency staff.

That’s because whether a control actually “works” or not can only be referable to the past, and cannot speak to future conditions that may well be different. That is, no matter how effectively fit for purpose any control may have appeared, over any length of time, any assertion about its future function is at best contingent: perhaps owing as much to luck as to design — simply not being designed for evolved future conditions — or perhaps not yet having incurred the systemic stresses that would defeat it.

Examples are both legion and unsettling:

  • The safety measures on the Titanic were thought to represent both the best of marine engineering and full compliance with all applicable regulations, right up to the iceberg encounter.
  • A recovering alcoholic or a dieter may be observably controlled, under disciplined compliance with the meeting schedule of AA or WeightWatchers – but the observation is always subject to a possible shock or temptation that would hurl him off the wagon, however long his ride.
  • The blithe users of the Value-At-Risk models, for the portfolios of collateralized sub-prime mortgage derivatives that fueled the financial spiral of 2007-2008, scorned the notion of dysfunctional controls – nowhere better displayed than by the feckless Chuck Prince of Citibank, who said in July 2007 that, “As long as the music is playing, you’ve got to get up and dance… We’re still dancing.”
  • Most recently, nothing in the intensity of the risk management oversight and reams of box-ticking at Bank of America proved satisfactory to prevent the capital requirement mis-calculation in April 2014 that inflicted a regulatory shortfall of $ 4 billion.

Hanson is in a position to continue his record of seeking improved thinking at the PCAOB — quite rightly calling out his own agency, for example, on the ambiguous and unhelpful nature of its definition of “audit failure.”

One challenge for Hanson and his PCAOB colleagues on the measurement of control effectiveness, then, would be the mis-leading temptation to rely on “input” measures to reach a conclusion on effectiveness:

  • To the contrary, claimed success in crime-fighting is not validated by the number of additional police officers deployed to the streets.
  • Nor is air travel safety appropriately measured by the number of passengers screened or pen-knives confiscated.
  • Neither will any number of auditor observations of past company performance support a conclusive determination that a given control system will be robust under future conditions.

So while Hanson credits the audit firms – “They’ve all made good progress in identifying the problem” — he goes too far with the chastisement that “closing the loop on it is something many firms are struggling with.”

Well they would struggle – because they’re not dealing with a “loop.” Instead it’s an endless road to an unknown future. Realistic re-calibration is in order of the extent to which the auditors can point the way.

And … there you go, for today’s sake:
[Watching (us against) you …]

Live by the rules. Hopefully, not.

[Western version]

Hm, you wanted to live by the rules …? Hopefully, not in this way…

The next question would be: on what principles would you decide which rules to follow, and which not [so much]..? Wouldn’t that presuppose deeper and/or pre-existing moral/ethical principles? What and where’s the real instruction value..?

But then, of course:
[Relevant, MD]

Control industry

First, a picture for your viewing pleasure; you’ll need it:

[Baltimore inner harbour; rec area]

As a backlogged item, I was to give a little pointer to the design of control in (process-oriented!) industry, from which ‘we’ in the administrative world have taken some cues, like sorcerer’s apprentices, without due and proper translation and without taking the pitfalls of our botched translation job into account.

To start with, a little overview of the basics of how an industrial process (e.g., mixing paint, or medicine) is done, at the factory floor:

In which we see the main process as a (near- or completely) mathematical function of the input vector (i.e., multiple input categories), continuously (sic) resulting in the output vector, which is supposed to come as close to a desired output as possible, continuously, on the parameters that matter. The parameters that matter, and the inputs, are measured by establishing values for parameters that we can actually measure, continuously (sic). With the inputs and outputs of course including secondary and tertiary ‘products’ like waste, heat, etc., and with no element being picture-perfect: all show varying deviations from their set values (the measuring devices and, e.g., the process hardware will also have a fluctuating noise factor).
With the input vector being measured via the feedforward loop (control before anything might deviate) and the output vector through the feedback loop (control by corrective action, either tuning the process (recipe) or, more commonly, tuning the inputs). And with the control function being the (near- or complete) mathematical derivative of the transformation function.
And all measurements being seen as signals; appropriately, as they concern continuous feeds of data.
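
For the code-inclined, a minimal sketch of such a loop – a Python toy of my own making, so all names, gains and the transformation function are hypothetical, and scalar and stepwise where the real thing is vectorial and continuous:

```python
import random

# Hypothetical sketch of the loop described above: one controlled input u,
# one measured-but-uncontrolled input d (a disturbance), one output y.

SETPOINT = 100.0   # desired output value, e.g. a paint viscosity
GAIN_U = 2.0       # toy process gain: how strongly u drives the output
K_FB = 0.4         # feedback gain: how hard we correct measured output error

def process(u, d):
    """The transformation function: output from the inputs, plus noise."""
    return GAIN_U * u + d + random.gauss(0.0, 0.5)

def control_loop(steps=25):
    u = 40.0                                      # initial input setting
    for _ in range(steps):
        d = random.gauss(0.0, 5.0)                # input fluctuation
        d_measured = d + random.gauss(0.0, 0.2)   # noisy input sensor
        # Feedforward: compensate the measured input deviation *before*
        # it can show up in the output.
        u_applied = u - d_measured / GAIN_U
        y = process(u_applied, d)
        y_measured = y + random.gauss(0.0, 0.5)   # noisy output sensor
        # Feedback: corrective action, tuning the input (the common choice).
        u += K_FB * (SETPOINT - y_measured) / GAIN_U
        print(f"u={u:6.2f}  y={y_measured:7.2f}")

if __name__ == "__main__":
    control_loop()
```

Note how the feedforward branch acts on the measured input before the deviation can reach the output, while the feedback branch only corrects after the fact – by tuning the input, as is most common.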

That’s all, folks. There’s nothing more to it … Unless you consider the humongous number of inputs, outputs, and fluctuations possible in all that can be measured – and all that cannot. In all elements, disturbances may occur, varying over time. So you get the typical control-room pictures from, e.g., oil refineries and nuclear plants.
But there’s a bit more to it. On top of the control loop, secondary (‘tactical’, compared to the ‘operational’ level of which the simple picture speaks) control loop(s) may be stacked that, e.g., may ‘decide’ which recipe to use for which desired output (think fuel grades at a refinery; see the sketch below), and tertiary ones (‘strategic’..? Or would we reserve that for discrete whole new plants..?). And there are the gauges, meters, and alarm lights, in a dizzying array and display of the complexity of the main transformation function – which can be very complex indeed! If pictured as a flow chart, it may easily have many tens if not hundreds of all sorts of (direct or time-delayed!) feedforward and feedback loops in itself. Now picture how the internals of that are displayed by measurement instruments…
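
And an equally hypothetical sketch of that stacking – the ‘tactical’ loop doing nothing but deciding which recipe (setpoint) the ‘operational’ loop below it should pursue:

```python
# Hypothetical 'tactical' loop stacked on top of the operational one:
# it only selects the recipe; the lower loop does the actual controlling.

RECIPES = {"regular": 95.0, "premium": 98.0}   # think fuel grades

def operational_loop(setpoint, steps=5):
    """Stand-in for the feedback/feedforward loop sketched earlier."""
    y = 90.0
    for _ in range(steps):
        y += 0.5 * (setpoint - y)   # crude proportional correction
    print(f"settled at {y:.2f} (target {setpoint})")

def tactical_loop(demand_schedule):
    for grade in demand_schedule:          # e.g. today's production plan
        operational_loop(RECIPES[grade])   # delegate the actual control

tactical_loop(["regular", "regular", "premium"])
```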

Let’s put in another picture to freshen up your wiring a little:

[Baltimore, too; part of the business district]

Now then, we seem to have taken over the principles of these control designs into the administrative realm. Which may be all good, as it would be quite appropriate re-use of stuff that has proven to work quite soundly in the industrial process world, with all its (physical, quality) risks.
But as latter-day, freshly trained practitioners of the trade, we seem not to have considered that there are some fundamental differences between the industrial process world and our bookkeeping world.

One striking difference is that the industrial process world governs continuous processes, with mostly linear (or understandable non-linear) transformation and control functions. Even in the industrial world, non-linear but also non-continuous (i.e., discrete, in the mathematical sense) signals (sic) cause trouble, runaway processes, process deviations, etc.; these push the limits of the (continuous, duh) control abilities.
Wouldn’t it have been wise, then, to take better care when making a weak shadow copy of the industrial control principles into the discrete administrative world …? Discrete, because even when masses of data points are available, they’re infinitely discrete as compared to the continuous signals (that they sometimes were envisaged to represent). Where was the cross-over from administering basic process/production data to administering the derivative control measurements, and/or the switch from continuous signals captured by sampling, maybe (with reconstructability of the original signal being ensured by Shannon’s and others’ theories ..!!), to just discrete sampling without even an attempt at reconstruction of the original signals?
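
To make the Shannon point concrete, a toy illustration (all frequencies and rates made up by me): sample a ‘continuous’ process parameter too sparsely, and the original signal is unrecoverable, however cleverly you’d interpolate – which is roughly the position of periodic administrative measurements:

```python
import math

# Hypothetical numbers throughout: a 'continuous' process parameter
# oscillating 5 times per second, sampled at two different rates.

def signal(t):
    return math.sin(2 * math.pi * 5.0 * t)

def sample(rate_hz, n=8):
    return [signal(i / rate_hz) for i in range(n)]

# Shannon/Nyquist: reconstruction needs a rate above 2 x 5 Hz = 10 Hz.
print("50 Hz (reconstructable):", [f"{v:+.2f}" for v in sample(50)])
print(" 4 Hz (aliased):        ", [f"{v:+.2f}" for v in sample(4)])
# At 4 Hz the samples trace a slow 1 Hz wave that isn't there; the
# original signal is gone for good, whatever the later interpolation.
```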

So we’re left with vastly un- or very sloppily controlled administrative ‘processes’, with major parts of ‘our’ processes being out of our scope of control (as witnessed by the financial industry’s meltdown of 2007– ..!), non-linear, non-continuous, debilitatingly complex, erroneously governed/controlled (in fact, quod non) in haphazard fashion by all sorts of partial controllers (groups), all with their own objectives, a varying but overwhelming lack of actual ‘process’ knowledge, etc.

Just sayin’. If you have a usable (!) pointer to literature where the industrial control loop principles were carefully (sic) paradigm-transformed for use in administrative processes, I would be very grateful to hear from you.
And otherwise, I’d like to hear from you, too, for I fear it’ll be a silent time…

The InfoSec stack (Part 1b; implement or not to be)

Some have questioned why I put the Compensate part upward on the right side, instead of downward, as is usually considered.
Well, this may be obvious to those in the know, but: Compensating control weakness at a higher level simply does not work..!

This, because of some very basic principles:
1. Any ‘control’ at some level will have to be implemented at at least one lower level, or not exist at all except as some ink on paper (OK, ‘ink’ on ‘paper’).
2. ‘Implementation’ of some deficient control at any level by ‘compensating’ it at a higher level will still lead to an implementation at the level of the deficiency or lower, or it will not be implemented at all.
3. The lower the implementation level, the stronger; the higher, the weaker. ‘Compensating’ at a higher level requires more controls there to be about as strong, and hence more implementations at the same/lower levels; otherwise, the same strength may not be achieved.
4. A ‘compensating’ control at a higher level either doesn’t fit the design at that level, or it would have been there already; the deficiency isn’t ‘compensated’ by pointing at its rationale. Adding it to the design makes obvious that the design either was deficient or is overcomplete now – the resulting implementation will be flawed by design.
5. Occam-like efficiency thus requires implementation of compensating controls at the same or lower levels – see the toy model below.
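
To make principles 1 and 2 concrete, a toy model – emphatically hypothetical, with level names and controls made up for illustration:

```python
from dataclasses import dataclass, field

# Toy model of principles 1 and 2; all level names and controls invented.
# Lower list index = higher, more abstract level.
LEVELS = ["policy", "process", "application", "platform"]

@dataclass
class Control:
    name: str
    declared_at: int                       # index into LEVELS
    implemented_at: list = field(default_factory=list)

    def exists(self) -> bool:
        # Principle 1: without an implementation at its own level or
        # lower, a 'control' is just ink on paper.
        return any(lvl >= self.declared_at for lvl in self.implemented_at)

# A deficient control at the 'application' level (index 2)...
sod = Control("segregation of duties", declared_at=2)
print(sod.exists())      # False: ink on paper

# ...'compensated' one level up, at the 'process' level (index 1):
review = Control("management review", declared_at=1, implemented_at=[1])
print(review.exists())   # True -- but note it needed its *own*
# implementation at level 1 or lower, and nothing about it repaired
# the hole at level 2: the deficiency is still there.
```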


[Paris, La Défense, for pictorial reasons]

QED

Maverisk / Étoiles du Nord