Segregation in work

More state-of-the-art Watson-like Big Data Analysis than this (a.k.a. my brain) is not to be had; it yields a striking pattern, 'so' it must be true:
Vacancies for 'control'-related positions (from model(HAHA)builders via 'control(HAHAHAHA)lers' to auditors) seem to turn up ever more in the provinces and ever less in the big cities (a.k.a. Amsterdam). In the latter, hardly any vacancies appear at all, because the truly new economy doesn't lean on such a formal mechanism; word-of-mouth and informal co-optive cooperation drive innovation and employment there.
If this is ever captured in figures at all, those figures will underestimate the shift, because the new developments, in all their diversity, are exactly what fits so much less into neat little boxes. The antithesis of everything that gets stuck, of everything that emerging economies (and formerly science-fiction-like domestic sectorlets) pointedly do nót meddle in – and draw their strength from precisely that…

OK, OK, a picture to close:
DSCN6305
[Classically unclassical]

4th of July, a message from the US of A

On controls and their systemic ineffectiveness per se. As written about a lot in the past year on this site, the PCAOB now finally seems to be finding out how things have been ever since SOx… in [simple block quote copy from this post by James R. (Jim) Peterson]:

The PCAOB Asks the Auditors an Unanswerable Question: Do Company Controls “Work”?

“Measure twice – cut once.”
— Quality control maxim of carpenters and woodworkers

If there can be a fifty-million-euro laughingstock, it must be Guillaume Pepy, the poor head of the SNCF, the French railway system, who was obliged on May 21, 2014, to fess up to the problem with its € 15 billion order for 1860 new trains—the discovery after their fabrication that the upgraded models were a few critical centimeters too wide to pass through many of the country’s train platforms.

Owing evidently to unchecked reliance on the width specifications for recent installations, rather than actual measurement of the thirteen hundred older and narrower platforms, the error is under contrite remediation through the nation-wide task of grinding down the old platform edges.

That would be the good news – the bad being that since the nasty and thankless fix is doubtless falling to the great cohort of under-utilized public workers who so burden the sickly French economy, correction of the SNCF’s buffoonish error will do nothing by way of new job creation to reduce the nation’s grinding rate of unemployment.

The whole fiasco raises the compelling question for performance quality evaluation and control – “How can you hope to improve, if you’re unable to tell whether you’re good or not?”

This very question is being reprised in Washington, where the American audit regulator, the Public Company Accounting Oversight Board, is grilling the auditors of large public companies over their obligations to assess the internal financial reporting controls of their audit clients.

As quoted on May 20 in a speech to Compliance Week 2014, PCAOB member Jay Hanson – while conceding that the audit firms have made progress in identifying and testing client controls — pressed a remaining issue: how well the auditors “assess whether the control operated at a level of precision that would detect a material misstatement…. Effectively, the question is ‘does the control work?’ That’s a tough question to answer.”

So framed, the question is more than “tough.” It is fundamentally unanswerable – presenting an existential problem and, unless revised, having potential for on-going regulatory mischief if enforced in those terms by the agency staff.

That’s because whether a control actually “works” or not can only be referable to the past, and cannot speak to future conditions that may well be different. That is, no matter how effectively fit for purpose any control may have appeared, over any length of time, any assertion about its future function is at best contingent: perhaps owing as much to luck as to design — simply not being designed for evolved future conditions — or perhaps not yet having incurred the systemic stresses that would defeat it.

Examples are both legion and unsettling:

  • The safety measures on the Titanic were thought to represent both the best of marine engineering and full compliance with all applicable regulations, right up to the iceberg encounter.
  • A recovering alcoholic or a dieter may be observably controlled, under disciplined compliance with the meeting schedule of AA or WeightWatchers – but the observation is always subject to a possible shock or temptation that would hurl him off the wagon, however long his ride.
  • The blithe users of the Value-At-Risk models, for the portfolios of collateralized sub-prime mortgage derivatives that fueled the financial spiral of 2007-2008, scorned the notion of dysfunctional controls – nowhere better displayed than by the feckless Chuck Prince of Citibank, who said in July 2007 that, “As long as the music is playing, you’ve got to get up and dance… We’re still dancing.”
  • Most recently, nothing in the intensity of the risk management oversight and reams of box-ticking at Bank of America proved satisfactory to prevent the capital requirement mis-calculation in April 2014 that inflicted a regulatory shortfall of $ 4 billion.

Hanson is in a position to continue his record of seeking improved thinking at the PCAOB — quite rightly calling out his own agency, for example, on the ambiguous and unhelpful nature of its definition of “audit failure.”

One challenge for Hanson and his PCAOB colleagues on the measurement of control effectiveness, then, would be the mis-leading temptation to rely on “input” measures to reach a conclusion on effectiveness:

  • To the contrary, claimed success in crime-fighting is not validated by the number of additional police officers deployed to the streets.
  • Nor is air travel safety appropriately measured by the number of passengers screened or pen-knives confiscated.
  • Neither will any number of auditor observations of past company performance support a conclusive determination that a given control system will be robust under future conditions.

So while Hanson credits the audit firms – “They’ve all made good progress in identifying the problem” — he goes too far with the chastisement that “closing the loop on it is something many firms are struggling with.”

Well they would struggle – because they’re not dealing with a “loop.” Instead it’s an endless road to an unknown future. Realistic re-calibration is in order of the extent to which the auditors can point the way.

And … there you go, for today’s sake:
DSCN7728
[Watching (us against) you …]

OSSTMM Perimeter ..?

Just a note; I was struck by the OSSTMM approach towards the structure of infrastructure. [Disclaimer] though I am quite a fan of the OSSTMM approach (and do want to write up tons of whitepapers linking it with my ideas for moving forward in the InfoSec field without having to revert to #ditchcyber bla), I feel there's a snag in it:
The analysis part still seems to take a perimetered, albeit onion-layered, approach. The Defense in Breadth is there, for sure, but the main (sic) focus is still on the primary axis of the access path(s). Does this still work with the clouds out there and all, focused as they are on principled agnosticism about where your data and 'systems' might hang out?

OK yes now I will go study the OSSTMM materials in depth to see whether this is just my impression and I’m proven horribly wrong, or …

So I'll leave you with:
DSCN3689
[Hardly a street, next to Yonge]

CIAAEE+P

Privacy came to the fore last week, at a very interesting ISSA NL event.
Where we discussed the prevalent Confidentiality-Integrity-Availability approach (where impacts mandatorily regard the data subject(s), not you the processor, as the data subjects are legally the owners of their info …!) and whether those three actually cover privacy aspects sufficiently.

Well, we did conclude that for now, CIA is 'still' the common denominator. But … hey, Auditability might be added, as that's a sort-of requirement throughout privacy protection. And Effectiveness and Efficiency – of the data handling! – have a place as well, being representative of proportionality and the legal-grounds-for-the-privacy-sensitive-data-handling-in-the-first-place (i.e., real purpose / purpose limitation!); if you collect more than what is very, very strictly necessary, you're culpably inefficient in a hard legal sense, and at least part of your data handling is not effective.
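To make the proportionality angle a bit more tangible: below is a minimal, purely illustrative sketch (in Python, with made-up names and thresholds – not from any official framework) of how one might score a single data-handling activity against the Auditability / Efficiency / Effectiveness additions, leaving the classic C, I and A to the usual information-security assessment.

```python
# Minimal, hypothetical sketch: scoring a data-handling activity against the
# Auditability + Effectiveness + Efficiency additions discussed above.
# All names and criteria are illustrative, not an established framework.

from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    name: str
    purpose: str                 # the stated, legal purpose (purpose limitation)
    fields_collected: set[str]   # what is actually gathered
    fields_needed: set[str]      # what the purpose strictly requires
    audit_trail: bool            # can the processing be reconstructed afterwards?

def assess(activity: ProcessingActivity) -> dict[str, bool]:
    """Per-criterion verdict; C, I and A are assumed to be assessed
    elsewhere by the usual information-security means."""
    excess = activity.fields_collected - activity.fields_needed
    return {
        "auditability": activity.audit_trail,
        # Efficiency here = proportionality: collect no more than strictly needed.
        "efficiency": not excess,
        # Effectiveness here = the collected data actually serves the stated purpose.
        "effectiveness": activity.fields_needed <= activity.fields_collected,
    }

if __name__ == "__main__":
    newsletter = ProcessingActivity(
        name="newsletter sign-up",
        purpose="send the monthly newsletter",
        fields_collected={"email", "date_of_birth"},   # birth date is excess
        fields_needed={"email"},
        audit_trail=True,
    )
    print(assess(newsletter))
    # {'auditability': True, 'efficiency': False, 'effectiveness': True}
```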

But should we add Privacy as yet another factor ..? Does it have value in itself? Initially, I thought so, as the common CIA will somewhere along the line always have lost its connection to information value, e.g., through the Bow Tie effect and other deviations (lagging) from modern developments.

Which I’ll discuss below. But now, first, an intermission picture:
OLYMPUS DIGITAL CAMERA
[Yup, Whistler]

So, as said, Privacy may be covered by CIA. But, … with specific deviations of interpretation.

Aweariness.

Two weeks ago, at this successful! symposium, I noted the developments on the Awareness side of our IRM business. Multiple speakers were onto the subject without hesitation, moving beyond the mere annual poster campaign for awareness and into the daily-normal subconscious behavioral-change work that was for so long lacking. From ISO 2700x as well.

Which of course is a very, very good thing. Before the 80% of hard work in IRM as such (after discounting the first 80% in hardcore information security), the 80-100% of effort should go into this socio-/psycho-/behavioral fluffy stuff that yields so many benefits and returns. Though we ‘still’ may not be good at it, at least there is development, and leading examples. Thanks, speakers, for that; and for now:
DSCN1807
[Your guess. No, not Paris, Reims; not even Strasbourg and that’s a hint]

The enemy from below

I know, I’ve been guilty of it too. Thinking, tinkering, and musing about all sorts of abstract risk management schemes, how they’re a giant mess, mostly, and how they could be improved. Here and there, even considering a middle-out improvement direction. But mostly, ignoring the very fact that in the end, information risk management hinges on the vastly complex technological infrastructure. Where the buck stops; threat-, vulnerability- and protection-wise.

A major (yes, I wrote major) part of that low-level (Is it? To whom? It's very-highly intellectual, too!) technological complexity is in the trust infrastructure in there. Which hinges on two concepts: crypto and certificates. In fact, they're one and the same, but you already reacted to that.

For crypto, I'll not write too much here about the wrongs in the implementation. There are others doing that, maybe (sic) even better than I can.
For certificates, which hold the crypto keys, the matter is different. Currently, I know of only one club that's actually on top of things – as they may be for you as well. Yes, even today, you may think the problem is minor. Since you don't know…
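For what 'getting your act together' might start with at the most basic level – knowing which certificates you rely on and when they expire – here's a minimal sketch using Python's standard ssl library; the host name is only an example, and real certificate and key management obviously goes far beyond this.

```python
# Minimal sketch of basic certificate hygiene: fetch a server's TLS certificate
# and report its issuer and expiry. The host below is only an example; real
# certificate/key management goes far beyond a one-off check like this.

import socket
import ssl
from datetime import datetime, timezone

def inspect_certificate(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()          # uses the system trust store
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()                # parsed dict of the leaf certificate
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{host}: issued by {issuer.get('organizationName', '?')}, "
          f"expires {expires:%Y-%m-%d} ({days_left} days left)")

if __name__ == "__main__":
    inspect_certificate("example.org")
```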

Really. Get your act together … This is just one example of how deep, deep down in ‘the’ infrastructure, whether yours or ‘out there’, there’s much work to be done, vastly too much complexity to get a real intelligent grip on. How will we manage ..?

And, of course:
002_2 (13)
[Showboating tech, London]

(Ahead of time, because we can)

This will be published in the July ISSA Journal. Just put it down here, already, to be able to link to it. ;-]
And, first, a picture:
DSCN3152
[Toronno, ON]

After which (Dutch version linked here):

You have the Bystander Bug

One of the major pluses of open source software is that anyone, even you, could check the source code; so, via logic, the chance that a somewhat hidden bug will be found in a heartbeat rises to about one when enough people look at the source – and now they can.
But we were recently surprised by just such a bug, with global implications. Sure enough, it turned out that actually no one keeps tabs on what open source software is used (in) where, by whom.

So all the global software behemoths turned out to rely on pieces of open source software – and that software, maintained by literally a handful of volunteers on a budget of less than a couple of seconds' worth of the major software vendors' CEOs' pay, actually had not been scrutinized to the level one would require even on a risk basis. Certainly not given the widespread use that makes the risk to society so high. Did we tacitly expect all the software vendors of the world to have inspected the open source code more carefully before deploying it in the global infrastructure ..? How would one propose to organize that within those big, huge for-profit companies? And what when (not if) the global infrastructure isn't 'compiled' into one whole but built from so many somewhat-black boxes? Virtualization and 'cloud' abstract this picture even further. Increasing the risks…
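As an aside on the 'keeping tabs' point: even a crude inventory beats none. Below is a small, hypothetical sketch that lists the open source Python packages installed in the current environment and flags any that appear on a toy list of affected versions – the VULNERABLE mapping is entirely made up for illustration, not a real advisory feed, and a real software bill of materials would of course cover far more than one language's packages.

```python
# Crude sketch of 'keeping tabs': list the open source Python packages installed
# in the current environment and flag any that match a known-vulnerable entry.
# The VULNERABLE mapping below is purely illustrative, not a real advisory feed.

from importlib.metadata import distributions

VULNERABLE = {
    # package name (lower case) -> affected versions (hypothetical examples)
    "examplelib": {"1.0.1", "1.0.2"},
}

def inventory() -> dict[str, str]:
    """Return {package_name: version} for everything installed here."""
    return {dist.metadata["Name"].lower(): dist.version for dist in distributions()}

def flag_vulnerable(installed: dict[str, str]) -> list[str]:
    return [
        f"{name}=={version}"
        for name, version in sorted(installed.items())
        if version in VULNERABLE.get(name, set())
    ]

if __name__ == "__main__":
    packages = inventory()
    print(f"{len(packages)} packages installed")
    for hit in flag_vulnerable(packages) or ["no flagged versions (per this toy list)"]:
        print(hit)
```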

But more worryingly, this also means that 'we all' suffer from the Bystander Effect. Someone's in the water, unable to get out, and we all stand by without helping because our psychology suggests we follow the herd. Yes, there are the stories of the Ones who beat this and jump in to the rescue – but there are also stories where such heroes don't turn up. And, apparently, in the open source software world, there are too few volunteers, on budgets far below a shoestring, who jump in and do the hard work of detailed code inspections. Which means there is also a great number of us, potentially about all of us, who look the other way, have made ourselves unaware, and just want to do our 9-to-5 job anywhere in the industry. In that way, we're suffering from the bystander effect, aren't we ..?

And, even worse, so far we seem to have escaped the worst results of this in, e.g., voting machines. Here, how close was the call where everyone just accepted the machine programming and expected that, because of its open source nature (if …), "someone will have looked at it, right …!?" Though of course, on this subject, some zealots (?) did indeed do some code checking in some cases, and the troubles with secrecy of votes overshadowed the potential (!) tallying biases programmed in, knowingly or not. But still… when 'machines' are ever more relied upon, moving from such simple things as voting machines to a combination of human-independent big data analysis/action automata with software-defined-everything and the Internet of Things, where's the scrutiny?

Will a new drive for increased security turn out to be too little, too narrowly focused, and for too short a time, as many if not all after-the-fact corrections have been? If we leave it to 'others' again, with their own long-term interests probably more at heart than our even longer-term societal interests, we still suffer from the bystander effect and we may be guilty by inaction.

But then again, how do we get all this stuff organized …? Your suggestions, please.

[Edited to add I: The above is the original text; improved and toned down a bit when published officially in the ISSA Journal]
[Edited to add II: This link to an article on the infra itself]

[Edited to add III: The final version in PDF, here.]

One more on how the @nbacc accountants' 'lab' will work in NL

Why am I somewhat convinced that this article is what the results of the accountants' 'lab' proposed in the Netherlands will look like? Even though I like the idea! Just not the direction the proposed content of research is taking. Is (this) a lab for deep, methodologically cast-in-concrete but utterly boring science of the history-writing kind, with drab results that don't apply in the chaos of the (business) world out there, or is (this) a lab about doing experiments, exploring the unexplored, with results scored on qualitative scales like Innovation and Beauty …?

With a picture because you held out so long reading the above:
DSCN4777
[Yup, the axis going South]

Accountants' discussion: Start and end

When @nbacc goes head-to-head with @tuacc on May 28, this post may be of some help, and the below may still turn out to be the result…
Imago accountants
[Parool, Saturday 24 May 2014]

Because let's be honest: a lab for deep professional technique will consistently (if) run up against: "According to theory, nothing here can work and we know exactly why. In practice everything works, but nobody knows how. We mix theory and practice." And also: "Don't let those who say something is impossible stand in the way of those who are already doing it." Good luck …

[And then I'm told again that I'm 'too' cynical. Cynicism is a last attempt to achieve progress – where most of those involved (?) remain absent in apathy, mentally or physically. Apathy: the attitude of those who have lost all hope. Right, thát's something to be glad about…]

Column on Open Source code Bystanders

Well, the column is out there now… It will be published in the July 2014 ISSA Journal, but here's a preview, in Dutch.

And, of course:
DSCN6225
[S’bourg, or Brussels – always mix them up and end up in the wrong place for a meeting]
