On the integration of IRM into regular business management just the way HR is (was?).
[Some future blog will be about the Three Lines of (NO!) Defense. Now, about a bit more practical stuff.]

It struck me that information security, lately expanded into information risk management as a (peer) part of operational risk management, which in turn is part of enterprise risk management (sometimes fuzzied into ‘COSO ERM’ babble), still struggles to be understood not as a separate function that can operate apart from the rest of the business (‘their infosec corner to take care of their things’) but as an integral part of everyday management (and operations), just like e.g., HR.

Yes, HR is also still a separate function – for the parts that can be handled separately from business as usual in other departments. Payrolls can be processed (almost) without knowledge of any primary business processes, or secondary processes for that matter. Apart, of course, from joiners/leavers, etc., but that’s detail.
But HR is also very much integrated, the way it has always been. Optimising (sic; not maximising) the performance of the resources that are human (are they; are they considered such ..?) has, since the inception of the idea of organizations, always been with management. Through target setting, through performance evaluations, through facilitative management. Not through micromanagement, as you rightfully point out; that has no place in any organization.
All the core, direct HR tasks are performed directly by (‘line’) managers. The less separately recognised as such, the better. Just manage!

How come, then, that IRM doesn’t take the same approach ..? The major part of simple information risk management (as is the major part of all risk management!) can and should be performed by those actually dealing with the information: employees and their management. How is it that managers generally understand that part (*) of their role consists of various HR chores, but information asset protection (and information asset performance optimisation ..!) doesn’t, yet?
(*) Depending on how your organization works; when dealing with knowledge workers, the facilitative part of HR may form the core of managerial work altogether.

Yes, well, indeed managers may on average be insufficiently educated to deal with information risk management within their normal duties. But ‘we’ should solve that. And almost no manager whatsoever was trained to be a manager in the first place! No, certainly not the business school types either. They learn a few bits and pieces of administration, which is something very different. The military (cadres), they learn something (little, simple things, but apparently sufficient to work with many subordinates in life-threatening situations – don’t insult them by assuming your organization can even compare to that kind of managerial challenge). But in general: no. That’s why military cadres find it difficult to settle back into management positions in civilian society: the level of incompetence (they have to work with) is staggering.

And then, our common managers may not have been provided with the appropriate methodologies and tools to do that. But ‘we’ should provide those. Work in progress, but the distance to cover is enormous.

And here’s a picture for your delight: Madrid, perspectives: where you stand, where you look.

So, by education and methodology/tool provision, we can indeed bring information risk management back into the main line of management.
But so much work to be done! And rest assured that for decades to come, IRM will have its place as a (staff) department. HR hasn’t gone away quite yet, has it ..?

Comments appreciated.

Interlude: A Mistake made Policy

How a mistake made it to governmental policy…
[Though the above Toronto skyline’s just for your viewing pleasure, unrelated to this blog]

There has been quite some discussion about a thing called the Plan-Do-Check-Act cycle. Rightly so: ever since Deming’s groundbreaking work on quality control WITHIN small shop-floor-level work groups, the understanding of the practical trade of small-group (self!)management has flourished.
But alas! So many sorcerer’s apprentices have run around like lemmings, and have followed the ill-guided amongst them over the cliff’s edge. They have mixed up Deming’s quality improvement cycle with the generic process control cycle, later applied to administrative management ..!

The disastrous consequences we still have to work with. The demise of management as a craft, the attempts (failed by default from the start) to ‘scientifise’ management, the blindness to the utter counterproductivity: all can be traced to this error of application, born of an error of understanding.

Know your history: The control cycle has its origin in (chemical plant) process control, or even in generic control as elaborated in applied cybernetic systems methodology. Inputs, (mathematical!) transformation function, outputs, and a (mathematical!) first derivative control (signal) function; feed-forwards, feed-backs, input- and output-based signals, multiple levels of these control cycles, it should all be familiar but isn’t, on a pervasive scale.
Which is a pity, because it leads to dumb, stupid design of control cycles, and to the adoption of Deming’s quality (improvement) cycle as the name-giver of the resulting management control efforts. Which in turn has led to the stupidest efforts to fit management control actions and controls into the Plan phase (feasible; most ‘control’-related work stays there, luckily, given the dumb and dumber practitioners around), the Do phase (awkward! managers don’t Do anything at all in Deming’s Do sense!), the Check phase (auditors’ delight, but NOT what Deming intended), and the Act phase (not understood at all; in the mix-up it gets swept under the Plan carpet!).
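The generic control cycle described above can be put in a few lines of code; a minimal sketch, with a made-up linear process and an arbitrary gain, not any particular plant’s control design:

```python
# Minimal sketch of the generic (cybernetic) control cycle: an input, a
# transformation function, an output, and an output-based feedback signal
# steering the input towards a setpoint. Process and gain are illustrative.

def transform(u):
    """The 'plant': a simple (here linear) transformation of the input."""
    return 2.0 * u

def control_loop(setpoint, steps=50, gain=0.2):
    u = 0.0                      # control input
    for _ in range(steps):
        y = transform(u)         # measured output
        error = setpoint - y     # feedback signal
        u += gain * error        # corrective control action
    return transform(u)

print(round(control_loop(10.0), 3))  # settles at the setpoint
```

The same skeleton extends to feed-forward signals, input-based signals and stacked control cycles; the point is that the controller is a function over measurements, not a management ritual.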

But so many wrongs don’t make a right.

Putting the two models together into this atrocious mix leads to heaps of management babble and the destruction of sound management practices at the hands of culpable consultants (external or internal). The utter waste of money, the utter demise of anything actually productive!

And now the mistke [not intended but I’ll leave it there] reaches its peak: PDCA will be required by government directive as a design principle for (management) control! [In the Netherlands, always preaching against someone else’s sins]
What a failure of administration: To unknowingly admit so publicly one’s incapacity at the scale of an outright sackable offence (by the many that go along with this, too!).
Now, can we all please move forward with the consequences of this pervasive sackability ..?

The Infosec Stack (Part 1a; don’t attack yourself)

Some took my Part 1 as offensive. It may have been interpreted as such – by those that have a closed mind, out of fear of being found in the Emperor’s New Clothes, or because of the transformation of that fear into some sort of invincibility syndrome.

But those that have their minds open enough could see the other side and sharpen their own thought and understanding, or see my piece(let) as a proper antithesis, i.e., not as a full 180° head-on collision threat but as another angle on an improvement-worthy wicked problem.
Don’t think backward. Contribute!

And here’s a picture for your pleasure.

The InfoSec Stack (Part 1; house of cards)

[This is a very first part of an article / white paper under development, just to float the idea and get response …(?)]

When asked what parts, or areas, information security would have to cover, people often respond with something along the lines of ‘People, Process, Technology’ as a failed alliteration.

The Process and Technology parts are then elaborated to the point/gray area where they completely stifle the business, with a panicked compliance craze driven by a serious mental condition: the inability to accept that one cannot perfectly control everything. Which is a fact. Read, and re-read, Bruce Schneier’s Liars and Outliers over and over again till you get it.

Yes, for Technology, we do get that it’s an arms race against the most skilled and sophisticated attackers (more so than us). But somehow we don’t accept it, and still think we can do all security up front, ex ante, preventatively. The budgets (grossly) wasted on this side are direly needed at the other side(s): detective and corrective security. Or rather, the money would be much, much, much better spent there. Si vis pacem, para bellum. We’ve only just started to learn how to do that properly.
Because, for the most part, we’ve focused on Process. To death… figuratively, almost literally. We’ve designed way too many top-down analytical procedures that result in infeasible requirements and lots of babble at the shop-floor level, where the front line is. Don’t even start me on the ‘three lines of defense’, which are a lot of things but not defense (i.e., they simply do not sit between any threat and any vulnerability!); don’t start me on ISO27kx compliance either. Oh, compliance: the killer of anything alive, the H-bomb of actual (sic) productivity, revenue growth and cost savings.

How did we get there ..? Simple, by not understanding that Process and Technology are just two parts of the formal structure of infosec. And as in the picture, we have almost forgotten that information security is a stacked thing.
Physical security sits at the bottom, at the base, often much underestimated in its importance.
Next comes technical security, ranging from sound architecture (not the usual goalless kind, but the kind focused on lean and mean design including e.g., privacy by design/default), all the way via OS security, middleware security and application security to communications security. The parts that you don’t ‘see’, I mean: the hardware, the software, the parameter settings.
Then, there’s logical security. The kind you know, that sounds very abstract, almost philosophical, but all we know about it is that it’s about password strength and the failures (sic) of RBAC and such.
One step higher up, and again failing by one order of magnitude more!, is organizational security with all its processes and procedures. How blithely stupid the rules are, and how completely and utterly they fail to contribute to security and to fix lower-level shortcomings, as they should.
At the top of the pyramid sits Policy, to ward off the evil forces of the babblefolk. Much is to be said about this (level), which feeds the desires of those that have fled into narcissist self-inflation, fleeing forward from a lack of self-confidence and a fear of being called out for emptiness of understanding. Much will be said; elsewhere.

And, as in the picture: to understand it, one should understand that the lower the security controls sit, the harder they are; tougher, more resistant against attacks of any nature. The higher up, the less is contributed anyway. Babble, babble; that is what most of it is. Ever seen a policy statement line that actually protects against even just one Advanced Persistent Threat ..?
As is also in the picture, lower-level controls may very well cover for a control failure (lack of suitable control(s), or failure of control(s)) at the same or a higher level, but a higher-level control can NOT compensate for a failure (of presence or of quality) of lower-level controls. Yet this is what the great many in the field think is the common, all too easy solution ..! Compensate up ..! Which will fail ..! At least in a house of cards, the upper cards are similar in quality to the lower ones. But the lower ones carry the higher ones, and if a lower one tumbles, the higher ones come down. Would you suggest that if a lower-level card were missing, the higher cards in the house of cards would keep the whole thing standing …!?!? If so, you would really be eligible for a guided living program.
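The compensation direction can be put in a toy model; a sketch of the house-of-cards reading, with the layer names taken from the pyramid above and hypothetical pass/fail values:

```python
# Toy model of the stack: once a lower-level control fails, the layers
# above it fall with it; higher-level controls cannot hold up a missing
# lower card. Layer order follows the pyramid, bottom to top.

LAYERS = ["physical", "technical", "logical", "organizational", "policy"]

def stack_stands(status):
    """status maps layer -> True (control holds) / False (failed).
    Returns the layers still effectively standing."""
    standing = []
    for layer in LAYERS:
        if not status[layer]:
            break                # everything above this point collapses
        standing.append(layer)
    return standing

status = {"physical": True, "technical": True, "logical": False,
          "organizational": True, "policy": True}
print(stack_stands(status))      # → ['physical', 'technical']
```

However well-written the organizational procedures and the policy are, with logical security failed they contribute nothing in this model; compensating upward is a no-op.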

And the picture is far from complete. Remember that above, we started with People? Where are they now, then? Where are they in your information security posture? Ah, you have awareness campaigns. Right. Haha. Oh, thought you were joking.
The ‘joke’ of the thing is that People are everywhere in the picture. Not only within the triangle, but also, vastly more so, outside it. If you implement any tiny part of the triangle in a way that hinders coworkers in achieving their objectives, they will obliterate your controls. Information security is Overhead!
And People are threats, yes indeed, but they’re also Physical, Technical, Logical (their logic, not yours!) and Organizational assets, vulnerable at the same time. Vulnerable asset and controls in ‘one’, of a wide variety.
Well, you can understand that, as we have hardly dealt with People since the inception of information security as an IT thing (and as an IT thing, it has grown within the formal triangle), we hardly understand a thing of how that people part works. We do in a sense; we do have the psychological sciences (admit it, working in another Kuhn/Lakatos paradigm, but undeniably (partially…; ed.) science). But information security practitioners don’t understand humanity. Infosec practitioners are engineers, seeing only the next problem in their hands (hardly even one ahead) and trying to fix it as quickly and pervasively as possible, then moving on. The humanities side of life is beyond them. As far as we (all humanity) know, human failings are of all times, recognised throughout the centuries, and still unfixed. We still dream of being (more) civilised than our ancestors; wrongly so! But we have to deal with those sides of People in particular now, for information security.

I’ll just stop here. You can see that so much, much more can be written, and should in my opinion.
Do I state that one wouldn’t need all the formal controls? No. But I do mean that a lot of what we have today is rubbish. [Disclaimer: This is not a statement that my current employer is or is not better or worse than average, but they certainly hinted some catharsis and solution content of this column and white paper (to come).] Incomplete, inconsistent, failing wholesale as we speak. We need much better, meaner (and if we achieve that, leaner) infosec controls. And much more of them, outside of the traditional boundaries.
So there you have it; the inception of an idea that will be discussed at great length as you know I can, in a white paper to be published somewhere, sometime in the coming months. Please feel free to comment and add already. I’ll keep you posted!

Trust statistics

“Never trust any statistic that you haven’t forged yourself”, as the saying commonly attributed to Winston Churchill goes.

With today’s tools, computing capabilities and speed, why wouldn’t accountants draft their own annual accounts statement of their clients’ accounts ..? These could then be published alongside those of the clients themselves. The differences would be transparent to investors, and when (not if, I guess) the clients can explain the differences to their stakeholders, there’s no pain in anything.

When accountants complain that it would be impossible to re-do the annual accounts, they only declare their own incompetence. If they truly understand what’s on the books, really understand the underlying processes in every detail required (and note, this would be a very, very hard requirement already!), they would have no problem at all recreating the sum total of it all. Or they end up with apparently differing interpretations of the processes, and may explain those to stakeholders.

So, either the accountants go do this or they declare to not know nearly well enough what they’re signing off. There’s no in between.

Predictive ..?

Ah, to add to the previous column: The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t by Nate Silver seems interesting. Though I fear that the conclusion will again be: “It’s not hopeless [it almost is, ed.] but if you work really, really hard, you may succeed in finding nuggets — whether or not they’re worth all the effort [I think: hardly, ed.].”

Predictive Mediocrity

[Just a pretty picture of Valencia, as a Calatrava … follower but not groupie]

With all the talk about Predictive Analysis lately, it is time we separate, once again, the hype-humbug from the real(ity). Which is that the Predictive part is only partially true, and the Analysis part by definition is the contrary of predictive.

1. Analysis can only look at what’s here and now, or behind us. One can only analyse history. And, as Hegel pointed out, “Only at dusk does Minerva’s owl take flight”: only when the sun sets on some development can we start (sic) to assess its importance. Analysis is devoid of Meaning. It’s mechanical; it may abstract, but not create Knowledge. Information at best, but usually just more metadata about metadata. Again, interpretation, that error-prone and still barely understood brain process, is needed to make analysis worthwhile. And the result looks back, not forward.

2. Predictive… If we know anything of the future, it is that it’s uncertainty defined. One can predict some trends to continue — but may be disappointed.
Anything near the precision and the precise predictions that Predictive Analysis is pictured to deliver is certain not to come true.
This, because the dragnet of the analysis can never be complete: theoretically, anything distantly approaching completeness would require a model of the universe, which would then have to include the model itself. And because of practical considerations: analysts will have to work with the hammer they’re handed, and view any world as nails only.
Also, trends may continue but ‘never’ (unless by extreme exception) linearly so, and non-linear modeling is still in its infancy (and may prove impossible by lack of suitable data). Hence, you’ll miss the mark by just extending lines.
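The extending-lines failure is easy to sketch: fit a straight line to the early, apparently tame part of a non-linear trend and extrapolate. The data and the forecast horizon below are made up for illustration:

```python
# Sketch: a linear fit on the early part of a (mildly) exponential trend,
# then extended far forward. The line badly underestimates the future.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = list(range(10))
ys = [1.1 ** x for x in xs]          # 10% growth per period
a, b = linear_fit(xs, ys)

predicted_at_50 = a * 50 + b         # just extending the line
actual_at_50 = 1.1 ** 50             # what the trend actually does
print(predicted_at_50 < actual_at_50 / 10)  # off by more than 10x: True
```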
Oh, it’s not about the detail, you say? Why don’t you stick to qualitative predictions, then?

3. Anything that can be analysed today, is tomorrow’s mediocrity as it was/is already known. The spurious results may be interesting, but we know whether they’re spurious or early indicators only afterwards, too late.

4. “Sh.t happens!” Why can’t we accept that in the medium-sized world we live in, quantum jumps do happen as well, with similar levels of chance calculus? They may not be binary quantum events, but appear to be. And make life both dangerous and fun! No surprises is boring par excellence. And will fail.

5. Go ahead and hype the mundane. But don’t oversell. You’re not a magician, or the magician is a fraud …

The Size of Your … Data

It’s not the size of your data, it’s what you do with it.
Or so claim men that are insecure about theirs. [Disclaimer: I have big hands. ‘nuff said.]

There appears to be confusion about what Big Data may achieve. There are the Marketing anecdotes (sic) about spurious relations, the long-standing anecdotes of credit card companies’ fraud detection, and there’s talk of ‘smart data’ use in e.g. organizations’ process mining to establish internal control quality. One party bashing the others over ‘abuse’ of terms.

What happen? [for those that know their History of Memes]

1. We have more data available than ever before. We have had various generations of data analysis tools already, every new generation introducing a. a hugely increased capability to deal with the data set sizes of its age, b. ease of use through prettier interfaces, c. fewer requirements on ex-ante correlation hypothesis definitions. Now, we seem to have come to a phase where we have all the data we might possibly want (not), with tools that allow any fantastic result to appear out of thin air (not).

2. Throwing humongous amounts of (usually, marketing- or generic socmed) data at an analysis tool may automatically deliver correlations, but these come in various sorts and sizes:

  • The usual suspects: loyal brand/product followers will probably be hard to get into lift-shift-retention mode. These, if (not when) valid, still are not really interesting, because they would (should!) have been known already from ‘traditional’ research;
  • The false positives: spurious correlations, co-variance, etc., induced by data noise. Without human analysis, many wrongs can be assumed. All too often, correlation is taken (emotionally at least) to be close to or much the same as causation; over and over again. How dumb can an analyst be, not to be (sufficiently) aware of their own psychological biases! Tons of them are around and impact the work, and the more one is convinced not to be (psychologically) biased, the more one will be, and the worse the impact will be. Let alone that systemic biases can be found all too often;
  • The true positives. The hidden gems.
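The false-positive bullet is easy to demonstrate: feed pure noise to a naive correlation search and ‘findings’ appear anyway. A minimal sketch, with made-up sizes and a rough significance threshold:

```python
# Sketch: purely random columns, all pairs tested against a naive
# significance threshold. With 40 columns (780 pairs) at roughly p < .05,
# spurious 'discoveries' are all but guaranteed.
import random

random.seed(42)

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

cols = [[random.gauss(0, 1) for _ in range(30)] for _ in range(40)]
hits = sum(1 for i in range(40) for j in range(i + 1, 40)
           if abs(corr(cols[i], cols[j])) > 0.36)  # ~critical r for n=30
print(hits > 0)  # correlations 'found' in pure noise: True
```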

We don’t have any data on the amount of spurious results vis-à-vis useful results (the true positives above) to know how well (effectively, efficiently) we do with both automated correlation discovery and human analysis, which would be tell-tale in the world of analyse-everything. But what would you expect from this overwhelmingly inductive approach?

3. Yes, until now, we seem to have done quite well in history with deductive approaches, from the pre-Socratics until the recent discovery of the Higgs boson… in all sciences, including the classic social/sociological sciences of the Greek and Roman authors (yes, the humanities are deductive sociology) and the by-definition deep thinking of the philosophers.
The ‘scientists’ who relied on inductive approaches … we don’t even know their names (anymore) because their ‘theories’ were all refuted so completely. Yet, the above data bucket approach is no more than just pure and blind induction.

4. Ah, but then you say, ‘We aren’t that stupid; we do take care to select the right data, filter and massage it until we know it may deliver something useful.’ Well, thank you for dumbing down your data; out go the false but also the true positives ..! And the other results you should have had long ago already, via ‘traditional’ approaches. No need to call what you do now Big Data Analysis. Either take it wholesale, or leave it!

5. Taking it wholesale will take tons of human analysis and control (over the results!); the Big Data element will dwindle to negligible proportions if you do this right. Big Data will be just a small start to a number of process steps that, if done right, will lean towards refinement through deduction much more than the induction-only that Big Data is trumpeted to be. This can be seen in e.g. some TLA having collected the world’s communications and Internet data: there are so many dots and so many dots connected, that the significant connected dots are missed time and time again. It appears infeasible to separate the false positives from the true positives, or we have a Sacrifice Coventry situation. So, repeated: no need to call all this Big Data.

6. And then there’s the ‘smart data’ approach of not even using too much data, but using what’s available, because there aren’t yottabytes out there. I mean, even the databases of business transactions in the biggest global companies don’t hold what we’d call Big Data. But there’s enough (internal) transaction data to establish, through automated analysis, how the data flows through the organization, which is then turned into ‘process flow apparent’ schemes. Handy, but what then …? And there’s no need at all to call this stuff Big Data, either.

So, we conclude that ‘Big Data’ is just a tool, and the world is still packed with fools. Can we flip the tools back again to easily test hypotheses? Then we may even allow some inductive automated correlation searches, to hint at possible hidden causations that may be refined and tested before they can be useful.
Or we’ll remain stuck in the ‘My Method is More What Big Data Is Than Yours’ game.

So, I can confidently say: Size does matter, and what you do with it.

Next blog will be about how ‘predictive’ ‘analysis’ isn’t on both counts.