First Predictions 2014

[Unfamiliarly, from the West]

What’s up, 2014?

The end is nigh, of 2013. Can we predict what the big business / infosec hype of 2014 will be ..?
No. There’s no predicting the unknown. The somewhat-known will not stand out enough. The known… boring!

The knowns are the fall of Fubbuck and maybe Tvitter (both to be replaced by the WeChats and hopefully Tumblrs of this world; or will Vine and Snapchat take over?), cloud, BYOD/flexwork, etc.
SMAC: Social, Mobile, Analytics, Cloud. ViNT by Sogeti adds a T for Things.

The Things part, I’m unsure about. Yes, for the long run (i.e., 2-5 years) we will definitely see an explosion. But next year already? It’s the infancy of a sine wave, taking off slowly.

So, my prediction is that the thing we’ll all be talking about as the next thing in 2014 will be … People versus Algorithms.
This was pointed out by some #Coney guy(s), with some lead links elsewhere. But algorithms will not conquer the world in one swallow. Rather, we will see an increase in the use of algorithms for partial (at most!) data analytics, supporting TLA-style use of ‘big’ data in both public and private environments – but also a major development of the People component of that analysis: a wave of specialized functions, methodologies and tools for the human pattern-detection and interpretation parts of analytics.

Plus, then, a clearer picture of how people and algorithms fit together, as functions, profession(s), etc., with spin-offs everywhere, e.g., the development of a better understanding of how the brain works, how humans work (produce / operate), how to describe the purpose of life. On our way to the Singularity, and Beyond!

In the cave, not hunting

60% of any group of people are conservative. They just want a job, any job, and want to do simple, predictable work. They want to stay in the cave, because it’s dangerous out there; you never know where and when a bear or sabre-toothed tiger may attack.

40% of people understand that one can be safe by staying in the cave, but one will starve. Food is not in the cave, it’s out there. So, for self-preservation (strategic risk management) one has to go out, well equipped and sober, to handle the environment (tactical risk management: listen and look carefully, and Be Prepared) and do some foraging (including operational risk management: don’t be stupid and taste rotten stuff, etc.).

Nowadays, staying in the cave will not make you safe. Bears may already hide in the back of the cave: no matter how still you sit, headcount cuts are coming and may hit you regardless of your conformity. Or your better-adapted colleague-caveman may prey on you for the little comfort that (performance-contribution-)skinny you may bring to hold out another day. If you still had flesh, it would be muscle, and you would (could) be the one holding out. Either all in the cave die, or, through cannibalism, at least a leaner and meaner gathering of people (i.e., organization) may survive.
And the world outside changes faster than ever, so the cave entrance is besieged (or walled off!) by enemies taking your turf and starving or enslaving you.

[Toronto, but you knew that]

So, join the 40% and be even better at exploring; be better at being safe without a cave. Be better at risk management. Go out, dare; not by running around stupidly, but with due care. Enjoy the ever (faster) changing view!

On assumed guilt, innocence to be proven

An interesting piece (in Dutch) on documentation and the requirement to prove innocence under totalitarian presumed guilt: http://www.accountant.nl/Accountant/Opinie/Meningen/Naar+de+bliksem.aspx
How true, the story. Now go apply the gist to your auditees as well! They‘re the ones under the pressure that accountants et al. have only started to feel in the past couple of years..!

And here’s another picture for your viewing pleasure:

The InfoSec stack (Part 1b; implement or not to be)

Some have questioned why I put the Compensate part downward on the right side, instead of upward, as is usually considered.
Well, this may be obvious to those in the know, but: compensating for a control weakness at a higher level simply does not work..!

This, because of some very basic principles (see the sketch after this list):
1. Any ‘control’ at some level will have to be implemented at at least one lower level, or it does not exist at all except as some ink on paper (OK, ‘ink’ on ‘paper’).
2. ‘Compensating’ for a deficient control at any level by a control at a higher level still requires an implementation at the level of the deficiency or lower – or nothing gets implemented at all.
3. The lower the implementation level, the stronger the control; the higher, the weaker. ‘Compensating’ at a higher level therefore requires more controls there to be about as strong, and hence more implementations at the same or lower levels anyway; otherwise the original strength will not be achieved.
4. A ‘compensating’ control at a higher level either doesn’t fit the design at that level, or it would have been there already; the deficiency is not ‘compensated’ by pointing at its rationale. Adding to the design only shows that the design was deficient, or is over-complete now – the resulting implementation will be flawed by design.
5. Occam-like efficiency thus requires implementation of compensating controls at the same or lower levels.
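
To make the point concrete, here is a minimal sketch in Python. The names, the level ordering and the example controls are my own illustrative assumptions, not taken from the picture; the sketch only encodes the rule that a compensating control counts only if it is actually implemented at the level of the deficiency or lower.

```python
# Minimal sketch (hypothetical names/levels, not from the original picture):
# a compensating control only counts if it is actually implemented at the
# level of the deficiency or lower.

from dataclasses import dataclass

# Stack levels, bottom-up; lower index = lower level = stronger implementation.
LEVELS = ["physical", "technical", "logical", "organizational", "policy"]

@dataclass
class Control:
    name: str
    declared_level: str     # where the control is written down
    implemented_level: str  # where it actually does something

def compensates(deficiency_level: str, comp: Control) -> bool:
    """True only if the compensating control is implemented at the
    deficiency's level or below; paper-only 'compensation' higher up fails."""
    return LEVELS.index(comp.implemented_level) <= LEVELS.index(deficiency_level)

# Hypothetical example: a logical-level gap (say, weak access control),
# 'compensated' by a policy statement alone vs. by an extra technical measure.
paper_fix = Control("access policy update", "policy", "policy")
real_fix = Control("network segmentation", "policy", "technical")

print(compensates("logical", paper_fix))  # False: exists only as ink on paper
print(compensates("logical", real_fix))   # True: implemented lower down
```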


[Paris, La Défense, for pictorial reasons]

QED

IHRM

On the integration of IRM into regular business management just the way HR is (was?).
[Some future blog will be about the Three Lines of (NO!) Defense. Now, about a bit more practical stuff.]

It struck me that information security – lately expanded into information risk management as a (peer) part of operational risk management, itself part of enterprise risk management, sometimes fuzzied into ‘COSO ERM’ babble – still struggles to be understood as not a separate function that can operate apart from the rest of the business (‘their infosec corner to take care of their things’), but as an integral part of everyday management (and operations), just like e.g., HR.

Yes, HR is also still a separate function – for the parts that can be handled separately from the business as usual in other departments. Payrolls can be processed (almost) without knowledge of any primary business processes, or secondary processes for that matter. Apart, of course, from joiners/leavers etc., but that’s detail.
But HR is also very much integrated, the way it has always been. Optimising (sic; not maximising) the performance of the resources that are human (are they; are they considered such ..?) has, since the inception of the idea of organizations, always been with management. Through target setting, through performance evaluations, through facilitative management. Not through micromanagement, as you rightfully point out; that has no place in any organization.
All the core, direct HR tasks that are performed are performed directly by (‘line’) managers. The less separately recognised as such, the better. Just manage!

How is it, then, that IRM doesn’t take the same approach ..? The major part of simple information risk management (as is the major part of all risk management!) can and should be performed by those actually dealing with the information: employees and their management. How is it that managers generally understand that part (*) of their role consists of various HR chores, but information asset protection (and information asset performance optimisation..!) doesn’t, yet?
(*) Depending on how your organization works; when dealing with knowledge workers, the facilitative part of HR may form the core of managerial work altogether.

Yes, well, indeed managers may on average be insufficiently educated to be able to deal with information risk management within their normal duties. But ‘we’ should solve that. And almost no manager whatsoever was trained to be a manager in the first place! No, certainly also not the business school types. They learn a few bits and pieces of administration, which is something very different. The military (cadres) learn something (little, simple things, but apparently sufficient to work with many subordinates in life-threatening situations – don’t insult them by assuming your organization can even compare to that kind of managerial challenge). But in general: No. That’s why military cadre finds it difficult to settle back into management positions in civilian society: the level of incompetence (they have to work with) is staggering.

And our common managers may not have been provided with the appropriate methodologies and tools to do that. But ‘we’ should provide those. Work In Progress, but the distance to cover is enormous.

And here’s a picture for your delight:
[Madrid, perspectives: where you stand, what you look at]

So, by education and methodology/tool provision, we can indeed bring information risk management back into the main line of management.
But so much work to be done! And rest assured that for decades to come, IRM will have its place as a (staff) department. HR hasn’t gone away quite yet, has it ..?

Comments appreciated.

Interlude: A Mistake made Policy


How a mistake made it to governmental policy…
[Though the above Toronto skyline’s just for your viewing pleasure, unrelated to this blog]

There has been quite some discussion about a thing called the Plan-Do-Check-Act cycle. Rightly so: ever since Deming’s groundbreaking work on quality control WITHIN small, shop-floor-level work groups, the understanding of the practical trade of small-group (self!)management has flourished.
But alas! So many sorcerer’s apprentices have run around like lemmings. And have followed the ill-guided amongst them over the cliff’s edge. They have mixed up Deming’s quality improvement cycle with the generic process control cycle, later applied to administrative management ..!

The disastrous consequences we still have to work with. The demise of management as a craft, the attempts (failed by default from the start) to scientifise management, the blindness to the utter counterproductivity – all can be traced to this error of application, out of an error of understanding.

Know your history: The control cycle has its origin in (chemical plant) process control, or even in generic control as elaborated in applied cybernetic systems methodology. Inputs, (mathematical!) transformation function, outputs, and a (mathematical!) first derivative control (signal) function; feed-forwards, feed-backs, input- and output-based signals, multiple levels of these control cycles, it should all be familiar but isn’t, on a pervasive scale.
Which is a pity, because it leads to dumb, stupid design of control cycles and to the inclusion of Deming’s quality (improvement) cycle as the name-giver of the resulting management control efforts. Which in turn has led to the stupidest efforts to fit management control actions and controls into the Plan (feasible; most ‘control’-related work stays there, luckily, given the dumb and dumber practitioners around), the Do (awkward! managers don’t Do anything at all in Deming’s Do sense!), the Check (auditors’ delight but NOT what Deming intended), and the Act (not understood at all; in the mix-up it’ll be swept under the Plan carpet!) phases.
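
For contrast, here is a minimal sketch of the generic (cybernetic) control cycle the paragraph above refers to: an input, a transformation function, an output, and a feedback signal on the deviation from a setpoint. All names and numbers are illustrative assumptions of mine, not from the original.

```python
# Illustrative sketch of the generic (cybernetic) control cycle: input, a
# transformation function, output, and a feedback signal on the deviation
# from a setpoint. All names and numbers are assumptions for illustration;
# this is deliberately NOT Deming's PDCA.

def process(u: float) -> float:
    """The 'plant': a simple transformation from control input to output."""
    return 0.8 * u + 2.0

def run_feedback_loop(setpoint: float, steps: int = 20, gain: float = 0.5) -> float:
    u = 0.0                    # control input
    for _ in range(steps):
        y = process(u)         # measure the output
        error = setpoint - y   # output-based feedback signal
        u += gain * error      # corrective action on the input
    return process(u)

print(run_feedback_loop(10.0))  # the output converges towards the setpoint 10
```

The contrast is the point: this is a continuous, mathematical correction on a measured output, not a shop-floor quality improvement cycle – and that is exactly the distinction the mix-up obliterates.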

But so many wrongs don’t make a right.

Putting the two models together into this atrocious mix, leads to heaps of management babble and a destruction of sound management practices at the hands of culpable consultants (external or internal). The utter waste of money, the utter demise of anything actually productive!

And now the mistke [not intended but I’ll leave it there] reaches its peak: PDCA will be required by government directive as a design principle for (management) control! [In the Netherlands, always preaching against someone else’s sins]
What a failure of administration: To unknowingly admit so publicly one’s incapacity at the scale of an outright sackable offence (by the many that go along with this, too!).
Now, can we all please move forward with the consequences of this pervasive sackability ..?

The Infosec Stack (Part 1a; don’t attack yourself)

Some took my Part 1 as offensive. It may have been interpreted as such – by those that have a closed mind, out of fear of being found in the Emperor’s Clothes, or because of the transformation of that fear into some sort of invincibility syndrome.

But those that have their mind open enough could see the other side and sharpen their own thought and understanding, or see my piece(let) as a proper antithesis, i.e., not as a full 180° head-on collision threat but as another angle onto an improvement-worthy wicked problem.
Don’t think backward. Contribute!

And here’s a picture for your pleasure.

The InfoSec Stack (Part 1; house of cards)


[This is a very first part of an article / white paper under development, just to float the idea and get response …(?)]

When asked what parts, or areas, information security would have to cover, people often respond with something along the lines of ‘People, Process, Technology’ as a failed alliteration.

The Process and Technology parts are then elaborated to the point/gray area where they have become completely stifling to the business, with a panicked compliance craze born of a serious mental condition: the inability to accept that one cannot perfectly control everything. Which is a fact. And read, and re-read, Bruce Schneier’s Liars and Outliers over and over again till you get it.

Yes, for Technology, we do get that it’s an arms race against the most (more than us) skilled and sophisticated attackers. But somehow we don’t accept that, and still think we can do all security upfront, ex-ante, preventative. The budgets (grossly) wasted on this side are direly needed at the other side(s), detective and corrective security. Or rather, the money would be much, much, much better spent there. Si vis pacem, para bellum. We’ve only just started to learn how to do that properly.
Since for the most part, we’ve focused on Process. To death… Figuratively, almost literally. We’ve designed way too many top-down analytical procedures that result in infeasible requirements and lots of babble at the shop-floor level where the front line is. Don’t even start me on ‘three lines of defense’ that are a lot but not defense (i.e., they simply do not sit between any threat and any vulnerability!); don’t start me on ISO27kx compliance either. Oh, compliance, the killer of anything alive, the H-bomb of actual (sic) productivity, revenue growth and cost savings.

How did we get there ..? Simple, by not understanding that Process and Technology are just two parts of the formal structure of infosec. And as in the picture, we have almost forgotten that information security is a stacked thing.
Physical security sits at the bottom, at the base, often much underestimated in its importance.
Next comes technical security, ranging from sound architecture (not the usual goalless kind, but the one focused on lean and mean design, including e.g., privacy by design/default), all the way via OS security, middleware security, application security and communications security. The parts that you don’t ‘see’, I mean: the hardware, the software, the parameter settings.
Then, there’s logical security. The kind you know; it sounds very abstract, almost philosophical, but all we know about it is that it’s about password strength and the failures (sic) of RBAC and such.
One step higher up – and again failing by one order of magnitude more! – is organizational security with all its processes and procedures. How blithely stupid the rules are, and how completely and utterly they fail to contribute to security and fail to fix lower-level shortcomings, as they should.
At the top of the pyramid sits Policy, to ward off the evil forces of the babblefolk. Much to be said about this (level), which feeds the desires of those that have fled into narcissist self-inflation, fleeing forward from a lack of self-confidence and a fear of being called out for emptiness of understanding. Much will be said; elsewhere.

And, as in the picture: to understand it, one should understand that the lower the security controls, the harder they are; tougher, more resistant against attacks of any nature. The higher up, the less is contributed anyway. Babble, babble, that is what most of it is. Ever seen a policy statement line that actually protects against even just one Advanced Persistent Threat ..?
As is also in the picture, lower-level controls may very well cover for a control failure (lack of suitable control(s), or failure of control(s)) at the same or a higher level, but a higher-level control can NOT compensate for a failure (of presence or of quality) of lower-level controls. Yet this is what the great many in the field think is the common, all too easy solution ..! Compensate up ..! Which will fail ..! At least in a house of cards, the upper cards are similar in quality to the lower ones. But the lower ones carry the higher ones, and if a lower one tumbles, the higher ones come down. Would you suggest that if a lower-level card were missing, the higher cards in the house of cards would keep the whole thing standing …!?!? If so, you would really be eligible for a guided living program.
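
The house-of-cards point fits in a few lines of code. A hypothetical sketch (the layer names follow the stack described above; the failure logic is mine, purely for illustration): when a layer fails, everything stacked on top of it comes down, while a failure at the top leaves the lower layers standing.

```python
# Hypothetical house-of-cards sketch: when a layer fails, everything stacked
# on top of it comes down; a failure at the top leaves the lower layers
# standing. Layer names follow the stack described above; the failure logic
# is purely for illustration.

STACK = ["physical", "technical", "logical", "organizational", "policy"]

def still_standing(failed_layer: str) -> list[str]:
    """Layers that remain effective after one layer fails: everything below
    the failed layer survives; the failed layer and all above it fall."""
    return STACK[:STACK.index(failed_layer)]

print(still_standing("policy"))     # ['physical', 'technical', 'logical', 'organizational']
print(still_standing("technical"))  # ['physical'] - most of the stack comes down
```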

And the picture is far from complete. Remember that above, we started with People? Where are they now, then? Where are they in your information security posture? Ah, you have awareness campaigns. Right. Haha. Oh, thought you were joking.
The ‘joke’ of the thing is that People are everywhere in the picture. Not only within the triangle, but also, vastly more so, outside it. If you implement any tiny part of the triangle in a way that hinders co(sic)workers from achieving their objectives, they will obliterate your controls. Information security is Overhead!
And People are threats, yes indeed, but they’re also Physical, Technical, Logical (their logic, not yours!) and Organizational assets, and vulnerable at the same time. Vulnerable assets and controls in ‘one’, of a wide variety.
Well, you can understand that: as we have hardly dealt with People since the inception of information security as an IT thing – as an IT thing, it has grown within the formal triangle – we hardly understand a thing about how that people part works. We do in a sense; we do have the psychological sciences (admit it, working in another Kuhn/Lakatos paradigm but undeniably (partially…; ed.) science). But information security practitioners don’t understand humanity. Infosec practitioners are engineers, seeing only the next problem in their hands (hardly even one ahead) and trying to fix it as quickly and pervasively as possible, and then move on. The humanities side of life is beyond them. As far as we (all humanity) know, human failings are of all times, recognised throughout the centuries, and still unfixed. We still dream of being (more) civilised than our ancestors; wrongly so! But we have to deal with those sides of People in particular now, for information security.

I’ll just stop here. You can see that so much, much more can be written, and should in my opinion.
Do I state that one wouldn’t need all the formal controls? No. But I do mean that a lot of what we have today is rubbish. [Disclaimer: This is not a statement that my current employer is or is not better or worse than average, but they certainly hinted some catharsis and solution content of this column and white paper (to come).] Incomplete, inconsistent, failing wholesale as we speak. We need much better, meaner (and if we achieve that, leaner) infosec controls. And much more of them, outside of the traditional boundaries.
So there you have it; the inception of an idea that will be discussed at great length as you know I can, in a white paper to be published somewhere, sometime in the coming months. Please feel free to comment and add already. I’ll keep you posted!

Predictive ..?


Ah, to add to the previous column: The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t by Nate Silver seems interesting. Though I fear that the conclusion will again be: “It’s not hopeless [it almost is, ed.] but if you work really, really hard, you may succeed in finding nuggets — whether or not they’re worth all the effort [I think: hardly, ed.].”

Predictive Mediocrity

[Just a pretty picture of Valencia, as a Calatrava … follower but not groupie]

With all the talk about Predictive Analysis lately, it is time we separate, once again, the hype-humbug from the real(ity). Which is that the Predictive part is only partially true, and the Analysis part by definition is the contrary of predictive.

1. Analysis can only look at what’s here and now, or behind us. One can only analyse history. And, as Hegel pointed out, “Only at dusk does Minerva’s owl take flight”: only when the sun sets on some development can we start (sic) to assess its importance. Analysis is devoid of Meaning. It’s mechanical; it may abstract, but does not create Knowledge – Information at best, but usually just more meta-metadata. Again, interpretation, that error-prone and barely understood brain process, is needed to make analysis worthwhile. And the result looks back, not forward.

2. Predictive… If we know anything of the future, it is that it’s uncertainty defined. One can predict some trends to continue — but may be disappointed.
Anything near the precision and the precise predictions that Predictive Analysis is pictured to deliver, will be certain to not come true.
This, because the dragnet of the analysis can never be complete: theoretically, anything distantly approaching completeness would require a model of the universe, which would in turn have to include the model itself. And practically: analysts will have to work with the hammer they’re handed, and view any world as nails only.
Also, trends may continue, but ‘never’ (unless by extreme exception) linearly so, and non-linear modeling is still in its infancy (and may prove impossible for lack of suitable data). Hence, you’ll miss the mark by just extending lines (see the toy sketch after this list).
Oh, it’s not about the detail, you say? Why don’t you stick to qualitative predictions, then?

3. Anything that can be analysed today, is tomorrow’s mediocrity as it was/is already known. The spurious results may be interesting, but we know whether they’re spurious or early indicators only afterwards, too late.

4. “Sh.t happens!” Why can’t we accept that in the medium-sized world we live in, quantum jumps do happen as well, with similar levels of chance calculus? They may not be binary quantum events, but appear to be. And make life both dangerous and fun! No surprises is boring par excellence. And will fail.

5. Go ahead and hype the mundane. But don’t oversell. You’re not a magician, or the magician is a fraud …
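
As promised under item 2, a toy sketch of what happens when you just extend lines. The trend is a made-up logistic curve and all numbers are hypothetical; the line is fitted to the early history and then extrapolated.

```python
# Toy illustration for item 2 above (all numbers hypothetical): fit a straight
# line to the early history of a saturating (logistic) trend, extrapolate,
# and watch the prediction miss the mark.

import math

def logistic(t: float) -> float:
    """A made-up saturating trend: grows quickly at first, levels off near 100."""
    return 100.0 / (1.0 + math.exp(-0.5 * (t - 10)))

# 'History': observations for t = 0..9, before the trend visibly bends over.
history = [(t, logistic(t)) for t in range(10)]

# Ordinary least-squares fit of a straight line through that history.
n = len(history)
mean_t = sum(t for t, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = (sum((t - mean_t) * (y - mean_y) for t, y in history)
         / sum((t - mean_t) ** 2 for t, _ in history))
intercept = mean_y - slope * mean_t

t_future = 40
print(f"linear extrapolation at t={t_future}: {slope * t_future + intercept:.1f}")
print(f"actual trend value at t={t_future}:   {logistic(t_future):.1f}")
# The straight line keeps climbing well past 100; the real trend saturated long ago.
```

The line is a perfectly reasonable fit to the history; it is the extrapolation that fails, because the trend did not stay linear.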

Maverisk / Étoiles du Nord