In the cave, not hunting

60% of any group of people are conservative. They just want a job, any job, and want to do simple, predictable work. They want to stay in the cave, because it’s dangerous out there; you never know where and when a bear or sabre-tooth tiger may attack.

40% of people understand that one can be safe by staying in the cave, but one will starve. Food is not in the cave, it’s out there. So, for self-preservation (strategic risk management) one has to go out, well equipped and sober, to handle the environment (tactical risk management: listen and look carefully, and Be Prepared) and do some foraging (including operational risk management; don’t be stupid and taste rotten stuff, etc.).

Nowadays, staying in the cave will not make you safe. Bears may already hide in the back of the cave: no matter how still you sit, headcount cuts are coming and may hit you regardless of your conformity. Or your better-adapted colleague-caveman may prey on you for the little comfort that (performance-contribution-)skinny you may bring to hold out another day. If you still had flesh, it would be muscle and you would (could) be the one holding out. Either all in the cave die, or through cannibalism at least a leaner, meaner gathering of people (i.e., organization) may survive.
And the world outside changes faster than ever, so the cave entrance is besieged (or walled off!) by enemies taking your turf and starving or enslaving you.

[Toronto, but you knew that]

So, join the 40% and be even better at exploring, be better at being safe without a cave. Be better at risk management. Go out, dare; not running around stupidly but with due care. Enjoy the ever (faster) changing view!

The InfoSec Stack (Part 1; house of cards)


[This is a very first part of an article / white paper under development, just to float the idea and get response …(?)]

When asked what parts, or areas, information security would have to cover, people often respond with something along the lines of ‘People, Process, Technology’ as a failed alliteration.

The Process and Technology parts are then elaborated to the point/gray area where they completely stifle the business, with a panicked compliance craze born of a serious mental condition: the inability to accept that one cannot perfectly control everything. Which is a fact. Read, and re-read, Bruce Schneier’s Liars and Outliers over and over again till you get it.

Yes, for Technology, we do get that it’s an arms race against the most (more than us) skilled and sophisticated attackers. But somehow we don’t accept that, and still think we can do all security upfront, ex ante, preventive. The budgets (grossly) wasted on this side are direly needed at the other side(s), detective and corrective security. Or rather, the money would be much, much, much better spent there. Si vis pacem, para bellum. We’ve only just started to learn how to do that properly.
Since for the most part, we’ve focused on Process. To death… Figuratively, almost literally. We’ve designed way too many top-down analytical procedures that result in infeasible requirements and lots of babble at the shop-floor level where the front line is. Don’t even start me on the ‘three lines of defense’ that are a lot but not defense (i.e., they simply do not sit between any threat and any vulnerability!), and don’t start me on ISO27kx compliance either. Oh, compliance, the killer of anything alive, the H-bomb of actual (sic) productivity, revenue growth and cost savings.

How did we get there ..? Simple, by not understanding that Process and Technology are just two parts of the formal structure of infosec. And as in the picture, we have almost forgotten that information security is a stacked thing.
Physical security sits at the bottom, at the base, often much underestimated in its importance.
Next comes technical security, ranging from sound architecture (not the usual goalless kind, but the kind focused on lean and mean design, including e.g. privacy by design/default), all the way via OS security, middleware security, application security and communications security. The parts that you don’t ‘see’, I mean: the hardware, the software, the parameter settings.
Then, there’s logical security. The kind you know, that sounds very abstract, almost philosophical, but all we know about it is that it’s about password strength and the failures (sic) of RBAC and such.
One step higher up, and again failing by an order of magnitude more!, is organizational security with all its processes and procedures. How blithely stupid the rules are, and how completely and utterly they fail to contribute to security and fail to fix lower-level shortcomings, as they should.
At the top of the pyramid sits Policy, to ward off the evil forces of the babblefolk. Much is to be said about this (level), which feeds the desires of those who have fled into narcissistic self-inflation, fleeing forward from a lack of self-confidence and a fear of being called out for emptiness of understanding. Much will be said; elsewhere.

And, as in the picture: to understand it, one should understand that the lower the security controls sit, the harder they are; tougher, more resistant against attacks of any nature. The higher up, the less is contributed anyway. Babble, babble, that is what most of it is. Ever seen a policy statement line that actually protects against even just one Advanced Persistent Threat ..?
As is also in the picture, lower-level controls may very well cover for control failure (lack of suitable control(s), or failure of control(s)) at the same or a higher level, but a higher-level control can NOT compensate for a failure (of presence or of quality) of lower-level controls. Yet this is what a great many in the field think is the common, all too easy solution ..! Compensate up ..! Which will fail ..! At least in a house of cards, the upper cards are similar in quality to the lower ones. But the lower ones carry the higher ones, and if a lower one tumbles, the higher ones come down. Would you suggest that if a lower-level card went missing, the higher cards in the house of cards would keep the whole thing standing …!?!? If so, you would really be eligible for a guided living program.
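To make the house-of-cards point a bit more tangible, here’s a minimal toy sketch, assuming a simplistic model in which a layer’s effective strength can never exceed that of the layers beneath it. All layer names, scores and the capping rule are illustrative assumptions, not any standard.

```python
# Toy model of the infosec stack: each layer's effective protection is capped
# by the weakest layer underneath it. Higher layers can only lose strength,
# never add it back. Layer names and scores are illustrative assumptions only.

LAYERS = ["physical", "technical", "logical", "organizational", "policy"]

def effective_strength(nominal: dict) -> dict:
    """Cap each layer's nominal strength (0..1) by everything below it."""
    effective = {}
    floor = 1.0
    for layer in LAYERS:
        floor = min(floor, nominal.get(layer, 0.0))
        effective[layer] = floor
    return effective

# A weak physical layer drags every layer above it down, no matter how
# 'strong' the policy layer claims to be: compensating up does nothing.
house_of_cards = {"physical": 0.2, "technical": 0.9, "logical": 0.8,
                  "organizational": 0.9, "policy": 1.0}
print(effective_strength(house_of_cards))
```

In this toy run, every layer above the weak physical one ends up capped at 0.2; a perfect-looking Policy score changes nothing.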

And the picture is far from complete. Remember that above, we started with People? Where are they now, then? Where are they in your information security posture? Ah, you have awareness campaigns. Right. Haha. Oh, thought you were joking.
The ‘joke’ of the thing is that People are everywhere in the picture. Not only within the triangle, but also, vastly more so, outside it. If you implement any tiny part of the triangle in a way that hinders (co)workers in achieving their objectives, they will obliterate your controls. Information security is Overhead!
And People are threats, yes indeed, but they’re also Physical, Technical, Logical (their logic, not yours!) and Organizational assets, and vulnerable at the same time. Vulnerable assets and controls in ‘one’, of a wide variety.
Well, you can understand that, as we have hardly dealt with People since the inception of information security as an IT thing – since as an IT thing, it has grown within the formal triangle – we hardly understand a thing of how that people part works. We do in a sense; we do have the psychological sciences (admit it, working in another Kuhn/Lakatos paradigm but undeniably (partially…; ed.) science). But information security practitioners don’t understand humanity. Infosec practitioners are engineers, seeing only the next problem in their hands (hardly even one ahead), trying to fix it as quickly and pervasively as possible, and then moving on. The humanities side of life is beyond them. As far as we (all humanity) know, human failings are of all times, recognised throughout the centuries, and still unfixed. We still dream of being (more) civilised than our ancestors; wrongly so! But it is those sides of People in particular that we now have to deal with, for information security.

I’ll just stop here. You can see that so much, much more can be written, and should in my opinion.
Do I state that one wouldn’t need all the formal controls? No. But I do mean that a lot of what we have today is rubbish. [Disclaimer: This is not a statement that my current employer is or is not better or worse than average, but they certainly hinted at some of the catharsis and solution content of this column and white paper (to come).] Incomplete, inconsistent, failing wholesale as we speak. We need much better, meaner (and if we achieve that, leaner) infosec controls. And much more of them, outside of the traditional boundaries.
So there you have it; the inception of an idea that will be discussed at great length as you know I can, in a white paper to be published somewhere, sometime in the coming months. Please feel free to comment and add already. I’ll keep you posted!

Trust statistics

“Never trust any statistic that you haven’t forged yourself” as Winston Churchill put it.

With today’s tools, computing capabilities and speed, why wouldn’t accountants draft their own annual accounts statements for their clients ..? These could then be published alongside the clients’ own. The differences would be transparent to investors, and when (not if, I guess) the clients can explain the differences to their stakeholders, there’s no pain in anything.

When accountants complain that it would be impossible to re-do the annual accounts, they only proclaim their incompetence. If they truly understand what’s on the books, really understand the underlying processes in every detail required (and note, this would be a very, very hard requirement already!), they would have no problem at all recreating the sum total of it all. Or they end up with apparently differing interpretations of the processes, which they may explain to stakeholders.

So, either the accountants go do this, or they declare that they don’t know nearly well enough what they’re signing off on. There’s no in-between.

Predictive ..?


Ah, to add to the previous column: The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t by Nate Silver seems interesting. Though I fear that the conclusion will again be: “It’s not hopeless [it almost is, ed.] but if you work really, really hard, you may succeed in finding nuggets — whether or not they’re worth all the effort [I think: hardly, ed.].”

Predictive Mediocrity

[Just a pretty picture of Valencia, as a Calatrava … follower but not groupie]

With all the talk about Predictive Analysis lately, it is time we separate, once again, the hype-humbug from the real(ity). Which is that the Predictive part is only partially true, and the Analysis part is by definition the contrary of predictive.

1. Analysis can only look at what’s here and now, or behind us. One can only analyse history. And, as Hegel pointed out, “Only at dusk does Minerva’s owl take flight”: only when the sun sets on some development can we start (sic) to assess its importance. Analysis is devoid of Meaning. It’s mechanical; it may abstract but not create Knowledge, Information at best but usually just more meta-metadata. Again, interpretation, that error-prone and nearly completely un-understood brain process, is needed to make analysis worthwhile. And the result looks back, not forward.

2. Predictive… If we know anything of the future, it is that it is defined by uncertainty. One can predict some trends to continue — but may be disappointed.
Anything near the precision, and the precise predictions, that Predictive Analysis is pictured to deliver is certain to not come true.
This is because the dragnet of the analysis can never be complete: theoretically, since anything distantly approaching completeness would need a model of the universe which would then need to include the model itself; and practically, since analysts will have to work with the hammer they’re handed and view any world as nails only.
Also, trends may continue but ‘never’ (unless by extreme exception) linearly so, and non-linear modeling is still in its infancy (and may prove impossible for lack of suitable data). Hence, you’ll miss the mark by just extending lines (a small numerical sketch follows below this point).
Oh, it’s not about the detail, you say? Why don’t you stick to qualitative predictions, then?
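As a tiny, purely made-up numerical sketch of that ‘extending lines’ point: fit a straight line to the early history of a non-linear (here, exponential) series and extrapolate, and watch the forecast fall further behind at every step. All numbers are invented for illustration only.

```python
import numpy as np

# Made-up non-linear "trend": exponential growth observed for t = 0..9.
t_hist = np.arange(10)
y_hist = 100 * 1.3 ** t_hist              # the actual (non-linear) process

# Fit a straight line to the history and extend it five steps ahead.
slope, intercept = np.polyfit(t_hist, y_hist, 1)
t_future = np.arange(10, 15)
y_linear = slope * t_future + intercept   # 'prediction' by line extension
y_actual = 100 * 1.3 ** t_future          # what the process actually does

for t, lin, act in zip(t_future, y_linear, y_actual):
    print(f"t={t}: linear forecast {lin:8.0f}, actual {act:8.0f}")
```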

3. Anything that can be analysed today is tomorrow’s mediocrity, as it was/is already known. The spurious results may be interesting, but we only know whether they’re spurious or early indicators afterwards, too late.

4. “Sh.t happens!” Why can’t we accept that in the medium-sized world we live in, quantum jumps do happen as well, with similar levels of chance calculus? They may not be binary quantum events, but appear to be. And make life both dangerous and fun! No surprises is boring par excellence. And will fail.

5. Go ahead and hype the mundane. But don’t oversell. You’re not a magician, or the magician is a fraud …

The Size of Your … Data

It’s not the size of your data, it’s what you do with it.
Or so claim men that are insecure about theirs. [Disclaimer: I have big hands. ‘nuff said.]

There appears to be confusion about what Big Data may achieve. There are the Marketing anecdotes (sic) about spurious relations, the long-standing anecdotes of credit card companies’ fraud detection, and there’s talk of ‘smart data’ use in e.g. organizations’ process mining to establish internal control quality. One party bashes the others over ‘abuse’ of terms.

What happen? [for those that know their History of Memes]

1. We have more data available than ever before. We have had various generations of data analysis tools already, every new generation introducing a. a hugely increased capability to deal with the data set sizes of their age, b. ease of use through prettier interfaces, c. fewer requirements on ex ante correlation hypothesis definitions. Now, we seem to have come to a phase where we have all the data we might possibly want (not) with tools that allow any fantastic result to appear out of thin air (not).

2. Throwing humongous amounts of (usually, marketing- or generic socmed) data at an analysis tool may automatically deliver correlations, but these come in various sorts and sizes:

  • The usual suspects; loyal brand/product followers will probably be hard to get into lift-shift-retention mode. These, if (not when) valid, still aren’t really interesting because they would (should!) have been known already from ‘traditional’ research;
  • The false positives; spurious correlations, co-variance, etc., induced by data noise (a small sketch of this effect follows right after this list). Without human analysis, many wrongs can be assumed. All too often, correlation is taken (emotionally at least) to be close to or somewhat the same as causation; over and over again. How dumb can an analyst be, not to be (sufficiently) aware of their own psychological biases! Tons of them are around and impact the work, and the more one is convinced not to be (psychologically) biased, the more one will be and the worse the impact will be. Let alone that systemic biases can be found all too often;
  • The true positives. The hidden gems.
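As announced in the false-positives bullet, here is a minimal sketch of that effect, using purely synthetic random numbers; the variable counts and the ‘significance’ cut-off are arbitrary assumptions. An automated correlation search over many unrelated variables will reliably ‘find’ correlations by chance alone.

```python
import numpy as np

rng = np.random.default_rng(42)

# Purely random, unrelated "metrics": 50 observations of 200 variables,
# plus a random 'outcome' we pretend we want to explain.
n_obs, n_vars = 50, 200
data = rng.normal(size=(n_obs, n_vars))
target = rng.normal(size=n_obs)

# Automated correlation discovery: correlate every variable with the target.
corrs = np.array([np.corrcoef(data[:, i], target)[0, 1] for i in range(n_vars)])

strong = np.abs(corrs) > 0.3   # a cut-off an eager analyst might call 'signal'
print(f"'Significant' correlations found in pure noise: {strong.sum()} of {n_vars}")
print(f"Strongest spurious correlation: {np.abs(corrs).max():.2f}")
```

Run it and a handful of ‘significant’ relations show up in data that is noise by construction; that handful is exactly the false-positive crop described above.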

We don’t have any data on the amount of spurious results vis-à-vis useful results (the previous bullet) to know how well (effectively, efficiently) we do with both automated correlation discovery and human analysis, which would be tell-tale in the world of analyse-everything. But what would you expect from this overwhelmingly inductive approach?

3. Yes, until now, throughout history we seem to have done quite well with deductive approaches, from the pre-Socratics until the recent discovery of the Higgs boson… in all sciences, including the classic social/sociological scientists like the Greek and Roman authors (yes, the humanities are deductive sociology) and the by-definition deep-thinking philosophers.
The ‘scientists’ who relied on inductive approaches … we don’t even know their names (anymore) because their ‘theories’ were all refuted so completely. Yet, the above data bucket approach is no more than just pure and blind induction.

4. Ah, but then you say, ‘We aren’t that stupid, we do take care to select the right data, filter and massage it until we know it may deliver something useful.’ Well, thank you for dumbing down your data; out go the false but also the true positives ..! And the other results you should have had a long time ago already, via ‘traditional’ approaches. No need to call what you do now Big Data Analysis. Either take it wholesale, or leave it!

5. Taking it wholesale will take tons of human analysis and control (over the results!); the Big Data element will dwindle to negligible proportions if you do this right. Big Data will be just a small start to a number of process steps that, if done right, will lean towards refinement through deduction much more than the induction-only approach that Big Data is trumpeted to be. This can be seen in e.g. some TLA having collected the world’s communications and Internet data; there are so many dots and so many dots connected that the significant connected dots are missed time and time again — it appears infeasible to separate the false positives from the true positives, or we have a Sacrifice Coventry situation. So, repeated: no need to call all this Big Data.

6. And then there’s the ‘smart data’ approach of not even using too much data, but using what’s available because there aren’t yottabytes out there. I mean, even the databases of business transactions in the biggest global companies don’t hold what we’d call Big Data. But there’s enough (internal) transaction data to be able to establish, through automated analysis, how the data flows through the organization, which is then turned into ‘process flow apparent’ schemes. Handy, but what then …? And there’s no need at all to call this stuff Big Data, either.
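To give a (hedged) idea of what deriving such a ‘process flow apparent’ scheme from transaction data can look like, a minimal sketch with an invented event log; no particular process-mining tool or real data is implied. It simply counts which activity directly follows which, per case.

```python
from collections import Counter

# Invented event log: (case id, activity), ordered by time within each case.
event_log = [
    ("order-1", "create"), ("order-1", "approve"), ("order-1", "ship"),
    ("order-2", "create"), ("order-2", "ship"),               # approval skipped
    ("order-3", "create"), ("order-3", "approve"), ("order-3", "ship"),
]

# Group activities per case, then count directly-follows pairs.
cases = {}
for case, activity in event_log:
    cases.setdefault(case, []).append(activity)

directly_follows = Counter()
for trace in cases.values():
    directly_follows.update(zip(trace, trace[1:]))

for (a, b), count in directly_follows.items():
    print(f"{a} -> {b}: {count}x")
# The rare 'create -> ship' edge is the kind of deviation such schemes surface.
```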

So, we conclude that ‘Big Data’ is just a tool, and the world is still packed with fools. Can we flip the tools back again to easily test hypotheses? Then we may even allow some inductive automated correlation searches, to hint at possible hidden causations that may be refined and tested before they can be useful.
Or we’ll remain stuck in the ‘My Method is More What Big Data Is Than Yours’.

So, I can confidently say: Size does matter, and what you do with it.

Next blog will be about how ‘predictive’ ‘analysis’ isn’t on both counts.

Double blind

Just a question: Would anyone know some definitive source, or pointers, to discussions either formal or informal, on the logic behind double secrets i.e. situations where it is a secret that some secret exists ..?

Yeah, it’s relevant in particular now that some countries’ governments seem to have failed to keep that kind of double secret completely, but it should be dealt with more systematically, I think, also regarding regular business-to-business (and -to-consumer) interactions.

So, if you have some neat write-ups of formal logic systems approaches, I’d be grateful. TIA!
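For what it’s worth, my own rough framing of the question in standard epistemic-logic notation, with K_a read as ‘agent a knows’; this is just a sketch of what I mean, not a reference. A plain secret s that a keeps from b, versus the double secret where b doesn’t even know there is anything being kept from them:

```latex
% Plain secret: a knows s, b does not know s.
K_a\, s \;\wedge\; \neg K_b\, s

% Double secret: additionally, b does not know that there exists
% anything that a knows and b does not.
K_a\, s \;\wedge\; \neg K_b\, s \;\wedge\; \neg K_b\, \exists p\,\big(K_a\, p \wedge \neg K_b\, p\big)
```

Pointers to anything that treats that second conjunct more rigorously are exactly what I’m after.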

No Ethical Hacking, Please!

We still see quite a market for ‘ethical’ hacking out in the information security consulting world. However, if this type of activity should have a name, it would be wise for the name to be descriptive, right? Rather than deceiving, swindling… We certainly won’t do that, sir, no way.

We’d call it ‘ethical’ if the purpose of it all were to further the ethical goals of the ones doing it. Now take a look at who’s doing it. ‘Ethical’ hacking. And for what: Moneyyy! Hey indeed, it is the consultants and Big4 accountants that will only and exclusively do it for the money. You say No? Have you tried to talk even an hour off their bills because the hacking that they do (more on that, below) serves some ethical purpose that they are happy to work on for free ..? A great many would consider doing just about anything that pays, and not doing any of it otherwise, the direct opposite, the utmost perversion of ‘ethical’ behaviour. Yet, that’s where we are with ‘ethical’ hacking.

Now for the ‘hacking’ part. Most of that is non-existent again. It’s primarily penetration testing using off-the-shelf freeware tools. It can be done from any phablet while driving, or it’s so outdated that it should serve no purpose. OK, you got me there: even antiquated tools will find big holes in clients’ defenses that could and should have been fixed aeons ago, you know, decades of internet time (a couple of years in our time). And about that entering through a small hole: it’s still rather common to not go there, stay virgin and only do some port scanning.
So, [except for the few good men that do understand what they’re concocting] no hacking together of one’s own new baby tools takes place. Yes, hacking, as in state-of-the-art coding (programming, for those of you who have been hibernating the last decade) without the need for any bureaucrat’s architecture principles but with a deep understanding of languages’ strengths and pitfalls.
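To illustrate how little ‘hacking’ the port-scanning-only variety involves, a minimal sketch of a plain TCP connect scan; the target address and port list are placeholders, and of course this should only ever be pointed at systems you’re authorized to test.

```python
import socket

TARGET = "192.0.2.1"            # placeholder (TEST-NET address), purely illustrative
PORTS = [22, 80, 443, 3389]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake succeeds, i.e. the port answers.
        is_open = s.connect_ex((TARGET, port)) == 0
        print(f"port {port}: {'open' if is_open else 'closed/filtered'}")
```

That is about the level of wizardry being billed for; a dozen lines anyone could write, or one freeware tool invocation.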

So there we have it. Let loose some basic scanning tools, write up a fat report with some fancy letterhead and the usual suspects in findings; long live copy-paste, and bill ‘em for some ridiculous amount that goes straight into the coffers of some elderly gentlemen partners that don’t know how to use the Internet … except for, well, you know, searching for pictures.

Therefore, in search of a truthfully descriptive name, let’s either revert to ‘penetration testing’ (which most men wouldn’t feel comfortable with) or even just ‘port scanning’, or find some new designation. Mammon scanning, or so. But let’s not call it ‘ethical’ ‘hacking’ – two humongous wrongs don’t make a right.

Next up, maybe, a rephrased repost of @meneer’s #ditchcyber argument.

Maverisk / Étoiles du Nord