The enemy from below

I know, I’ve been guilty of it too. Thinking, tinkering, and musing about all sorts of abstract risk management schemes, how they’re mostly a giant mess, and how they could be improved. Here and there, even considering a middle-out improvement direction. But mostly, ignoring the very fact that in the end, information risk management hinges on a vastly complex technological infrastructure. Which is where the buck stops; threat-, vulnerability- and protection-wise.

A major (yes, I wrote major) part of that low-level (Is it? To whom? It’s very highly intellectual, too!) technological complexity lies in the trust infrastructure in there. Which hinges on two concepts: crypto and certificates. In fact, they’re one and the same, but you had already reacted to that.

For crypto, I’ll not write out too much here about the wrongs in the implementations. There are others doing that, maybe (sic) even better than I can.
For certificates, which hold the crypto keys, the matter is different. Currently, I know of only one club that’s actually on top of things, as they may be for you as well. Yes, even today, you may think the problem is minor. Since you don’t know…

Really. Get your act together … This is just one example of how deep, deep down in ‘the’ infrastructure, whether yours or ‘out there’, there’s much work to be done, and vastly too much complexity to get a really intelligent grip on. How will we manage ..?
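To make that a bit concrete: even a trivial sweep of which certificates your own endpoints actually present, and when they expire, is more than many shops ever do. A minimal sketch, using nothing beyond the Python standard library; the host list is of course a made-up placeholder:

```python
import socket
import ssl
import time

# Hypothetical inventory of endpoints; replace with your own list.
HOSTS = ["www.example.com", "mail.example.com"]

def peer_cert(host: str, port: int = 443) -> dict:
    """Fetch the certificate a TLS endpoint presents, validated against the default trust store."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

for host in HOSTS:
    cert = peer_cert(host)
    # cert["notAfter"] looks like "May 30 00:00:00 2025 GMT"
    days_left = int((ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) // 86400)
    issuer = dict(pair[0] for pair in cert["issuer"]).get("organizationName", "unknown")
    print(f"{host}: issuer={issuer}, expires in {days_left} days")
```

Nothing fancy; but if even this inventory doesn’t exist in your shop, the rest of the trust infrastructure discussion is moot.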

And, of course:
[Showboating tech, London]

Cryptostego

Just as I’d been discussing stego with colleagues last week, I missed this post by Bruce.
Be sure to read the comments, though. A couple of them are on stacking steganography on top of cryptography, which is what I would presume to work.
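For what it’s worth, the stacking those commenters describe boils down to: encrypt first, then hide the ciphertext, so that even a detected payload yields nothing readable. A toy sketch, assuming the pyca/cryptography package and a raw-bytes ‘cover’ as a stand-in for real image or audio samples:

```python
import os
from cryptography.fernet import Fernet  # pip install cryptography

def embed_lsb(cover: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & 0xFE) | bit
    return stego

def extract_lsb(stego: bytearray, length: int) -> bytes:
    """Recover `length` bytes hidden by embed_lsb."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# Crypto first, stego second.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"meet at the usual place")
cover = bytearray(os.urandom(8 * len(ciphertext)))   # stand-in for image/audio samples
stego = embed_lsb(cover, ciphertext)
recovered = Fernet(key).decrypt(extract_lsb(stego, len(ciphertext)))
assert recovered == b"meet at the usual place"
```

The point being: the crypto layer protects content, the stego layer protects the fact of communication; neither substitutes for the other.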

And, again, the question: what would you know of actual use ‘out there’? Is it common or rare, and what are the characteristics of its users ..? Is it the next big thing after (?) APTs …?

Oh, here it is; the pic you expected:
[Would ‘Riga’ be a hint that there’s more to the picture …!?]

Column on Open Source code Bystanders

Well, the column is out there now… It will be published in the July 2014 ISSA Journal, but here’s a preview, in Dutch.

And, of course:
[S’bourg, or Brussels – I always mix them up and end up in the wrong place for a meeting]

Predictions 2014, the InfoSec edition


[DUO Groningen (a couple of years ago), where a leak led to many a student’s funds being defrauded. Looks original, but is just chasing outer effects]

So, some of you have seen my Predictions 2014 including the update or two, and other posts on developments in the Information world at large.
But what you desperately needed, what you awaited before even starting on the Christmas decorations (for those in the West), and held out on any shopping for food or beverage for, I know, I know [Heh, that’s the opposite of Unk Unks ;-], is my completely and utterly unpretentious set of predictions for the Information Security arena of 2014.

Well, here we go then:

Advanced Persistent Threats will blossom like weeds (not wiet!) in 2014, APTs being the ultimate blended threats to confidentiality of information. There may be cases of government-on-government espionage that highlight the ‘modern’, squared or cubed, variant of traditional intrusion & spying (information exfiltration) work. But moreover, there will be much-publicised incidents where business-on-business espionage, possibly helped by shady government agencies, is outed as more than a theoretical possibility. Of course, you all know that this is nothing new, but the general public will demand an answer from some Board members and State secretaries, here and there (literally), of victims and perpetrators alike (denial becoming less plausible, if plausible at all). These answers will be the definition of lame. The infosec industry will rush to develop (maybe not yet fully utilise) the market; contra, but hush-hush, also pro…

Certificate vulnerabilities will be shown to be a factor of import. What yesterday’s unprotected printer WiFi was, stolen or manipulated certificates will be today. Bot victimisation of clients will come less from trojan planting and more from hijacking certificates (‘Certjacking’? You read it here first! [Update to add: well, in this spelling …]) to do … a sort of low-level ID / trust theft. This will not be explained nearly well enough to the general public to get them concerned, but with tech-savvy CIOs and IT managers, this will cause a stir. And major sweeps through their own infra. But not much by way of better cert management for the future.
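For those tech-savvy CIOs: one defensive reflex against such cert hijacking that does exist today is pinning, i.e., comparing what an endpoint presents right now against a fingerprint recorded out of band. A minimal sketch (the host name and the pinned value below are made-up placeholders):

```python
import hashlib
import socket
import ssl

# Hypothetical pin store: host -> SHA-256 fingerprint of the DER-encoded certificate,
# recorded out of band (e.g., at deployment time).
PINS = {
    "intranet.example.com": "d2e1...placeholder...9af0",
}

def fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 over the DER certificate the endpoint presents right now."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

for host, pinned in PINS.items():
    seen = fingerprint(host)
    status = "OK" if seen == pinned else "MISMATCH - investigate"
    print(f"{host}: {status}")
```

A mismatch doesn’t prove Certjacking, of course; planned renewals trigger it too, which is exactly why this only works on top of decent cert management.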

Crypto-failures. Cryptography, as far as it is actually implemented at all (some ‘implementations’ may embarrassingly be found out to exist on paper only!), will be shown to fail in at least two ways: by procedural errors that lead to (very publicly visible) non-availability of information, e.g., when asymmetry isn’t implemented well due to a lack of understanding among the bureaucratic procedure-writing committees mumbling and fumbling about; and by public demonstration of bad technical implementation, the coding being so shoddy that the strength of the crypto drops to n-day crackability (with 0 < n < 2, I guess).
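For contrast, the asymmetric part itself isn’t the hard bit once the procedure is understood. The textbook hybrid pattern, sketched here with the pyca/cryptography package (all data and key handling purely illustrative), wraps a fresh symmetric key under the recipient’s public key:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Recipient key pair; in practice the public key would come from ... a certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

# Hybrid encryption: symmetric key for the bulk data, RSA-OAEP to wrap that key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"quarterly figures, draft")
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Recipient side: unwrap the session key, then decrypt the bulk data.
unwrapped = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert Fernet(unwrapped).decrypt(ciphertext) == b"quarterly figures, draft"
```

The failures predicted above are almost never in this core pattern; they’re in the key distribution, the procedures and the shoddy glue code around it.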

Quantum computing, on the plus side for crypto, will see its first practical proofs of concept, where the PoCs are used to protect the most secret of some government’s information. But with that information having to be used by the most bumbling-about officials, the overall end-user-to-end-user effectiveness will be close to zero, helping any and all attackers (rogues, organised or not; state(-sponsored) organisations, etc.) to learn about tentative attack vectors, maybe class-hack ones among them.

Methodological innovation in information security. As already discussed earlier, and here, and here, and with OSSTMM, and before that in quite some other posts as well. Now Docco, too, has joined the fray, advising SMEs from the accountancy side … Which shows this prediction for 2014 is already hatching.
On this item, we will see much improvement. E.g., combining the OSSTMM framework with SABSA. Combining the rebooted CIA with a fresh install of the (alternative to the) ‘15.5 risk’ management approach via the OSSTMM framework. And so on. Interesting! Want to contribute?

I’ll leave these here for you to follow up. When (not if) I’m right on any of the above: Yeah, see? Told you so! And if (not when) not all five are square-on: Hey, they’re only predictions! Don’t shoot the messenger who only conveyed the message of the Great Engineer Behind The Scene.
Though I would like to receive a bottle of good (I mean, really good) red Bourgogne for each prediction that turns out to be somewhat right; on the good side of 50-50. Any takers?
The opposite I unfortunately cannot offer, as it would have way too many takers…

Rebooting the CIA


[Nope]

The CIA of information security doesn’t cut it anymore. We have relied on Confidentiality-Integrity-Availability for so long that even ‘managers’ in the most stale of government departments now by and large know of the concepts. Which may tell you that, very probably by that fact alone, the system of thought has calcified into ineffectiveness.
At least we should reconsider where we are, and where we’d want to go.

Let’s tackle Confidentiality first. And maybe foremost. Because it’s here that we see the clearest reflection of our deepened understanding that the value merits of information are not in line with the treatment(s) that the information (data!) gets. Which is a cumbersome way of saying that the value estimation of data, and the control over that data, are a mess.
Add in the lack of suitable (!) tools. User/Group/World, for the few among you who would still know what that was about, is clearly too simple (already by being too one-dimensional), but any mesh of access as can (sic) be implemented today makes a mess of access rights. Access blocks? Access based on (legitimate, but how to verify?) value(s), points in time, intended and actually enabled use, non-loss copyability, etc.?
But what is the solution ..? Continue reading “Rebooting the CIA”
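To make the ‘mesh of access’ point above a bit more tangible, purely as a strawman (every name and class here is made up): an access decision that weighs purpose, value class and time window, rather than one-dimensional User/Group/World bits:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class AccessRule:
    """One entry in a (toy) attribute-based policy: who, for what purpose,
    up to which value class, within which hours."""
    subject: str
    purpose: str
    max_value_class: int          # e.g. 1 = public .. 4 = crown jewels
    hours: tuple[time, time]      # allowed window, local time

@dataclass
class Request:
    subject: str
    purpose: str
    value_class: int
    when: datetime

def allowed(rule: AccessRule, req: Request) -> bool:
    start, end = rule.hours
    return (req.subject == rule.subject
            and req.purpose == rule.purpose
            and req.value_class <= rule.max_value_class
            and start <= req.when.time() <= end)

rule = AccessRule("j.doe", "quarterly-reporting", 3, (time(8, 0), time(18, 0)))
req = Request("j.doe", "quarterly-reporting", 2, datetime(2014, 1, 7, 14, 30))
print(allowed(rule, req))   # True; flip the purpose or the value class and it is not
```

Which of course only shifts the problem to who sets the value classes and purposes, and how they’re verified; exactly the mess noted above.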

Inter faces


[Educational institute x 3, campus Free University, Amsterdam]

When sleeping on problems, one often comes up with solutions that are both real and so all-encompassing that they’ll need much elaboration before being applicable in a nimble way.
This one was/is on information security, again. Recall the ‘discussions’ I posted some days ago about (industrial) process control versus administrative control? Well, I’ve some more elements for a grand new scheme now.

It struck me that the operators in the (chemical) plant control room are the ones with the dashboards. Not necessarily their managers. Nor their managers’ managers, etc. What if, instead of some machine equipment, we plug humans into the whole ..? And let them interact like the übercomplex ‘machines’ that they are, doing the (administrative / service) thing that they (want to?) do. All the way to the point where we have no equipment, just humans (with tools, by the way, but those would be under ‘complete’ control of the ones using them so they’re just extensions of them). One ‘manager’ could then control quite a lot; have a huge span of control…

If, big if, if only the manager would understand the overall ‘process’ well enough, that is, to be able to work with the dashboard then provided. Just Continuous Monitoring as a job, not much more (one would have 2nd and/or 3rd ‘lines of control’ (ugh, the expression) to fix deviations, do planned maintenance, etc.). Probably not. But one can still dream; organizations would be flat without chaos breaking out.

And if you’d say it would be impossible altogether, have a look at your SOC/NOC room where techies monitor IT network traffic and systems’ health. They even have some room to correct..! And they are aware of, and monitor, the appropriateness of what flows over the lines, taking professional pride in catching un(machine)detected patterns of irregularity that may be break-in/break-out attempts. And they leave the content for what it is; that’s for the experts, the users themselves, to understand and monitor, if only they would.
Why wouldn’t other ‘managers’ copy the idea to their own desk? No, they don’t, yet. They get Reports that they hardly read, because someone else has thought for them in determining what should be in there. And reports aren’t continuous. Walking around is, but it would (rightly) be viewed as micromanagement and a bit too much, given the non-continuous nature of what modern knowledge workers do. So, we’ll have to define some gauges that are monitored semi-continuously.

Now, a picture again to refresh:

[Westpunt, Curaçao]

But with the measurements not influencing the primary production ..! To let knowledge workers do their thing, in mutual cooperation without interference by some busybody thinking (s)he knows better for no reason whatsoever.
Through which we note that the use of dashboards should not, must not, start with Boards or similar utterly superfluous governance levels. Governance is for governments. As it is ‘implemented’ in larger organizations, it’s not for nothing that it looks like kindergarten kids playing Important. The use of dashboards should start from the bottom, and should include quite rigorous (but not merely by-the-numbers) pruning of both middle-level ‘managers’ (keep the good ones, i.e., not the ones that are only expert in hanging on! otherwise you spell death), and all sorts of groupie second- and third-line staff.

Which will only work if you haven’t yet driven out all the knowledge workers by dumbing down their work into ‘processes’ and ‘procedures’ that are bereft of any productive (sic) rationale. And if you haven’t driven out all the actual managers and are left with the deadwood that is expert only in toeing the line or rather, sitting dead still in their place.

Now have a look back also at how you do information security. Wouldn’t the little bit of tuning you may need to do be focused best on the very shop-floor-level elements that go into the ‘industrial’ process as inputs? You would only have to information-secure anything that would not be controlled ‘automatically’, innate in the humans that handle the information (and data; we’ll discuss that later). Leave infosec mostly with them, with support concentrated at an infosec department maybe, and have managers monitor it only to the extent necessary.

And, by extension, the same would go for risk management altogether. Wouldn’t this deliver a much more lean and mean org structure than the top-down approaches that lead to such massive counterproductive overhead as we see today? With the very first-line staff, who need all the freedom feasible to be productive (the managers and the rest of the overhead aren’t; very, very maybe only indirectly, and certainly not worth their current income levels!), then not having to prove their innocence… See Menno Lanting’s blog for details…
Org structures have become more diamond- than pyramid-shaped; which is plain wrong for effectiveness and efficiency…

So let’s cut the cr.p and manage the interfaces, vertically, and horizontally, noting the faces part; human. An art maybe, but better than the current nonsense…

The P (part 1, too)

Now then, for the grand Part 1 of the People of Information Security. À la the triangle I posted on earlier (see somewhere below), where the People aspect floats around the triangle like a dense cloud, obscuring your clear view and posing a threat of foggy unclarity.
To jot them down: there are many aspects of People that we have to deal with, but let’s start with some random, unstructured angles:
[Generalife, Granada]

People are a Threat. Externally, they are the actors, not random Acts of nature. No, they, they! the people, the masses (even in Ortega y Gasset style), they exist only to attack us!
How nice if you believe such, how nice to all those that have a sense of community and either don’t care to attack you even if it could be to their (risk-weighted) profit, or even help you, tacitly or visibly, explicitly. How hard do you work to alienate all those, too? Notwithstanding that there are indeed some out there that want to attack you: Have you ever stepped into their shoes to figure out why ..? If (very big if) you really stepped into their mindset, wouldn’t you do the same because by their reasoning, you ‘deserved’ it?

People are Vulnerabilities, on the inside. They are frail, failing their duty-above-all to follow your procedures; excuse me the word, but F.ck their contributions to organizational success; your procedures are sacred, of course?

People are Means in information security. That’s actually what they are in the People, Process, Technology trio. A Vulnerability, and a Threat by the way, if they deviate from how you wanted to deploy the resource, but they can also be very powerful ‘allies’ as a resource to deploy in information security, information safety [nice idea, to defuse the old phrase], information asset protection. People are the thing (sic) that might follow Process using Technology to achieve protection. People are the ones to task with safeguarding your information assets. They may not be perfect, but for a long time to come they will be the actual actors and re-actors.

People are psychological constructs acting in sociological environments. I cannot write this often enough: Read and re-read Bruce Schneier’s Liars and Outliers, to understand how these People may operate in your artificial society called organization (oh the wishful thinking in that word…).

People, then, will have to be included in security design in the prominent role they have, not as an afterthought. They will have to take center stage indeed, as alpha and omega of the information security organization.
We’ll have to find ways to really start with People and see how their work may be structured, and how their work may be supported (not the other way around!!) by Process and Technology. Process as a handy little tool, not as the raison d’être – an uphill struggle it will indeed be, but also a sign of the times already! Totalitarian bureaucrats beware; the Age of Compliance is waning. See a future post. Technology as a handy little tool (in big plural), not as the first to arrive, with a bit of Process and very maybe even People bolted on here and there.
But we haven’t explored such a design direction at all, yet! We have no clue, no methodology, no vocabulary, to describe such a ‘design’ …

That’s where you come in; through your comments I propose to crowdsource such a methodology. Be part of it!

The InfoSec stack (Part 1b; implement or not to be)

Some have questioned why I put the Compensate part pointing upward on the right side, instead of downward, as is usually assumed.
Well, this may be obvious to those in the know, but: compensating for control weakness at a higher level simply does not work..!

This, because of some very basic principles:
1. Any ‘control’ at some level will have to be implemented at at least one lower level, or it does not exist at all except as some ink on paper (OK, ‘ink’ on ‘paper’).
2. ‘Compensating’ for a deficient control at any level by adding something at a higher level will, per principle 1, still lead to an implementation at the level of the deficiency or lower, or it will not be implemented at all.
3. The lower the implementation level, the stronger; the higher, the weaker. ‘Compensating’ at a higher level therefore requires more controls there to be about as strong, and hence more implementations at the same or lower levels; otherwise the same strength cannot be achieved.
4. A ‘compensating’ control at a higher level either doesn’t fit the design at that level, or it would have been there already; the deficiency is not ‘compensated’ by merely pointing at its rationale. Adding to the design shows that the design was deficient, or is over-complete now – either way, the resulting implementation will be flawed by design.
5. Occam-like efficiency thus requires implementation of compensating controls at the same or lower levels.


[Paris, La Défense, for pictorial reasons]

QED

Maverisk / Étoiles du Nord