The enemy from below

I know, I’ve been guilty of it too. Thinking, tinkering, and musing about all sorts of abstract risk management schemes, how they’re mostly a giant mess, and how they could be improved. Here and there, even considering a middle-out direction of improvement. But mostly, ignoring the very fact that in the end, information risk management hinges on the vastly complex technological infrastructure. Which is where the buck stops; threat-, vulnerability- and protection-wise.

A major (yes, I wrote major) part of that low-level (Is it? To whom? It’s very highly intellectual, too!) technological complexity is in the trust infrastructure in there. Which hinges on two concepts: crypto and certificates. In fact, they’re one and the same, but you’ll already have reacted to that.

For crypto, I’ll not write too much here about the wrongs in the implementations. There are others doing that, maybe (sic) even better than I can.
For certificates, which hold the crypto keys, the matter is different. Currently, I know of only one club that’s actually on top of things – as they may be for you as well. Yes, even today, you may think the problem is minor. Since you don’t know…
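To make that concrete: even a bare-bones certificate inventory with expiry checks would put you ahead of most. A minimal sketch in Python, standard library only; the hostnames and dates below are made up, and in practice you’d populate the inventory by scanning your own estate rather than typing it in:

```python
import ssl
import time

# Hypothetical inventory: hostname -> notAfter string, in the format that
# ssl.getpeercert() returns it. In reality you'd build this by scanning.
inventory = {
    "example.internal": "Jun  1 12:00:00 2034 GMT",
    "legacy.internal":  "Jan  1 00:00:00 2015 GMT",
}

def days_left(not_after, now=None):
    """Days until a certificate's notAfter timestamp; negative if expired."""
    expiry = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expiry - now) / 86400.0

def expiring(inventory, within_days=30, now=None):
    """Hosts whose certificates expire within `within_days` (or already did)."""
    return sorted(host for host, not_after in inventory.items()
                  if days_left(not_after, now) < within_days)
```

Nothing fancy; the point is merely that knowing *what* you have, and *when* it expires, is already more than most shops can answer.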

Really. Get your act together … This is just one example of how deep, deep down in ‘the’ infrastructure, whether yours or ‘out there’, there’s much work to be done; vastly too much complexity to get a really intelligent grip on. How will we manage ..?

And, of course:
002_2 (13)
[Showboating tech, London]

(Ahead of time, because we can)

This will be published in the July ISSA Journal. Just put it down here, already, to be able to link to it. ;-]
And, first, a picture:
DSCN3152
[Toronno, ON]

After which (Dutch version linked here):

You have the Bystander Bug

One of the major pluses of open source software is that anyone, even you, can check the source code; so, by that logic, the chance that a somewhat hidden bug will be found in a heartbeat rises to about one when enough people look at the source – now that they can.
But we were recently surprised by just such a bug, with global implications. And sure enough, it turned out that actually no one keeps tabs on which open source software is used where, in what, and by whom.

So all the global software behemoths turned out to rely on pieces of open source software – and that software, maintained by literally a handful of volunteers on a budget of less than a couple of seconds of the major software vendors’ CEOs’ pay, actually had not been scrutinized to the level one would require even on a risk basis. Certainly not given the widespread use, which makes the risk to society grow high. Did we tacitly expect all the software vendors of the world to have inspected the open source code more carefully before it was deployed in the global infrastructure ..? How would one propose to organize that within those big, huge for-profit companies? And what when (not if) the global infrastructure isn’t ‘compiled’ into one whole but built from so many somewhat-black boxes? Virtualization and ‘cloud’ abstract this picture even further. Increasing the risks…

But more worryingly, this also means that ‘we all’ suffer from the Bystander Effect. Someone’s in the water, unable to get out, and we all stand by without helping out because our psychology suggests we follow the herd. Yes, there are the stories of the Ones that beat this and jump in to the rescue – but there are also stories where such heroes don’t turn up. And, apparently, in the open source software world, there are too few volunteers, on budgets far less than a shoestring, who jump in and do the hard work of detailed code inspections. Which means there’s also a great number of us, potentially about all, who look the other way, have made ourselves unaware, and just want to do our 9-to-5 job anywhere in the industry. In that way, we’re suffering from the bystander effect, aren’t we ..?

And, even worse, so far we seem to have escaped the worst results of this in, e.g., voting machines. Here, how close was the call where everyone just accepted the machine programming and expected that, because of its open source nature (if …), “someone will have looked at it, right …!?”. Though of course, on this subject, some zealots (?) did indeed do some code checking in some cases, and the troubles with secrecy of votes overshadowed the potential (!) tallying biases programmed in, knowingly or not. But still… when ‘machines’ are ever more relied upon, moving from such simple things as voting machines to a combination of human-independent big data analysis/action automata with software-defined-everything and the Internet of Things, where’s the scrutiny?

Will a new drive for increased security turn out to be too little, too narrowly focused, and lasting too short a time, as many if not all after-the-fact corrections have been? If we leave it to ‘others’ again, with their own long-term interests probably closer to their hearts than our even longer-term societal interests, we still suffer from the bystander effect and we may be guilty by inaction.

But then again, how do we get all this stuff organized …? Your suggestions, please.

[Edited to add I: The above is the original text; improved and toned down a bit when published officially in the ISSA Journal]
[Edited to add II: This link to an article on the infra itself]

[Edited to add III: The final version in PDF, here.]

One more on how the @nbacc accountants’ ‘lab’ will work in NL

Why am I somewhat convinced that this article is what the results of the accountants’ ‘lab’ proposed in the Netherlands will look like? Even though I like the idea! Just not the direction the proposed content of research is taking. Is (this) a lab for deep, methodologically cast-in-concrete but utterly boring science of the history-writing kind, with drab results that don’t apply in the chaos of the (business) world out there, or is (this) a lab about doing experiments, exploring the unexplored, with results scored on qualitative scales like Innovation and Beauty …?

With a picture because you held out so long reading the above:
DSCN4777
[Yup, the axis going South]

Accountants’ discussion: Start and end

When @nbacc takes on @tuacc on May 28, this post may be of some help, and the below may still turn out to be the result…
Imago accountants
[Parool, Saturday 24 May 2014]

Because let’s be honest: a lab for deep professional technique will consistently (if) run into: “According to theory, nothing here can work, and we know exactly why. In practice, everything works, but nobody knows how. We mix theory and practice.” And also: “Don’t let those who say something is impossible stand in the way of those who are already doing it.” Good luck …

[And then I’m told again that I’m ‘too’ cynical. Cynicism is a last attempt to achieve progress – where most of those involved (?) remain absent in apathy, mentally or physically. Apathy: the attitude of those who have lost all hope. No, thát’s something to be happy about…]

Cryptostego

Just as I’d been discussing stego with colleagues last week, I missed this post of Bruce’s.
Be sure to read the comments, though. There are a couple on stacking steganography on top of cryptography, which is what I’d presume would work.
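The stacking order matters: encrypt first, so the payload looks like noise, then hide the ciphertext in the cover. A toy sketch in Python, standard library only; the SHA-256 counter-mode ‘cipher’ is a stand-in for a real one (don’t use it for anything serious), and the bytearray stands in for image pixel data:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Toy keystream from SHA-256 in counter mode -- a stand-in only,
    # NOT a vetted cipher; use a real AEAD cipher in practice.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; symmetric, so the same call decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

def embed_lsb(cover: bytearray, payload: bytes) -> bytearray:
    # Hide each payload bit in the least significant bit of a cover byte.
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for payload"
    out = bytearray(cover)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit
    return out

def extract_lsb(stego: bytearray, nbytes: int) -> bytes:
    # Read the LSBs back out and reassemble the payload bytes.
    bits = [b & 1 for b in stego[:nbytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[j*8:(j+1)*8]))
                 for j in range(nbytes))
```

Encrypt, embed, extract, decrypt: the round trip recovers the message, while the cover bytes change by at most one in the low bit each – which is exactly why ciphertext (statistically noise-like) is the right thing to hide.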

And, again the question: what would you know of actual use ‘out there’; is it common, rare, what are the characteristics of its users ..? Is it the next big thing after (?) APTs …?

Oh, here it is; the pic you expected:
DSCN2526
[Would ‘Riga’ be a hint that there’s more to the picture …!?]

Column on Open Source code Bystanders

Well, the column is out there now… It will be published in the July 2014 ISSA Journal, but here’s a preview, in Dutch

And, of course:
DSCN6225
[S’bourg, or Brussels – always mix them up and end up in the wrong place for a meeting]

Rebooting the CIA


[Nope]

The CIA of information security doesn’t cut it anymore. We have relied on Confidentiality-Integrity-Availability for so long that even ‘managers’ in the most stale of government departments now by and large know of the concepts. Which may tell you that, very probably by that fact alone, the system of thought has calcified into ineffectiveness.
At least we should reconsider where we are, and where we’d want to go.

Let’s tackle Confidentiality first. And maybe foremost. Because it’s here that we see the clearest reflection of our deepened understanding of the value merits of information not being in line with the treatment(s) that the information (data!) gets. Which is a cumbersome way to say that the value estimation of data, and the control over that data, is a mess.
Add in the lack of suitable (!) tools. User/Group/World, for the few among you who would still know what that was about, is clearly too simple (already by being too one-dimensional), but any mesh of access rights as can (sic) be implemented today makes a mess of access control. Access blocks? Access based on (legitimate – how to verify?) value(s), points in time, intended and actually enabled use, non-loss copyability, etc.?
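To illustrate what ‘beyond User/Group/World’ might look like: a hypothetical attribute-style check in Python, where a grant carries not just a subject and an action but also a claimed purpose and a validity window. The field names and the whole model are illustrative only, not any real product’s scheme:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Grant:
    subject: str           # who
    action: str            # e.g. "read", "copy"
    purpose: str           # the intended use the subject claims
    not_before: datetime   # start of validity window
    not_after: datetime    # end of validity window

@dataclass
class Resource:
    name: str
    grants: list = field(default_factory=list)

    def allows(self, subject, action, purpose, at):
        # Access requires an exact match on subject, action and purpose,
        # AND the request time falling inside the grant's window.
        return any(g.subject == subject and g.action == action
                   and g.purpose == purpose
                   and g.not_before <= at <= g.not_after
                   for g in self.grants)
```

Multi-dimensional already at four dimensions – and that’s before verifying that the claimed purpose is legitimate, which is the hard part the paragraph above hints at.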
But what is the solution ..? Continue reading “Rebooting the CIA”

Inter faces


[Educational institute x 3, campus Free University, Amsterdam]

When sleeping on problems, one often comes up with solutions that are both real and so all-encompassing that they’ll need much elaboration before being applicable in any nimble way.
This one was/is on information security, again. Recall the ‘discussions’ I posted some days ago about (industrial) process control versus administrative control? Well, I’ve some more elements for a grand new scheme now.

It struck me that the operators in the (chemical) plant control room are the ones with the dashboards. Not necessarily their managers. Nor their managers’ managers, etc. What if, instead of some machine equipment, we plug humans into the whole ..? And let them interact like the übercomplex ‘machines’ that they are, doing the (administrative / service) thing that they (want to?) do. All the way to the point where we have no equipment, just humans (with tools, by the way, but those would be under ‘complete’ control of the ones using them, so are just extensions of them). One ‘manager’ could then control quite a lot; have a huge span of control…

If, big if, the manager would only understand the overall ‘process’ well enough, that is, to be able to work with the dashboard then provided. Just Continuous Monitoring as a job, not much more (one would have 2nd and/or 3rd ‘lines of control’ (ugh, the expression) to fix deviations, do planned maintenance, etc.). Probably not. But one can still dream; organizations would be flat without chaos breaking out.

And if you’d say it would be impossible altogether, have a look at your SOC/NOC room, where techies monitor IT network traffic and systems’ health. They even have some room to correct..! And they are aware of, and monitor, the appropriateness of what flows over the lines, taking professional pride in catching un(machine)detected patterns of irregularity that might be break-in/break-out attempts. And they leave the content for what it is; that’s for the experts, the users themselves, to understand and monitor – if only they would.
Why wouldn’t other ‘managers’ copy the idea to their own desk? No, they don’t, yet. They get Reports that they hardly read, because someone else has done their thinking for them in determining what should be in there. And reports aren’t continuous. Walking around is, but would (rightly) be viewed as micromanagement, and a bit too much given the non-continuous nature of what modern knowledge workers do. So, we’ll have to define some gauges that are monitored semi-continuously.
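Such a gauge needn’t be fancy. A minimal sketch, assuming nothing more than a sliding-window average with a tolerance band; the names, thresholds and window size are made up for illustration:

```python
from collections import deque

class Gauge:
    """Semi-continuous gauge: keeps a sliding window of samples and
    flags when the window average drifts out of a tolerance band."""

    def __init__(self, low, high, window=10):
        self.low, self.high = low, high
        self.samples = deque(maxlen=window)  # old samples fall off the back

    def sample(self, value):
        self.samples.append(value)

    @property
    def level(self):
        # Average over the current window; None until the first sample.
        return sum(self.samples) / len(self.samples) if self.samples else None

    def status(self):
        if self.level is None:
            return "no data"
        if self.level < self.low:
            return "low"
        if self.level > self.high:
            return "high"
        return "ok"
```

The point being: a row of these on a desk-level dashboard, fed from whatever the team itself considers its vital signs, is continuous in a way a monthly Report can never be.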

Now, a picture again to refresh:

[Westpunt, Curaçao]

But with the measurements not influencing the primary production ..! To let knowledge workers do their thing, in mutual cooperation, without interference by some busybody thinking (s)he knows better for no reason whatsoever.
Through which we note that the use of dashboards should not, must not, start with ‘Board’s or similar utterly superfluous governance levels. Governance is for governments. As it is ‘implemented’ in larger organizations, it looks like kindergarten kids playing Important – and not for nothing. The use of dashboards should start from the bottom, and should include quite rigorous (but not merely by-the-numbers) pruning of both middle-level ‘managers’ (keep the good ones, i.e., not the ones that are only expert in hanging on! otherwise you spell death), and all sorts of groupie secondary and third-line staff.

Which will only work if you haven’t yet driven out all the knowledge workers by dumbing down their work into ‘processes’ and ‘procedures’ that are bereft of any productive (sic) rationale. And if you haven’t driven out all the actual managers, leaving only the deadwood that is expert merely in toeing the line or, rather, in sitting dead still in its place.

Now have a look back at how you do information security. Wouldn’t the little bit of tuning you may need to do be focused best on the very shop-floor elements that go into the ‘industrial’ process as inputs? You would only have to information-secure anything that would not be controlled ‘automatically’, innate in the humans that handle the information (and data; we’ll discuss that later). Leave infosec mostly with them, with support concentrated in an infosec department maybe, and have managers monitor it only to the extent necessary.

And, by extension, the same would go for risk management altogether. Wouldn’t this deliver a much leaner and meaner org structure than the top-down approaches that lead to such massive counterproductive overhead as we see today? With the very first-line staff, who would need all the freedom feasible to be productive (the managers and the rest of the overhead aren’t; very, very maybe only indirectly, but certainly not worth their current income levels!), then not having to prove their innocence… See Menno Lanting’s blog for details…
Org structures have become more diamond- than pyramid-shaped, which is plain wrong for effectiveness and efficiency…

So let’s cut the cr.p and manage the interfaces, vertically and horizontally, noting the ‘faces’ part; human. An art maybe, but better than the current nonsense…

Maverisk / Étoiles du Nord