When it comes to Risk, Appetite is Tolerance

Previously, like many others, I believed that Risk Appetite had to be the starting point of discussion for anything Risk within organisations. The appetite follows from discussions on Strategy: the choice of direction, and the subsequent steps that need to be taken to achieve strategic objectives, i.e., where one sees the organisation ending up in the future. Very clearly elucidated here. Backtracking, one will find the risks associated with these (possibly multiple) directions and steps, in qualitative terms, as NO valid data exists. Logically necessarily so: these risks concern the future and hence are determined by all information in the universe, which cannot be captured in any model, since the model would then have to be part of itself, incurring circularities ad infinitum. And already, the organisation’s actions will impact the context and vice versa, in (for the same reason) as yet unpredictable ways.
And then… This risk appetite, automatically equated with the risk tolerance set by the Board for risks incurred bottom-up through the mundane actions of all the underlings (i.e., including ‘managers’; see yesterday’s post), suddenly has to be expressed in quantitative terms… [Yes, bypassing tolerance-as-organisational-resilience-capacity.]
As for all that goes around in organisations: first the 99.9% that is Operational / Operations Risk, then some 10% of industry-specific risks (e.g., market and credit risk for the financial industry), not measured but guesstimated, hitherto outstandingly by those with the least clue and experience [otherwise, they would have been much better employed in the first line of business themselves… The picture changes favourably (!) where we see some organisations shift to first-line do-it-yourself risk management… finally!], as to what the chance and impact figures would be. As if those were the only two quantities to be estimated per ‘event’… As if any data from anywhere would be sufficiently reliable benchmarking material. If you believe that nevertheless, you should be locked up in a treatment facility… Yes, sometimes it’s taken to be this moronic… No need to flame bigger here, as that was already done here.
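To illustrate the chance-and-impact point with a minimal sketch (all figures below are made up for illustration; nothing here comes from any actual loss data): a toy Monte Carlo run in which the classic single point estimate and the simulated mean loss roughly agree, while the tail, the part an organisation would actually have to be able to bear, is an order of magnitude worse. That tail is exactly what the two-number guesstimate hides.

```python
import random

random.seed(42)

# Hypothetical point estimate: "10% chance of a 1M loss" -> expected loss 100k.
point_estimate = 0.10 * 1_000_000

def simulate_year():
    """One simulated year: the 'chance' guess, with a heavy-tailed 'impact'."""
    if random.random() < 0.10:                    # the guessed chance
        return random.lognormvariate(12.8, 1.5)   # heavy-tailed impact, mean ~1.1M
    return 0.0

runs = [simulate_year() for _ in range(100_000)]
mean_loss = sum(runs) / len(runs)
worst_1pct = sorted(runs)[int(0.99 * len(runs))]  # 99th percentile annual loss

print(f"point estimate:  {point_estimate:,.0f}")
print(f"simulated mean:  {mean_loss:,.0f}")
print(f"99th percentile: {worst_1pct:,.0f}")
```

The mean agrees with the point estimate almost by construction; the 99th percentile does not, and that is the figure tolerance-as-bearing-capacity would have to be compared against.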

But wait, where was I? Oh yes: the bypassing of tolerance defined as what the organisation could bear. The bare fact being that no one can establish a reliable figure for that. What the Board can and wants to bear, then… Considering that the Board would have to be all-in, i.e., not only with all of their bonuses ever earned under clawback threat, but also all of their earned income, salaries and personal wealth included; if any Board member would not want to risk all they ever had and have: bugger off, this is what you signed up for. Considering also that strategic decisions wager the existence of the company on choosing right, and thereby wager the well-being and wealth of all employees (however unable to bear loss, by the mere fact of never having had the ability to create some reserves), the previous consideration isn’t exaggerated. You wager others’ very existence, you wager your own ‘first’.

Summa summarum:
Risk Appetite is what the Board lets happen as Risk Tolerated Already.

Plus:
20160529_142237
[And away goes your grand hallway down the drain; [non-related] Haarzuilens, Utrecht]

The legacy of TDoS

So, we have the first little probes of TDoS attacks (DoS-by-IoT). ‘Refrigereddon’.
As if that wasn’t predictable, very much predictable, and predicted.
[Edited to add: And analysed correctly, as here.]

Predicted it was. What now? Because if we don’t change course, we’ll achieve ever worse infra. Yes, security can be baked into new products, but that will make them somewhat more expensive, so they will not swarm the market; and it cannot be relied upon for backward compatibility across all the chains already out there, plus there are tons of legacy equipment in the field already (see: Healthcare, and: Utilities). Even when introducing new, fully securable stuff, we’re heading into a future where the Legacy issue will keep growing for a long time, and get much worse than it already is, before (necessarily) huge pressure brings the problem down.

So… What to do ..? Well, at least get the fundamentals right, which so far we haven’t. Like this, and this, and this, and here, plus here (after the intermission), and there.

Would anyone have an idea how to get this right, starting today, and all-in all-out..?

Plus:
20150323_213334
[IRL art will Always trump online stuff… (?); at home]

All Your Data Are Belong To Us

Or, in the form of a question: when
a. One has to notify the authorities of any (possible!) data leak, per law, in Europe and soon maybe also in the USofA, and
b. Even BIOSes aren’t secure anymore, with insecurity baked in from the word Go onwards,
shouldn’t all organisations declare all of their infrastructure, and hence all their data, possibly compromised ..?

Just asking.

[Edited to add this. Also relevant; this one deeper (?)]

And:
20141101_145950
[Calm, not private; Museumplein Amsterdam]

New Normal Hacking

Errm, anyone still surprised about (not) new news on data being stolen, ransomware striking, or democracy perverted, anywhere, all the time ..?

I got a bit worried, and wondered whether others feel the same, about the current ‘Mehh’ reaction of everyone in the loop to even political parties [now openly] and voting machines getting cracked and data stolen. Combine that with, at last, at very long last, the acknowledgement that voting machines are hackable and, against all sane arguments, not tamper-resistant, and it leads to the vulnerability, the class-broken-ness, of fundamental human values.

And still, there’s hardly more than Mehhh.

Would anyone have a reason not to worry …?

Ah:

Oh well, blue pills everywhere …? Plus:
20150109_135649
[Sorry to say lads and lassies of the Royal Academy of Arts, but the Gemeentemuseum did beat you, on this one]
[Edited to add: No, this post was written before the NIST October 7 ‘news’ came out that (‘end’?) users are tired of hack-warnings (security fatigue), if that were a thing. Which is also not quite what I meant above, which is worse…]

Are sw bugs taxing your resilience ..?

There would be a solution when we’d find a way to tax software makers for their product faults.

Because caveat emptor can work only if, unlike in softwareland, one can duly (!) examine the product before purchase; otherwise, and anyway, culpability for hidden flaws remains with the seller/licensor.

Which is impossible with shrink-wrapped stuff, and the ‘license’ claim is ridiculous; moreover, the EULA is inconsistent, hence null and void: either the product is used under license, in which case product quality liability remains with the producer/licensor, or the licensee is liable for damages the use of the product might cause, but then ownership invariably lies with the purchaser.

The software maker can’t have their cake and eat it; that would run against basic legal principles. And the claim that one is always free to not use the product and choose another (or none), a Hobson’s Choice that brings about so many legal ramifications that even $AAPL’s pockets would never suffice, would invariably lead to oligopoly/cartel charges …!

Or, since this may easily be solved when taken as a societal problem, just like environmental issues such as CO2 pollution (we all need electricity): why not tax software makers for their ‘pollution’ of the IS environment with bugs ..? (And prohibit the use of ‘greenhouse gases’ like SQL injection weaknesses?)
Like, written after this post but before its release: this (Dutch) news that casual carelessness is a headache for government(s)… A bit like driving rules with no enforcement, maybe ..?
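What such a ‘pollution levy’ could look like, as the barest of sketches: a charge per disclosed vulnerability, weighted by severity (a CVSS-like score), analogous to a per-tonne CO2 charge. Every name, identifier and rate below is made up; this is an illustration of the idea, not a proposal for actual figures.

```python
# Hypothetical base rate: currency units per severity point. An assumption.
BASE_RATE = 10_000

def bug_tax(disclosed_vulns):
    """Levy for a period, given a list of (vuln_id, severity_score) tuples."""
    return sum(BASE_RATE * score for _, score in disclosed_vulns)

# Fabricated disclosures for one quarter; IDs are placeholders, not real CVEs.
vulns_this_quarter = [
    ("CVE-XXXX-0001", 9.8),   # e.g. an SQL injection: a 'greenhouse gas'
    ("CVE-XXXX-0002", 5.3),
    ("CVE-XXXX-0003", 7.5),
]

print(f"quarterly levy: {bug_tax(vulns_this_quarter):,.0f}")
```

The design point, as with emissions pricing, is simply to make the externality show up on the polluter’s own books rather than on everyone else’s.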

I’m not one for fighting the real windmills… hence:
dsc_0099
[The outards of the inn(ard)s of courts; Bridget’s London obviously]

Contra Bruce, for once

For once, Bruce is not at the right end. Maybe not at the opposite end, but still.
As per this here blog post of his: a repeat of one of his, and others’, threads.

The argument: we make things, like security, too difficult for users, and hence (?) we shouldn’t try to change them towards secure behaviour.
The contra: ‘Guns kill people’, or was it that the men (mostly) firing the guns kill people? And the many toddlers shooting their next of kin since, being at the approximate maturity of the Original gun pwner, they have no clue.

The Contra, too, and much more to the point when it comes to ‘information’ ‘security’: we should make cars run at a maximum of 5Mph… since ‘users’ are waaaay too stupid to drive carefully.
Just don’t mention that ‘security’ is a quality, not an absolute pass-or-fail thing, and that ‘information’ could not be more vague. [Except ‘cyber’; that’s so vacated of any meaning that it’s a black hole.] And don’t mention that we still let cars be used by any moron that once, possibly literally decades before ‘chips’ were invented, passed some formal test; the American idea of the test coming very, dangerously, close to… was (sic) it the Belgian? system where one could pick up one’s driver’s license at the post office. Able, and allowed, to buy cars that drive not just 5 but 250Mph, on busy roads, without protection against using socmed mid-traffic… One thing could be to introduce Finnish-style, income-scaled fines for unsafe behaviour (if caught, not when, as per the next paragraph [think that through…]), and/or huge fines for the producers of bad equipment (hw/sw), comparable to fines on car makers, or outright laws to require airbags to be built in, etc.

And then, if we’d design ‘secure’ systems, e.g., the Apple way, we’d end up with even worse Shallows sheeple, with so much less clue than before… And all in the hands of… even in ultra-liberal countries one would suspect either Big Corp or Big Gov’t, both options being Big Brother, literally, in an atrocious Dystopia of humanity.

So, you want safe systems? You get the loss of humanity before actual safety.

[Yes I get the Humans Are The Cause Of Much Infosec Failure thing (including Human Flexibility Can (still!) Solve More Than Machines Can, Against System (!) Malfunction), but also I am completely in favour of both the Humans Must Through Tech Be Completely Shielded From Being Able To Do Anything Wrong and Humans Should Retain All Freedom To Act Responsibly solutions.]

Pick your stand. And:

[Use G Translate if you have to, from Dutch. Typifying the driver, probably, if only for picking the brand/car…; London]

Teh business, does it exist ..?

On purpose, teh. Plus a spoiler: No.

Though this is a tell-tale sign that your infosec program, of whatever kind, will #fail, wholesale.
’cause If you can’t specify all stakeholders, at the various levels of detail required, beyond sweeping them up under the ‘the business’ nomen, Then you might as well call it ‘teh’ business, as you are vague to the point of irrelevance, which is how ‘the business’ will regard you; and since that’s where 99.9% of your security sits (including the budget holders…), fugeddabout effectiveness.
Endif. No Else.

So, stop using ‘the business’ as a stopgap designation for your lack of understanding of the infosec problems you claimed you could tackle; by it, you demonstrate to know no thing about the swamp of root causes behind the problems you said you’d go solve.
You n00b.

Oh well…:
dscn1150
[Some specific business; Madrid]

Data Classinocation

I was revisiting this ‘old’ idea of mine of drafting some form of impact-based criteria for data sensitivity when, alongside a couple of fundamental logical errors in some of the most formally adopted (incl. legal) standards and laws, I suddenly realised:

In these times of easily provable, easy de-anonymisation of even the most protectively (homomorphically) encrypted data, multiplied by the ease of de-anonymisation through data correlation of even the most innocent data points, all data points/elements, even the most innocent, must (not should) be classified at the highest sensitivity level. So why classify data at all ..!?
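The correlation point in its most minimal form: two datasets, neither of which pairs names with diagnoses, re-identify a person the moment they are joined on a handful of ‘innocent’ quasi-identifiers. All records below are fabricated for illustration.

```python
# 'Anonymised' dataset: no names, only quasi-identifiers plus the sensitive bit.
medical = [
    {"zip": "1234AB", "dob": "1970-01-01", "sex": "F", "diagnosis": "flu"},
    {"zip": "5678CD", "dob": "1980-05-05", "sex": "M", "diagnosis": "x"},
]

# Harmless-looking public register: names, no sensitive data.
public_register = [
    {"name": "A. Jansen", "zip": "1234AB", "dob": "1970-01-01", "sex": "F"},
]

def link(medical_rows, register_rows):
    """Join the two datasets on quasi-identifiers: the classic linkage attack."""
    keys = ("zip", "dob", "sex")
    matches = []
    for m in medical_rows:
        for p in register_rows:
            if all(m[k] == p[k] for k in keys):
                matches.append({"name": p["name"], "diagnosis": m["diagnosis"]})
    return matches

print(link(medical, public_register))
```

Neither dataset is sensitive on its own under a naive classification; the join is. Which is the whole point: the sensitivity sits in the correlation, not in the elements.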

This may not be a popular point, but that doesn’t make it less true.
In a similar vein, in the European context, where one is only to process data in the first place if (big if) there is no alternative, and one may process for the Original intent and purpose only,

To prevent data from unauthorised disclosure, internally or externally: without a tight need-to-know/need-to-use IAM implementation, one already does too little; with one, enough.

That’s right; ‘internal use only’ is waaay too sloppy, hence illegal: it breaks the legal requirement for due (sic) protection, and if the use of the data is left possible ‘by negligence’ (which changes nothing here), the European privacy directive (and its currently active precursors) does not allow you to even have the data. This may be a stretch, but it is still understandable and valid once you take the effort to think it through a bit.
Maybe also not too popular.
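The difference between ‘internal use only’ and tight need-to-use, sketched at its smallest: deny by default, with access granted only per explicit (user, dataset, purpose) tuple. All names below are hypothetical; this is the shape of the rule, not anyone’s actual IAM.

```python
# Explicit, purpose-bound grants: the only way in. An 'internal use only'
# label corresponds to no tuple at all, so it matches nothing here.
GRANTS = {
    ("alice", "payroll", "salary-run"),
}

def may_process(user, dataset, purpose):
    """Deny by default: no exact grant tuple, no access."""
    return (user, dataset, purpose) in GRANTS

assert may_process("alice", "payroll", "salary-run")
assert not may_process("alice", "payroll", "curiosity")  # same data, wrong purpose
assert not may_process("bob", "payroll", "salary-run")   # no grant at all
```

Note that the purpose sits inside the key: the same user on the same data is refused the moment the purpose differs, which is exactly the Original-intent-only constraint above.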

Needless to say, both points will not be understood in the least by all the ‘privacy officer’ types who have rote-learned the laws and regulations but have no experience of, or clue about, how to actually use those in practice; who just wave legal ‘arguments’ (quod non) around as if their (song and) dance were the end purpose of the organisation, but cannot answer even the simplest questions regarding the allowability of some data/processing with anything that logically or linguistically approaches clarity. [Note the ‘or’ is a logical one, not the xor that the too-simpletons (incl. ‘privacy officers’) sometimes read into it without knowing the distinction exists.]

OK. So far, no good. Plus:
dscn0990
[Not a fortress, nor a real maze once you see the structure; Valencia]

Waves of cyberfud

Not just because #ditchcyber is real, but because only now the first of the absolute laggards (i.e., gov’t officials) begin to make waves about access to private data, displaying an apparent (sic) complete lack of understanding of the fundamentals of a free society. The issue of blanket access to any communications, for whatever purpose, has been settled; so shut up, for eternity or however much longer it takes ‘you’ to get it or to die, whichever comes first. My guess is the latter.

Politics being the only field of work where no education is required; all the cyber-blah being the second, then, apparently ..? And:

dscn1128
[He would have annihilated the little people that clamour for ‘backdoors’, etc., et al.; DC]

Comedy crashers

No capers, and frankly no comedy either, when some of the most respected in the field are concerned about pervasive probing of whole countries in one go. As here.

Probably, the same is being pulled off on smaller countries as well; the infra doesn’t distinguish, but the protection budgets are probably much smaller, so a proof of concept might be interesting there. Though this may trigger better protection in the larger country/countries; if done ‘right’, the attack(s) may be class-break kinds of things, not so easily protected against in the first place.
And for now, the smaller countries being probed will have even smaller budgets and capabilities to even detect the probing altogether / in the first place. Interesting…

But maybe budgets are better spent on all the other actual risks out there, like: ..?
dsc_0789
[Suddenly (of course !!) turned up at the Joinville château; Haut-Marne]

Maverisk / Étoiles du Nord