Tragic users

Isn’t it a tragedy that those who would most need full but fully inconspicuous, unnoticeable security on socmed et al., are the ones who care the least?

This, both in careful scouring of legalese and practical settings, tools, and what have you, and qua effort to keep messaging (email dies hard, doesn’t it ..? Or doesn’t it die, for very valid reasons ..?) secure and data private ..?
On the other hand / end, not all ‘professionals’ practice what they preach to the hilt… And may do too little.
Flip side of “There exists no 100% security”: if you do only a little less, the huge costs aren’t worth it, whereas if you do quite a bit less, you’re much more efficient. Hence, even reasoning from the other side, maximum security will leave gaping holes you (sic) will get caught in.

So, all are in an inverse Catch-22 of sorts… [there should be a name for that; suggestions?]

And:
Photo11[The one who checked the water temp wasn’t the one to go swimming…; Cyprus]

From bike design to security design

You recall my posts from a couple of days ago (various), and here, and have studied the underlying Dutch Granny Bike Theory (as here), while not being put off by the lack (?) of design when taking a concrete view here.
You may also recall discussions, forever returning for as long as security (control) design has existed, even when not (yet) a separate subject, that users’ Desire Paths (exepelainifyed here) would inevitably have to be catered for, or one would find continual resistance until failure — with opposition from the Yes But Users Should Be Made Aware Of The Sensitivity Of Their Dealings With Commensurate (Linearly Appropriate) Security Hindrance side; things are hard for a reason, and one should make things as simple as possible but not simpler. [Yeah, I know that’s a reformulation of Ockham’s Razor for simpletons outside of science, having dropped the scientific precision of O and its application to science where it’s valid — and the second part is often lost by, and on, the most simpleton of all short of politicians, who are in a league of their own.]

I feel there may be a world, a.k.a. a whole field of science, to be developed (sic) regarding this. Or at least, let’s drop the pretension of simplicity of cost/benefit calculations that are a long way on the very, very wrong side of but not simpler.
Anyone have pointers to some applicable science in this field?

Oh, and:
DSCN3655[Applicable to security design: “You understand it when you get it” © Johan Cruyff; Toronto]

4Q for quality assurance

To go beyond the usual, downtrodden ‘quality in assurance’ epitome of dullness, herewith something worth considering.
Which is about the assessment of controls, to establish their quality (‘qualifications’) on four successive characteristics [taking some liberties, and applying interpretation and stretching]:

  • Design. The usual suspect here. About how the control, or rather the set of them, should be able to function as a self-righting ship. Point being, that you should+ (must?) evaluate the proposed / implemented set of controls to see whether self-righting mechanisms have been built in, with hopefully graceful degradation when not implemented (or maintained) correctly and fully — which should be visible in the design, or else. Or, you’re relying on a pipe dream.
  • Installation. Similar to implementation-the-old-way, having the CD in hand and loading / mounting it onto or into a ‘system’.
  • Operational. Specifies the conditions within which the control(s) is expected to operate, the procedural stuff ‘around’ the control.
  • Performance. Both in terms of defining the measuring sticks, and the actual metrics on performance attached to the control(s). Here, the elements of (to be established) sufficiency of monitoring and maintenance also come ’round the corner.

Note: where there’s ‘control(s)’, I consider it obvious, going without saying (hence me here now writing it instead of that), that all of the discussed applies to singleton controls as well as to sets of controls grouped towards achieving some (level of) control objective. All too often, the very hierarchy of controls is overlooked, or at best misconstrued to refer to organisational / procedural / technical sorts of divisions, whereas my view here is towards a hierarchy that is completely ad hoc, or so.
Note: I have taken some liberty in all of this. The Original piece centered around hardware / software, hence the Installation part so explicitly. But, on the whole, things shouldn’t be different for any type of control — or would they? In which case you miss the point.

And, the above shouldn’t be done just at risk assessment time — here meaning the moment one establishes the efficacy and effectiveness of current controls, to get from gross to net, from inherent to residual risks, across all one can identify in the audit universe / risk universe, at various levels of detail. On the contrary, auditors in particular should, at the head of any audit, do the above evaluation within the scope of the audit, and establish the four qualities. Indeed focusing on Maturity, Competence, and Testing to establish that — though maybe Competence (not only the competence of the administrator carrying out the control but, far more importantly, the competence of the control to keep the risk in check) is something just that bit more crucial in the Design phase, with Maturity slightly outweighing the others in Installation and Operational, and Testing of course focusing on the Operational and Performance sides of things.
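The phase-dependent emphasis just sketched could even be toyed with in a few lines of code — a minimal sketch only, where every name and weight is my own illustrative assumption, not any standard’s:

```python
# Hypothetical sketch: score a control (set) on each of the four qualities,
# weighing Maturity, Competence, and Testing differently per quality, as
# argued above. All weights are illustrative assumptions, not a standard.

# Emphasis per quality: Competence weighs more in Design, Maturity in
# Installation/Operational, Testing in Operational/Performance.
EMPHASIS = {
    "Design":       {"maturity": 0.3, "competence": 0.5, "testing": 0.2},
    "Installation": {"maturity": 0.5, "competence": 0.3, "testing": 0.2},
    "Operational":  {"maturity": 0.4, "competence": 0.2, "testing": 0.4},
    "Performance":  {"maturity": 0.2, "competence": 0.3, "testing": 0.5},
}

def quality_score(quality: str, maturity: float, competence: float, testing: float) -> float:
    """Weighted score (0..1) of one quality, from its three assessment angles."""
    w = EMPHASIS[quality]
    return w["maturity"] * maturity + w["competence"] * competence + w["testing"] * testing

# Example: a control set strong on competence, weak on testing — Design
# still scores decently, Performance would not.
score = quality_score("Design", maturity=0.6, competence=0.8, testing=0.3)
print(round(score, 2))  # 0.64
```

Nothing more than a toy, of course — but it makes explicit that the four qualities are assessed separately, with different evidence carrying different weight per phase.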

Intermission: The Dutch have the SIVA method for criteria design — which may have some bearing on the structure of controls along the above.

Now, after possibly having gotten into a jumble of elements above, a closing remark would be: Wouldn’t it be possible to build better, more focused and stakeholder-aligned, assurance standards of the ISAE3402 kind ..? Where Type I and II mix up the above but clients may need only … well, hopefully, only the full picture.
But the Dutch (them again) can at once improve their hazy, inconsistent interpretation of Design, Existence, and Effectiveness of control(s).
With Design often — mistaken, very much yes, but still — meaning whether there’s some design / overall structure of the control set, some top-down detailing structure and a bit of consistency, but with the self-righting part being left to the overall blunder-application of PDCA throughout…;
Existence being the actual control having been written out, or more rarely whether the control is found in place when the auditor comes ’round;
Effectiveness… — hard to believe but still almost always clenched-teeth confirmed — being ‘repeatedly established to Exist’, e.g., at surprise revisits. Complaints that Effectiveness is utterly determined by Design fall on stone-deaf ears and the overshouting of mortal impostor-syndrome fears.

Back to the subject: Can four separate opinions be generated for the above four qualities ..? Would some stakeholder benefit, and in what way? Should an audit be halted when, at some stage of the four, the audit opinion is less than very Satisfactory — i.e., when things go downhill when moving from ideals and plans to nitty-gritty practice — or should the scope of the audit be adapted, narrowed down on the fly, so the end opinion of In Control applies only to the subset of scope where such an opinion is justified?
But a lot needs to be figured out still. E.g., suppose (really? the following is hard fact on oh so many occasions) change management is so-so, or leaky at best; would it be useful to still look at systems integrity?
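The halt-or-narrow question could be made precise in a few lines — again a purely hypothetical sketch (names, threshold, and all are my assumptions, not any audit standard): walk the four qualities in order, and either halt at the first unsatisfactory stage, or drop that stage from scope and carry on, so the final In Control opinion covers only what held up.

```python
# Illustrative sketch (not any audit standard): sequential gating of the
# four quality stages of an audit opinion.

SATISFACTORY = 0.8  # assumed threshold for a 'very Satisfactory' stage opinion

def staged_opinion(stage_scores: dict, halt_on_fail: bool = True):
    """Return (covered_stages, overall_in_control) for ordered stage scores."""
    covered = []
    for stage, score in stage_scores.items():
        if score < SATISFACTORY:
            if halt_on_fail:
                return covered, False  # audit halted; no In Control opinion
            continue  # scope narrowed on the fly: stage dropped from the opinion
        covered.append(stage)
    # In Control only if every stage in the original scope held up
    return covered, len(covered) == len(stage_scores)

scores = {"Design": 0.9, "Installation": 0.7, "Operational": 0.85, "Performance": 0.6}
print(staged_opinion(scores))                      # (['Design'], False) — halted
print(staged_opinion(scores, halt_on_fail=False))  # (['Design', 'Operational'], False)
```

Either way, note, the overall In Control verdict fails; the two regimes differ only in how much of the scope you can still say something positive about.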

Help, much? Plus:
DSCN4069[An optimal mix of complexity with clarity; Valencia]

One extra for Two AI tipping point(er)s

To add to the post below, of a month ago.
This here piece, on how AI software is now writing (better) AI software. Still in its infancy, but if you recall the Singularity praise (terroristic future), you see how fast this can get out of hand. Do you?

The old bits:

You may have misread that title.

It’s about tips, being pointers, two to papers that give such a nice overview of the year ahead in AI-and-ethics (mostly) research. Like, this and this. With, of course, subsequent linkage to much other useful stuff that you’d almost miss even if you paid attention.

Beware of quite a number of follow-up posts, that will delve into all sorts of issues listed in the papers, and will quiz or puzzle you depending on whether you did pay attention or not. OK, you’ll be puzzled, right?

And:
DSCN1441[A self-learned AI question could be: “Why?” — but to be honest, and demonstrating some issues, that’s completely beside the point; Toronto]

You Don’t Call The Shots

I.E., You Are Not In Control !

This, as a consequence of the ‘In Control’ definition. Where the controlling and ‘steering’ (what Steering Committees are about, if properly functioning … ) are the same.
But as explained previously, such steering doesn’t happen (is impossible) already in a Mediocristan world’s complexity, let alone the mix-in (to say the least) with the Extremistan that you’ll find everywhere, and certainly in your business.

No, you can risk-manage your business to the hilt, or even make it extremely brittle, anti-resilient, by totalitarian bureaucracy that leaves no human breathing space but switches to a full 100% bot-run enterprise, DAO-style ops (hence will fail with complete certainty when interacting with humans like, e.g., your clients),
because completely risk-managed stuff still weighs costs, so is imperfect — or isn’t…
And for the imperfection of fully-reactive quod non-‘security’, see the above and many of my previous posts…

So either way, things will happen that you didn’t order. Estimates run from 50-50 (where you have zero clue about which 50 you do control) to 90%, 95%, 99% not-your-call shots. The latter category since your brain is not wired [link: huh] to deal with more than 10% ‘free will’, and the rest is, as scientifically determined, reactive to the environment, however clever and deep-minded you think yourself to be (the more the latter, the less you are… if you have to say you are wise, you aren’t). Which makes the majority of what happens to you and your organisation accidental and from the outside. Which is, by the very definition, not you being ‘in control’.

Despite all the ‘GRC’ liars that should be called out for that quality.

[Edited after scheduling, to add: In this here piece, there are very, very useful pointers to break away from the dismal Type I and II In Control (quod non) Statements of all shades. Should be studied, and seen to refer back to the foundations of auditing ..!]

Oh, and:
DSC_1033[Designed to belittle humans — failing since they’re still there…; DC]

Fake-fake-fakes

[Edited to add: this I wrote a month+ ago, and it has of course since been ‘repeated’ over and over, e.g., through the poor Swedes not knowing what hit them…]

Not quite like this, but troublesome: The information explosion brought to us by the Internet, has finally come to the brink of its feared state of drowning-till-death the Truth, under Fake. Where nothing, literally nothing, can be believed anymore, nor can anything be refuted as fake once the humans’ limited context view cannot discard everything that seems legit or on the border of it, for lack of irrefutable, foundational truths that would raise the plausibility to sufficient levels.
On the contrary, the logical-positivists’ traps / blind spots would kick in. We get unprovable ‘double secrets’ and ditto ‘double falsehoods’ (“We didn’t hack the elections”) — so finally, we reach Socrates’ ideal ..!!

The Elysium at last, like:
DSC_0026[Now that’s E Pluribus Unum; Noto — oh no, it’s reluctantly-unified DunEdin…]

Secret Health

The year hasn’t started in earnest, and already we’re swamped in news about the over-easy hackability in and/or frequent leakage of medical data from the Care sector — haha, we aren’t swamped but rather quite ignore the news, because either one cannot do anything about it (but complain) or it’s too embarrassing…
Also, it turns out that people are more reluctant to share medical data (info) with their practitioner(s) when they are less sure about the secrecy of it; the very reason there’s such a thing as a medical professional code of secrecy (doctor/patient confidentiality) — and now, leading to worse care (quality, cost) than if proper secrecy weren’t in doubt.

So, either you, medical/care expert, have the professional pride to provide the best medical care and hence implement proper infosec measures (from ISMS to crypto details) and chastise your managerial staff for not doing it properly — or you try to wing it, don’t secure properly, hence don’t provide maximal care, and should be banned.

And:

[A good health figure; Barça]

They're Security Scrum!

Yet another trend: the recoil from Agile practices, since ‘uncontrollable’ isn’t what you’d want from your IS infrastructure..?

Where scrum and other development methods using emblematic sprints, by that very idea, have to lose all the ballast…
But would you run a marathon-length Chinese Whispers game (Telephone, if you’re from the US, unable to go along with the rest)…? Because that’s what you get, quality-wise, if you deploy sprinters for the whole 42k195m — no use for miles either — and (wide-sense) security is one major part of it.

Again, a baby-with-the-bath-water thing, here. Moreover, since even with large Waterfall development — which should’ve been V-shaped for the right half of it ..! — security (wide-sense, incl. proper usability, documentation for maintainability, et al.) was too much of an afterthought. When taken seriously, by the way, it proved to be much less of a nuisance, whether during the project or during implementation/roll-out or during the production phases, than it was taken for.

So, the question is not how fast ‘we’ can dump Security when adopting something agile, nor how to ‘split up’ the CISO’s thinking and acting and standards over App Devt and DevOps, but how to get suitable Sec into DevOps-or-whatever. The only road that’s not a dead end sounds like “Sorry Dave, I can’t let you do that” [I know]. A sort of thick-concrete sandbox — creating tons of overhead in sprints, and when later exposed to the Real World of production. Retrograde.
Your start-up hackathons just don’t cut it in big-boy business..? Better ideas?

Plus:
20160408_133824
[Where all you wanted was one big coat hanger… Beurs van Berlage]

Four Cyber

Where a Big 4 consultancy (or rather, a hang-on-by-the-teeth-fourth intensive-people(?)-farming accountancy-wannabe-advisory) now has a (unit) label “Cyber”. Where #ditchcyber (here) hardly helps… ‘Cyber’ is like being a lady; when you say it of yourself, you aren’t. This qualifier as head of a Linked List — you didn’t need the link to get the wink or did you? — a very long list it is. How desperate can one be to maintain extortionist fee levels and labour practices, to have to label yourself so empty-barreled ..?

I’ll halt now, and:
DSC_0700 (2)[Or, like a dolphin, not laggard]

Maverisk / Étoiles du Nord