Being Creative with Trust in Identities

… seems impossible to get right. Because, surely, Identities that can be Trusted are so stable that all Creativity is impossible ..?

What does society-at-large want? If you think about the bandwidth above: Aristotle’s true middle..! But would you know where that is, in this? Would it be sufficiently on the Fixed side to be usable as a trustworthy Identity? Or would it be a matter of good-enough reliability, for the task at hand?
Possibly we should pair Activity-Based Access Control with this Task-Sufficient Identification ..?
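To make that a bit more tangible: a minimal sketch in Python, in which the assurance levels, thresholds and names are entirely my own assumptions, purely to illustrate the pairing (no standard’s definitions implied):

```python
# Task-Sufficient Identification paired with Activity-Based Access Control:
# don't ask "who are you, exactly?", ask "is this identification reliable
# enough for this particular task?". All values below are made up.

from dataclasses import dataclass
from enum import IntEnum


class Assurance(IntEnum):
    """How firmly the claimed identity has been established (hypothetical scale)."""
    SELF_ASSERTED = 1   # e.g., just an e-mail address
    VERIFIED = 2        # e.g., verified phone number or document check
    STRONG = 3          # e.g., multi-factor plus in-person proofing


@dataclass
class Activity:
    name: str
    required_assurance: Assurance  # the good-enough bar for the task at hand


def may_perform(activity: Activity, presented: Assurance) -> bool:
    """Grant when the identification is sufficient for this task; ask no more."""
    return presented >= activity.required_assurance


browse = Activity("read public catalogue", Assurance.SELF_ASSERTED)
payout = Activity("approve payment batch", Assurance.STRONG)

print(may_perform(browse, Assurance.SELF_ASSERTED))  # True: good enough for the task
print(may_perform(payout, Assurance.VERIFIED))       # False: not sufficient for this task
```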

A lot on this will have to be developed further, I’d say, but this could be the beginning of a beautiful friendship.
Plus (skewed ‘horizon’-ID intentional…):
[All the ID theft may not get you here…; Amsterdam]

Quote by Book: John’son

Network: Any thing reticulated or decussated, at equal distances, with interstices between the intersections.
Dr. Samuel Johnson, Dictionary

No kidding. Only script kiddies of the worst kind don’t seem to get that. Though it has been around since 1755, as a definition that is. How prescient.

And:
[Mash; London (already some years ago, yes)]

Meta / Attrib-ShareAlike- … Commercial

For the following, one would best resort to …
Who are we kidding; are there still believers out there, apart from the truly stupid-to-beyond-dysfunctionality-capacity defenders, that metadata is something less bad than just privacy-sensitive data points outright? Well, <spoiler> it’s the other way around — as is exemplified in this here piece. From which I’ve blatantly copied:

  • They know you rang a phone sex line at 2:24 am and spoke for 18 minutes. But they don’t know what you talked about.
  • They know you called the suicide prevention hotline from the Golden Gate Bridge. But the topic of the call remains a secret.
  • They know you got an email from an HIV testing service, then called your doctor, then visited an HIV support group website in the same hour. But they don’t know what was in the email or what you talked about on the phone.
  • They know you received an email from a digital rights activist group with the subject line “52 hours left to stop SOPA” and then called your elected representative immediately after. But the content of those communications remains safe from government intrusion.
  • They know you called a gynecologist, spoke for a half hour, and then searched online for the local abortion clinic’s number later that day. But nobody knows what you spoke about.
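To drive the point home in code, a minimal sketch (the records, field names and categories are entirely made up for illustration) of how bare call metadata supports rather intimate inference, no content needed:

```python
# Bare metadata only: who called what category of number, when, for how long.
# No recording, no transcript, and still a sensitive profile falls out.

from datetime import datetime

# (caller, callee_category, timestamp, duration_minutes) -- all values invented
call_log = [
    ("alice", "hiv_testing_service", datetime(2017, 3, 1, 9, 10), 4),
    ("alice", "family_doctor",       datetime(2017, 3, 1, 9, 40), 12),
    ("alice", "hiv_support_group",   datetime(2017, 3, 1, 10, 5), 25),
]


def infer_sensitive_pattern(log, window_minutes=60):
    """Flag calls to related sensitive categories that cluster within one hour."""
    sensitive = {"hiv_testing_service", "family_doctor", "hiv_support_group"}
    hits = [ts for _, category, ts, _ in log if category in sensitive]
    if len(hits) >= 3 and (max(hits) - min(hits)).total_seconds() <= window_minutes * 60:
        return "pattern strongly suggests a medical situation; content never needed"
    return "no inference"


print(infer_sensitive_pattern(call_log))
```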

So blatantly I might as well add:

But then the Non element in there warps things. Nevertheless, I’ll use the example in my upcoming pres.

And I’ll leave you for now with:
[Full of info, too, innocuous that ain’t, but no invasion on you; Prague]

From bike design to security design

You recall my posts from a couple of days ago (various), and here, and have studied the underlying Dutch Granny Bike Theory (as here), while not being put off by the lack (?) of design when taking a concrete view here.
You may also recall discussions, forever returning for as long as security (control) design has existed, even when not (yet) as a separate subject, that users’ Desire Paths (exepelainifyed here) will inevitably have to be catered for, or one will find continual resistance until failure — with opposition from the Yes But Users Should Be Made Aware Of The Sensitivity Of Their Dealings With Commensurate (Linearly Appropriate) Security Hindrance side; things are hard for a reason, and one should make things as simple as possible but not simpler. [Yeah, I know that’s a reformulation of Ockham’s Razor for simpletons outside of science, having dropped both the scientific precision of O. and the restriction to science where it’s valid; and the second part is often lost by and on the biggest simpletons of all, short of politicians, who are in a league of their own.]

I feel there may be a world, a.k.a. a whole field of science, to be developed (sic) regarding this. Or at least, let’s drop the pretence of simplicity of cost/benefit calculations that are a long way on the very, very wrong side of but not simpler.
Anyone have pointers to some applicable science in this field?

Oh, and:
[Applicable to security design: “You understand it when you get it” © Johan Cruyff; Toronto]

4Q for quality assurance

To go beyond the usual, downtrodden ‘quality in assurance’ epitome of dullness, herewith something worth considering.
Which is about the assessment of controls, to establish their quality (‘qualifications’) on four successive characteristics [taking some liberties, and applying interpretation and stretching]; a small sketch follows the list:

  • Design. The usual suspect here. About how the control, or rather the set of them, should be able to function as a self-righting ship. Point being that you should+ (must?) evaluate the proposed / implemented set of controls to see whether self-righting mechanisms have been built in, with hopefully graceful degradation when not implemented (or maintained) correctly and fully — which should be visible in the design, or else. Or, you’re relying on a pipe dream.
  • Installation. Similar to implementation-the-old-way, having the CD in hand and loading / mounting it onto or into a ‘system’.
  • Operational. Specifies the conditions within which the control(s) is expected to operate, the procedural stuff ‘around’ the control.
  • Performance. Both in terms of defining the measuring sticks, and the actual metrics on performance attached to the control(s). Here, the elements of (to be established) sufficiency of monitoring and maintenance also come ’round the corner.
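As announced above, a minimal sketch of those four qualities as data, in Python; the 1-to-5 scale, the field names and the weakest-link roll-up are my own assumptions, not part of the original qualification scheme:

```python
# One control, four qualities; the overall qualification can't exceed the weakest of them.

from dataclasses import dataclass

QUALITIES = ("design", "installation", "operational", "performance")


@dataclass
class ControlAssessment:
    control_id: str
    design: int        # self-righting built in, graceful degradation visible in the design?
    installation: int  # actually put in place, 'CD mounted onto the system'?
    operational: int   # the procedural conditions around the control being met?
    performance: int   # measuring sticks defined and metrics actually collected?

    def weakest_quality(self) -> tuple:
        """Return (quality, score) for the weakest link in the chain of four."""
        scores = [(q, getattr(self, q)) for q in QUALITIES]
        return min(scores, key=lambda item: item[1])


c = ControlAssessment("change-management", design=4, installation=4, operational=2, performance=1)
print(c.weakest_quality())  # ('performance', 1): nicely designed, but nobody measures it
```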

Note: where there’s ‘control(s)’, I consider it obvious, going without saying (hence me here now writing instead of that), that all of the discussed applies to singleton controls as well as to sets of controls grouped towards achieving some (level of) control objective. All too often, the very hierarchy of controls is overlooked, or at best misconstrued to refer to organisational / procedural / technical sorts of divisions, whereas my view here allows for completely ad hoc hierarchies as well.
Note: I have taken some liberty in all of this. The Original piece centered around hardware / software, hence the Installation part featuring so explicitly. But, on the whole, things shouldn’t be different for any other type of control — or would they, in which case you miss the point.

And the above shouldn’t just be done at risk assessment time, in this case meaning the risk assessment at which one establishes the efficacy and effectiveness of current controls, to get from gross to net, from inherent to residual risks, on all one can identify in the audit universe / risk universe, at various levels of detail. On the contrary, auditors in particular should, at the head of any audit, do the above evaluation within the scope of the audit, and establish the four qualities. Indeed focusing on Maturity, Competence, and Testing to establish that — though maybe Competence (not only the competence of the administrator carrying out the control, but far more importantly, the competence of the control to keep the risk in check) is something just that bit more crucial in the Design phase, with Maturity slightly outweighing the others in Installation and Operational, and Testing of course focusing on the Operational and Performance sides of things.

Intermission: The Dutch have the SIVA method for criteria design — which may have some bearing on the structure of controls along the above.

Now, after possibly having gotten into a jumble of elements above, a closing remark would be: Wouldn’t it be possible to build better, more focused and stakeholder-aligned, assurance standards of the ISAE3402 kind ..? Where Type I and II mix up the above but clients may need only … well, hopefully, only the full picture.
But the Dutch (them again) can at once improve their hazy, inconsistent interpretation of Design, Existence, and Effectiveness of control(s).
With Design often, mistakenly (very much so, yes) but still, taken to mean whether there’s some design / overall structure of the control set, some top-down detailing structure and a bit of consistency, but with the self-righting part left to the overall blunder-application of PDCA throughout…;
Existence being the actual control having been written out, or more rarely whether the control is found in place when the auditor comes ’round;
Effectiveness… — hard to believe but still almost always clenched-teeth confirmed — being ‘repeatedly established to Exist’, e.g., at surprise revisits. Complaints that Effectiveness is utterly determined by Design fall on stone-deaf ears, drowned out by the overshouting of mortal impostor-syndrome fears.

Back to the subject: can four separate opinions be generated on the above four qualities ..? Would some stakeholder benefit, and in what way? Should an audit be halted when, at some stage of the four, the audit opinion is less than very Satisfactory — i.e., when things go downhill when moving from ideals and plans to nitty-gritty practice — or should the scope of the audit be adapted, narrowed down on the fly, so the end opinion of In Control applies only to the subset of scope where such an opinion is justified?
But a lot still needs to be figured out. E.g., suppose (really? the following is hard fact on oh so many occasions) change management is so-so or leaky at best; would it be useful to still look at systems integrity?
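For the halt-or-narrow-the-scope question, a minimal sketch of what that gating could look like; the threshold, the ratings and the wording are my assumptions, nothing from any standard:

```python
# Walk the four qualities in order; stop, or shrink the opinion's scope,
# as soon as one stage drops below the bar.

STAGES = ["design", "installation", "operational", "performance"]


def staged_opinion(ratings, bar=4, halt=True):
    """Return the stages that made the bar, plus the resulting (sub-scope) opinion."""
    passed = []
    for stage in STAGES:
        if ratings.get(stage, 0) < bar:
            if halt:
                verdict = f"audit halted at '{stage}'"
            elif passed:
                verdict = f"opinion narrowed: positive up to and including '{passed[-1]}'"
            else:
                verdict = "no positive opinion possible"
            return passed, verdict
        passed.append(stage)
    return passed, "in control across all four qualities"


print(staged_opinion({"design": 5, "installation": 4, "operational": 3, "performance": 2}))
# (['design', 'installation'], "audit halted at 'operational'")
```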

Help, much? Plus:
DSCN4069[An optimal mix of complexity with clarity; Valencia]

One extra for Two AI tipping point(er)s

To add to the post below, of a month ago.
This here piece, on how AI software is now writing (better) AI software. Still in its infancy, but if you recall the Singularity praise (terroristic future), you see how fast this can get out of hand. Do you?

The old bits:

You may have misread that title.

It’s about tips, being pointers: two, to papers that give such a nice overview of the year ahead in AI-and-ethics (mostly) research. Like this and this. With, of course, subsequent linkage to much other useful stuff that you’d almost miss even if you paid attention.

Beware of quite a number of follow-up posts that will delve into all sorts of issues listed in the papers, and will quiz or puzzle you depending on whether you did pay attention or not. OK, you’ll be puzzled, right?

And:
[Self-learned AI question could be: “Why?”, but to be honest, and demonstrating some issues, that’s completely beside the point; Toronto]

You Don’t Call The Shots

I.E., You Are Not In Control!

This, as a consequence of the ‘In Control’ definition. Where the controlling and ‘steering’ (what Steering Committees are about, if properly functioning … ) are the same.
But as explained previously, such steering doesn’t happen (is impossible) already in the complexity of a Mediocristan world, let alone in the mix-in (to say the least) with Extremistan that you’ll find everywhere, and certainly in your business.

Now, you can risk-manage your business to the hilt, or even make it extremely brittle, anti-resilient, by totalitarian bureaucracy that leaves no human breathing space but switches to a full 100% bot-run enterprise, DAO-style ops (hence it will fail with complete certainty when interacting with humans, e.g., your clients),
because completely risk-managed stuff still weighs costs so is imperfect, or isn’t…
And for the imperfection of fully-reactive quod non ‘security’, see the above and many of my previous posts…

So either way, things will happen that you didn’t order. Estimates run from 50-50 (where you have zero clue about which 50 you do control) to 90%, 95%, 99% not-your-call shots. The latter category, since your brain is not wired [link: huh] to deal with more than 10% ‘free will’ and the rest is, as scientifically determined, reactive to the environment, however clever and deep-minded you think yourself to be (the more the latter, the less you are … if you have to say you are wise, you aren’t). Which makes the majority of what happens to you and your organisation accidental and from the outside. Which is, by the very definition, not you being ‘in control’.

Despite all the ‘GRC’ liars that should be called out for that quality.

[Edited after scheduling, to add: In this here piece, there are very, very useful pointers to break away from the dismal Type I and II In Control (quod non) Statements of all shades. Should be studied, and seen to refer back to the foundations of auditing ..!]

Oh, and:
[Designed to belittle humans — failing since they’re still there…; DC]

Automobiles, (trains,) Planes

What a disaster it would be if all those (self-driving, or augmented-driving as they already are today) cars could be taken over by some madman or, unrelatedly, hacker … One could remotely steer a car off the road! One could remotely steer a whole bunch of cars within some area / country (?) off the road in a broadcast … Having pre-emptively disabled manual override, of course. [Though, as noted before, the human ability to take over would deteriorate very quickly anyway, as it would no longer need to be seriously trained/experienced.]

Yes, that’s bad. How about this same idea, but applied to current-day planes ..? Where about all is automated, and users get more and more access, hence control (think that one through; qua nothing’s 100% secure), to supposedly limited zone(s) of plane networks, e.g., the on-board wifi. The known-to-be-stellar-secure wifi.
Of course, this would be suicide — or an airport-proximity (from just outside the fence) runway-DoS …; but not all seem to care about the sacrifice… on the contrary. And don’t come with the argument that one would have to know the systems to break in / run amok. Some had gone through the effort of a full pilot’s training, right? And here, one can be a passenger and do recce from business class, and/or deliver and C&C from there.

I love my old-style car / driving … and:
[Warped, but quite safe from hacking… Somewhere upstate WI]

Ah, security rules — not for Us

When the Last Mile in infosec is convincing the Board to stick to ‘their’ own rules and not think themselves above them, how would we want to pull this off ..?
Where, so often, they complain that sticking to the rules is too complex or cumbersome for them — for no extra credit, reflect on their capacities to be in their position to Lead and Show — whilst forgetting their underlings have to deal with it anyway, possibly being more capable, yes, but not, as claimed, dealing with less sensitive information …
Where their reaction, for themselves, is that they Have to carry on, counter to sane advice and rules, with unsafe behaviour, often in particular when dealing with the most sensitive stuff; either not recognising it as such, or playing hardball in downplaying the sensitivity and/or their attractiveness as targets — out of some form of cognitive dissonance, and often contrary to their lightly-to-grossly inflated self-worth estimates, respectively.

Where, also, we see con-zultands playing up their self-importance and self-assigned capabilities, as per this. Recognisable, all too recognisable [been there, done that, didn’t even get the T-shirt; ed.].
And realising that all of this seems to work… reminds me of what Thomas Paine can still bring to bear on it, which is not good. Not at all. Though the advisor types may co-opt and exploit the courtiers’ methods (hey, how hard have you studied these ..?) without being caught in the courtiers’ ‘regulatory capture’ error, and maintain a bedrock of sanity until My Precious is had; is that the only viable road?

Or would you have something else? No, not the plain, head-on address that is so sure to fail, to fall flat on your face before it’s out of the starting blocks; if you don’t see that, you may very well be too inexperienced to have a clue…
But seriously, folks, what would you have ..?

Oh, and:
[When the castle goes down, all go down but the upper class (sic) has (golden) parachutes so why would they care? Bouvigne Breda]

The ransom monster

Now that the ‘No way, José’ solutions against ransomware [regular back-ups, virtualisation of servers, and tight intrusion controls et al.] have become so widely known, and ransomware has evolved to be more of the APT kind (incubating for up to six months before striking — undoing your back-up strategy; a quick sketch of that bit follows below the list), a new look at the root cause of the harassment:

Ransomware is a Monster. Being a thing that refuses to fit a single category for neat classification (sociology/science definition/term).

Which may seem odd, but consider:

  • It (?) uses Confidentiality-sloppiness to enter;
  • It undoes Integrity;
  • Its payload aims at destruction of Availability, both in the Immediate and the Reasonably-timely kinds.
  • [Bonus: It doesn’t care about (your) morality but strikes even (?) at hospitals et al.]
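As promised, back to that parenthetical about incubation undoing your back-up strategy: a rough back-of-the-envelope sketch, in which the numbers (the 180-day dwell time, the retention windows) are assumptions for illustration only:

```python
# A backup set only helps if it reaches back to before the infection started.

def backups_still_clean(dwell_days, retention_days):
    """True when the retention window reaches back past the (assumed) dwell time."""
    return retention_days > dwell_days


dwell = 180  # APT-style ransomware sitting quietly for roughly six months before striking
for retention in (30, 90, 365):
    state = "clean copies available" if backups_still_clean(dwell, retention) else "every backup already touched"
    print(f"retention {retention:3d} days: {state}")

# retention  30 days: every backup already touched
# retention  90 days: every backup already touched
# retention 365 days: clean copies available
```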

Capisce? … Oh, you wanted a Solution, or a Moral. Maybe something with Blended Defense / Step Up Your Game or so. Well, be my guest …, and:

[The ultimate Up Yours [, Planning Commission of Racine!], by of course the venerable Frank Lloyd Wright]

Maverisk / Étoiles du Nord