IoTA multiplication; old style, is the new new

Apart from the previously established focus on Integrity, in particular on having Data-plane integrity (from which actual Information could be derived) through integrity in the Control plane, there's of course a need for other aspects as well, like Confidentiality, Availability, and Effectiveness and Efficiency.
[Oh that previous Integrity signal is here.]
Though the latter two we'll diss straight away as most secondary, at best, along with the even further irrelevant Auditability et al. These take a devastatingly distant back seat to ensuring the first three objectives are met; not to interfere even by mention.

Intermission:
DSCN5611
[Onto itself, good enough; Papendorp]

And, we'll square the three foremost information/data/systems/elements quality aspects with the great many objects one can outline in the IoT sphere. This leads to very interesting new combinations of various corners and angles of objects and aspects, at all sorts of abstraction levels – multiple, and not necessarily constant, consistent or complete when studying for certain overall audit objectives.

And, let's not forget, we do have OSSTMM for the more traditional objects, and may (have to) enhance that to incorporate the 'new', more technically oriented objects of sensors and actuators (including a need to understand and probe them, e.g., at the AD/DA-converter and pure-signal levels).
But we also need to incorporate the vast blue (rather, muddy grey) ocean of People, both as controls and as elements to be controlled.
Only then can we have a full systems view of the phenomena to be controlled and to be audited.

But we dreadnought and fear not; for we have a number of building bricks, even if at Lego size. Like the security suites springing up and spreading: Splunk et al., and all of the proprietary hardware-vendor types.

To Be Continued in extenso, including these vendors' security-management-first approach, which helps a lot through logging/reporting availability and some security control, and including the generic risk management approach that is at the limit of what common auditors' associations seem to have as vanguard developments, in lieu of actual understanding of the vast terrain to cover.

All in all, together in order

Ah. Actually, I needed a well-ordered list of the subset of my posts re All Against All. Because searches don’t pony up the rightly ordered results, herewith for future reference:

So… Done. For you:
DSCN4588
[Well-calculated dare, Madrid]

SwDIoT

Recently, there was yet another exepelainificationing of ‘software defined networking’, along the lines of separation of the control plane from the data/content plane (here).
Which ties into a core problem, with IoT the subject of this post: Integrity.
Yes, confidentiality may be an issue, but singular raw data points themselves often are too granular to actually steal any information from. And Availability is of course also of the essence, especially in 'critical' systems. But the main point of concern is with Integrity, of the system in a wider sense, but also in the smallest sense.

Take Stux … Integrity breach as the vector space, spanned along a great number of dimensions.
Objective: Degradation of the information value; increasing the variance to a level where noise overwhelms the R² of the signal (however far from log₂(n), big if you understand), through degradation of the (well, original) software integrity.
Path: Introduction of intentionally-faulty (?) software. With use of, probably, penny-wise correct IAM, being pound-foolish at the medium level. I mean, the level where human and other actors are unwitting accomplices in planting da bomb. That's what you get from simpleton top-down compliance with just about every thinkable rule: to do any work, underlings will devise ways to circumvent them. And adversaries will find, see, avenues (that wide) for riding on the backs of the faithfully compliant to still achieve the objective.
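To make that variance-raising objective concrete: a minimal sketch in Python/NumPy, where the linear 'process' and all numbers are purely illustrative assumptions, not any real system. Fit the reported feed the way an operator would, and watch the R² of the recovered signal collapse as the injected noise grows.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 200)
signal = 2.0 * x + 1.0  # the 'true' process the operator believes is reported

for noise_sd in (0.5, 2.0, 8.0, 32.0):
    y = signal + rng.normal(0.0, noise_sd, x.size)  # integrity-degraded feed
    slope, intercept = np.polyfit(x, y, 1)          # operator's best linear fit
    residual = y - (slope * x + intercept)
    r2 = 1.0 - residual.var() / y.var()             # share of variance still 'signal'
    print(f"noise sd {noise_sd:5.1f} -> R^2 {r2:.3f}")
```

Note that not a single data point is stolen or even visibly wrong; the information value simply drowns.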

But OK, back to … separating the control plane from the data plane. Bringing a shift in efforts to disrupt (no, not of the mehhhh!! destructive, economy-impoverishing kind, but of the actual signal degradation kind) from just about any attack plane down to, mostly, the control plane. That may seem like an improvement, de-messing the picture. But it also means shifting from a general, overall view of vulnerabilities to the core; a core that is less tested and understood, and harder to monitor and correct, than previously. Or is it ..?

So, if we take this Software Defined to IoT, we'll have to be careful, very careful. But yes, IoT is constructed that way … With signals to actuators that will result in altered sensor data feedback. Know the actuator signals, and the actuator-to-sensor formulas (!), and you're good to go towards full control, with good or bad (take-over) intent. Know either (or how to get into the sensor data stream), and at least you can destroy integrity and hence reliability. [DoS-blowing the signal away, in total blockade or grey-noise wipe-out, and your cover is blown as well. That's single-shot or semi; you may want to have full-auto with the best silencer available…]
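As a toy sketch of that point, assuming a made-up one-line plant model in place of any real actuator-to-sensor formula: an attacker who knows the formula can drive the actuator at will, while replaying to the operator exactly the sensor values the model predicts for the commands the operator thinks were sent.

```python
def plant(level, valve):
    # Hypothetical actuator-to-sensor 'formula': next tank level from the
    # current level and the valve opening (0..1). Pure illustration.
    return 0.8 * level + 20.0 * valve

setpoint = 50.0
level_real = level_reported = 50.0  # start in equilibrium

for t in range(8):
    valve_cmd = 0.5 + 0.02 * (setpoint - level_reported)  # operator's naive controller
    level_real = plant(level_real, 1.0)                # attacker forces the valve wide open
    level_reported = plant(level_reported, valve_cmd)  # ...and replays the model-predicted value
    print(f"t={t}: operator sees {level_reported:5.1f}, reality {level_real:6.1f}")
```

The reported value sits rock-steady at the setpoint while reality runs away. Knowing only one of the two (the signals, or a way into the data stream) buys the integrity-destroying single shot; knowing both buys the silencer.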

Hm, the above came from the tinkering with the grand IoTAuditing framework promised earlier… to turn this all into a risk-managed approach. Well, for now I'll leave you with:
DSCN3214
[It has a glass floor up in there, you know. Blue Jays territory, ON – and yes, a very much sufficiently true and fair horizontal/vertical view picture, according to accountants]

Morozov’s no joke

Just a very few:

“The fear of appearing inauthentic, of being a fake, has propelled nearly as much technological innovation as pornography.”

“But Adorno does have a point: authentic things are not necessarily morally good, and morally good things are not necessarily authentic.”

“In this, the authenticity rhetoric of Facebook is strikingly similar to the public debates in 1950s America over whether uniformity (everyone living in mass society is essentially the same) was a greater sin than conformity (some people adopt ideas, habits, and beliefs only to get along). The latter, the conformists, were seen as phonies who chose to be someone else; the former, those who were uniform by design, were seen as the real phonies – as people who thought they were making choices and being their unique selves, when in fact they were anything but.”

“Worrying about usability – the chief concern of many designers today – is like counting calories on the sinking Titanic.”

“The goal of privacy is not to protect some stable self from erosion but to create boundaries where this self can emerge, mutate, and stabilize.”
“Digital technology has greatly expanded the windows and doors of our own little rooms for self-experimentation – but we are now at a point where those rooms are on the verge of turning into glass houses.”

“Given the complexity of the self, trying to reduce the privacy concept to a purely utilitarian framework is like steamrolling a statue to capture its essence in the simpler space of the two-dimensional plane.”

Oh how many more such insights are there, to Learn. And weep. For that:
DSCN5410
[Yes, Gettysburg battlefield. Ominously.]

Span (out) of Control

How is it that for a long time we were used to managerial spans of control being in the 5‑to‑10, optimal (sic) 8 range, whereas what we saw in the past couple of decades so often were spans of control in the 2‑3 range ..? [Duh, exceptions and successful organizations aside…]
Because I came across some post on Forbes with an early, simple statement that a span of control of 10 would not only be normal but outdated as well, as the span could be at 30. Well, I doubt the latter, as it would conflict with a lower ‘Dunbar’ number which indeed is about 8, with ramifications for informal control as outlined in Bruce Schneier's Liars and Outliers.

Oh yes, now it springs to mind: the 8 figure was developed by the military, the ultimate built‑for‑survival organization, through the ages, to be the optimal span of control, aligning with the apparently natural Dunbar number. It was then taken over into business for its apparent effectiveness, and its apparently attractive all‑business‑is‑war metaphor – where the attraction is there only for those not really exposed to the gore of war, I guess. From which the fearful, the clueless managers accidentally pushed into management roles, tried to do what they thought was the gist of managerial literature: fight all uncertainty that might threaten your career. Through micromanagement, through requiring all data to be reported. Not to use it wisely, but just to pretend to be in control. And, if you don't understand, just do with fewer direct reports to shrink what's coming at you.

[Intermission:
DSCN2260
That man on the pole was quite a good leader it seems.]

But whether it's 8, 10 or 30, the optimal span of control clearly is larger than today's common practice. Which has implications:

  • Too low a number will inevitably lead managers to seek to have something to do. Busywork, in their role leading to excessive micromanagement (yes, pleonasm, but on purpose) and/or excessive meeting behavior, in particular with their underlings and/or likewise trapped colleagues. A bit like an AA group. Which burdens the underlings by taking time away from actual content work, and by creating the need for action-item lists and reporting blurb. Thus losing colleagues' time on all sorts of, what actually is, whining.
  • Too low a number, and the micromanagement leads to extreme (far overextended) controls burdens on the ones who'd actually produce anything of value. Instead of the (net) negative value, externalities and all, that managers commonly produce. This burdening then leads to 'process', 'procedures' etc., to 'standardize' (otherwise, understanding of actual content would be required; the horror, for petty middle managers!). Thus hollowing out even further the value of any work done. As in the abovementioned Forbes article: the Peter Principle will reign.
  • Too low a number, and the standardization will drive out the creativity (required in customer service and in product/service design, production and delivery). Which is ever more essential than before, to counter the ever faster changing environment. As I typed this, this article arrived…
  • Colleagues (or rather, underling staff) will be demotivated by having less and less time to do the work they're assessed and valued for by the organization, while having to produce ever more TPS reports for overhead purposes. This will bring down performance, both directly and indirectly. And by the way, all the controls will also suffer, as staff demotivation leads to less effectiveness or even full evasion. With equally rising risks for performance.

An extension to ten, or even twenty or thirty, would very well be possible in these times of massively improved information and reporting options. The latter aligns quite well with the increased need to at least have a little oversight left.

Though that would require much more empowerment (again) of your employees. They are the knowledge workers par excellence, right? If you don't have those yet, you may have an entirely different but even more pressing problem… Have the real knowledge workers left, don't they want to join anymore, or have you unconsciously but actively numbed them into sullen drones? Whatever the cause, the real knowledge workers know better than their manager how to do their work – that's why they do it and not the manager. If it were the other way around, it would be beneficial for the organization to have the manager do the grunt work. And no, micromanagement is something else.

The empowerment should go hand in hand with different styles of leadership, direction and oversight. Not based on 'objectives', made 'smart' (not), for Pete's sake, but based on truly smart KPIs that leave employees room to do their business in ways they see best fit. Not rails, but guiding rails; no yard-by-yard route directions but some A and B and a road (?) map. Or, well, a good navi system – or would you, the manager, want to pull the steering wheel for every correction? Reporting can similarly be made smarter anyway – or is that wishful thinking? In times of smart analysis, with big data or not, this should be feasible.

And keeping just a little oversight is enough already for a good manager, and quite different from total(itarian) ‘control’. The former is needed to lead and direct, the latter is paralysis through total suppression and denial of a capricious reality.

And oh, for the record, some form of hierarchy will always be needed, even in fully horizontal network organizations with 360° feedback and what have you. Just re-read Flat Army by Dan Pontefract; without collapsing into a free‑for‑all hippie group, there are quite many ways to manage more humanely. There just are so many ways to flex- and telework from home or anywhere, and a mature manager and her/his organization will pay for performance, not favour mere attendance over performance.

So yes, we all need to focus on upping the number. To counter stalemates. To counter bureaucracy heavens. To regain flexibility. Still, still, this could only work IF, very very big if, 'managers' (not to address actual managers, whom I value enormously!) can loosen their frantic, fear‑of‑death‑like Totalitarian Control and compliance attitude. Which I doubt. Maybe we should start to let such managers pursue other careers elsewhere – those that are specialists in sticking to their own chair by sacrificing all capacity and performance in times of cost savings. Then the ones that are not good at that, because they really try to achieve something for their organization, wouldn't have to be let go by the numbers, and some positive balance of managerial capabilities would be restored.

But then, organizations relying on the control-freak managers (whether already, or after these will have crowded out the actual managers via the Peter Principle and acolyte behavior) will lose out to the upstarts that do keep the mold out. There's hope.

You(‘)r(e) right(s)

Well, whatever the percentages in this: Voltaire was right. Even if there were just one citizen who'd think otherwise, all others should (also) defend his (her?) right to be wrong, to the death.

As it’s already five o’clock (here), have a nice weekend, with:
DSCN0823
[Not quite St. Pat's Day material, still quite the equivalent of the Green … Frankenmuth, MI]

Total priv’stalking

Errrm, would anyone have pointers to literature (of the serious kind, not the NSFW kind, you understand) regarding comparison of real physical-world stalking versus online total data collection ..?
No, not as some rant against TLAs, but rather against commercial enterprise that not only collects, but actively circles around you, wherever you go. Giving you the creeps.

Because the psychological first response is so similar, can it be that the secondary behavioural response/adaptation is similar too, in self-censorship and distortion of actual free movement around … the web, and of free choice of information ..?

And also, whether current anti-stalking laws of the physical world would actually work, or need strengthening anyway, and/or would/could work or need translation/extension to cover liberty of movement and privacy-as-the-right-to-be-left-alone. I.e., privacy as the right to not be tracked; privacy as the right to anonymity everywhere but in the very, very select, very place- and time(!!)-restricted cases where one's personal info is actually required. Privacy as in: companies might have the right to have their own information, but not the right to collect information of or on me (on Being or Behaviour), as that is in the end always information produced by me, through being or behaving. The (European) principle still is that copyright can be granted, transferred, shared out – in common parlance, by payment for use (or getting paid for transferring the right to collect such payments) – i.e., economically, but not legally; the actual ownership of the copyright remains with the author!

See why I excluded the TLAs ..? They may collect all they would want, but not use it unless on suspicion, after normal-legal, specific, a priori proof; that's their job. No, officially (…) they may not step outside their confined remit box, but they do have a box to work within.
Now, back to the question: please reply with something other than the purely legal mumbo-jumbo that not even peers could truly understand, but just babble along with.

In return, in advance:
DSCN0535
[Foggy (eyes), since from the olden days, probably never to be seen again; Belém]

Non-Dunbarian compliance

Just a note that the world is in great need of more on Dunbar's numbers, as an antidote to totalitarian-bureaucratic compliance efforts.

go-on-gif
Nah, wanted to, but have more urgent issues to discuss. E.g., tomorrow. See you then!

Partially compliant: as a solution

I was recently informed by a respected colleague in a peer-to-peer discussion (see; they’re useful!) about a development of his in the Compliance arena.
About not having just one single Statement of Compliance, which all too often sweeps deficiencies under the rug for the sake of agreement everywhere. But having two: one on (first-line) management awareness of deficiencies, as things to actively manage and actively discuss with the second and third lines, and one on abstract, 'anonymous', no-blame control effectiveness.

So, when the Three Lines of Defense actually work (yes, I've ranted frequently on this blog against that, as the simpleton approach inherently can't work!), first-line management can provide their own list of control deficiencies, and the second and third lines can only confirm, and not add much if anything at all. Then the first line is in control (all is well and/or known-and-WIP) over their own stuff. Hence, awareness ✓, effectiveness X. When the first line doesn't have much but the 2nd/3rd lines add quite some (other) things, awareness is X and effectiveness is undetermined. Only when the first line doesn't have much and the second/third lines cannot add anything of substance, will awareness be ✓ and control effectiveness be ✓.
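For what it's worth, that decision logic fits in a tiny truth table; a sketch in Python, where the labels and the fourth combination (both the first line and the 2nd/3rd lines reporting plenty, which the approach above doesn't spell out) are my assumptions:

```python
def two_statements(first_line_lists_deficiencies, lines_2_3_add_findings):
    # Toy rendering of the dual Statement of Compliance idea: one verdict
    # on first-line awareness, one on control effectiveness.
    if first_line_lists_deficiencies and not lines_2_3_add_findings:
        return "awareness: OK", "effectiveness: not OK (known, managed)"
    if not first_line_lists_deficiencies and lines_2_3_add_findings:
        return "awareness: not OK", "effectiveness: undetermined"
    if not first_line_lists_deficiencies and not lines_2_3_add_findings:
        return "awareness: OK", "effectiveness: OK"
    # Both list plenty: not covered in the post; assumed worst of both.
    return "awareness: partial at best", "effectiveness: not OK"

print(two_statements(True, False))  # the first line knows its own deficiencies
```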

Either way, the dual statement sounds like a far better, and in practice far more palatable, approach than just one messy, jumbled-together, undetermined opinion. For which I leave you with:
DSC_0030
[The bus buck stops here at this chaotic (?) shelter; Aachen. In Control statement: similar]

Effectiveness or Compliance

In assessing ‘compliance’ of your … [fill in the blanks and then colour the picture], do you actually go for correct set-up and design, and operating effectiveness ..?
If so, you’d be ready when the design is suitable.

Though a great many of you would still consider operating effectiveness proven by repeated measurement, establishing that everything runs smoothly according to procedures, including the capture and re-alignment of exceptions.

But you would be wrong. That’s just verification in the weakest form.

Actual operating effectiveness would have been dictated (meant literally, not ‘literally’-figuratively) by an appropriate design. The design should be such that there is no way in which, e.g., any transaction could escape procedures, ever.

Which would require very careful study of the procedures, being the result of design. Which would fail when the design wasn't aimed at totalitarian control. Which is the case: the design almost always is focused on obtaining the most basic functionalities of a system – that includes catering for some exceptions, the bulk of the foreseeable ones, at most – not capturing all and everything, as that would indeed be impossible ex ante. Hence, the inherent impossibility of total operating effectiveness. There are always unheard-of, thought-to-be-impossible exceptions at the lowest levels of detail. (Let alone in the infrastructure on which any system would have to run, at about all abstraction layers of 'system' that one can study.) And there are Class Breaks, and penny-wise-but-pound-foolish types of 'exceptions' at higher abstraction layers (all the way up to 'the CEO wishes this. He (sic) only has to wish it for it to be done already').
So, already in the design phase, you know you'll fail at Operating Effectiveness later, however perfect you think you're doing. And you delude yourself further if you think the design will be implemented perfectly. On the contrary: in the implementation, the very reality will have to be dealt with, where the nitty-gritty will derail your ideas, and something that is even a bit workable will be the most you can achieve. Always, ever.
Hence there too, you lose a lot of ‘perfection’.

Which may show in operations, or not. If you don't look carefully enough, you might arrive at a positive conclusion about somewhat-effective control operations. That has little to do with effective operations, by the way; the latter (client service) being greatly disturbed by your ….. (insert expletive describing subpar quality) controls.
If you look carefully enough … you don't even have to; just point out where controls didn't operate effectively, and qualify that as total SNAFU.

Oh yes, in theory (contrasting practice) it just might work, by having all sorts of perfectly stacked control loops on top of control loops (as detailed here), but these have their leakage and imperfections as well, and would have to be infinitely stacked to achieve anything approaching closure. So: nice try, but no cigar.
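To put a number on the 'infinitely stacked' point, under the simplifying (and generous) assumption that each extra loop independently catches a fraction $p < 1$ of whatever deviations leak through the loops below it, the residual after $n$ loops is:

$$ r_n = r_0\,(1-p)^n \;>\; 0 \quad \text{for every finite } n, \qquad \lim_{n\to\infty} r_n = 0 $$

Closure arrives only in the limit; any finite stack leaves leakage, and real loops aren't independent anyway, so the effective $p$ degrades on top of that.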

Conclusion:
Set-up/Design and Implementation are everything; Operating Effectiveness follows: OE fails, logically.
ISAx Statements Type I or II: logically, inherently deficient, hence superfluous money- and paper-waste.
Revert to Understanding, and opining on your guts. It takes guts, yes, as risky as that is, but pretension of logical reasoning and/or sufficiently extensive proof-of-the-pudding auditing (on the paper-based pudding …) cannot but fail: non-compliance found, negative rating; no non-compliance found, failed at the task.

I’m done now. For you:
DSC_0016
[Just a side corridor, neatly controlled (for!) decoration]

Maverisk / Étoiles du Nord