Weird infosec science

Who would have thought: total surveillance reaching right into the home, with no, or hardly any, backdoors even needing to be built in.
As explained here, and here in closer-to-humanly-readable form.

If such are the Tempest inroads, who needs the newest-of-highest-tech solutions? They will all succumb either to the trivial, complexity-induced, unavoidable sloppiness of implementation, or to circumvention in the above way…

Of course all of it is an atrocity in ethics but … I won’t be utterly negative about humanity’s future so I’ll stop now. With:
[Art imitating life; Stedelijk Amsterdam]

In the sphere of Language

Off the cuff: Sphären is closer to Finnegans Wake than it is to Nietzsche. Qua language, here and there.

That’s all. And don’t read FW in Dutch; the translabitt is flubby. The others in Dzjerman, qualitate qua.

[Emulatable qua plomb. The Dutch background, not so much.]

Quicky: For … eyes only ..?

Because all those high on Mr. Robot, looking alike but wannabe, deep down would still want to be like the central character in this (see the pic below), herewith:
For your eyes only WikiLeaks, can see me ~~through the night~~ in all privacy detail.
For your eyes only WikiLeaks, I never ~~need to~~ more can hide.
You can see so much in ev'rything about me, so much in me that's ~~new~~ all my browsing history ever.
I never felt, until I looked at you, it hurt me to death.

For your eyes only WikiLeaks, only for you the world to see.
You'll see what ~~no one else~~ every commercial extortion can see, and now I'm ~~breaking free~~ my privacy's lost totally.
For your eyes only WikiLeaks, only for you the world to see.
The love I know you need in me is now full graphics, 3D, the fantasy you've freed in me joke about in glee.
Only for you the world to see, only for you the world to see.

For your eyes only WikiLeaks, the ~~nights~~ servers are never cold.
You really know me, that's all ~~I need~~ about me there is to know.
~~Maybe~~ For sure I'm an open book because I know you're ~~mine~~ mine-ing my info right now,
But you won't need to read between the lines.

For your eyes only WikiLeaks, only for you the world to see.
You'll see what ~~no one else~~ every commercial extortion can see, and now I'm ~~breaking free~~ my privacy's lost totally.
For your eyes only WikiLeaks, only for you the world to see.
The ~~passions~~ privacy that ~~collide in~~ totally is no more for me, the wild ~~abandoned side~~ data of me.
Only for you the world to see, for your eyes only WikiLeaks and all.

Which indeed makes Number Four, in line with this, this and this.

Leaving you with…:

Reverse firing squad (LIBORgate et al.)

When designing cross-organisational processes, 'hence' including cross-organisational control structures, who will be accountable for looking after the controls in question?

Take LIBOR(gate). Someone(s) dreamt up a structure of 'self-regulation', which even the briefest, moronically superficial glance over history will tell you is bound to fail, and then forgot their own accountability for putting such a sure-to-fail thing in place.

'cause only accountability will force the 'taking' of responsibility and actually doing both parts of Trust But Verify.
No, the latter part was not taken up by the individual banks involved, because they had perfect (O)RM in place. That is, through perfectly sensible, justified, and objective achievement-perfecting arrangements, they focused on the risks to their own organisation only, as they were, and are, internal departments working for the optimisation of that organisation (taking into account the local Board's risk appetites and attitudes, risk estimations, budgets, cost/benefit analyses and what have we); nothing more, or they would, bordering on (?) the illegal, overstep their remit. Hence, inter-organisational conspiracy was not something any individual bank's (O)RM department, or manager, had to worry about, let alone actively flesh out as a potential risk.

The supra-organisational oversight required, at the level where the scheming took place (I mentioned 'supra' not for nothing..!), could technically, operationally, tactically and strategically only have been envisioned at that same supra level, by the regulator(s) at that level that instated the L-scheme. [Oh, I could add a ton here on how any 'lower' level cannot in any logical way have 'seen' the risk(s).] So, accountability and responsibility, both for setting up a scheme that was prone to the risk(s) in the first place and for not applying due control and oversight (from the strategic all the way down to the operational/technical levels!), was and still is with those regulator(s).

How, then, have they escaped being kicked out and imprisoned..? By claiming 'temporary' insanity, where Reality, in the L-process and elsewhere, is only a string of 'temporary' moments..? The lack of competence is appalling. But it drowns in the finger-pointing flying all around, except in the right directions.

Uch. One could get very depressed, and/or feel belligerent. Or see the mirror image of a firing squad. In the latter, a number of soldiers fire, with one rifle loaded with a blank, so that no one knows for certain that he fired a lethal round and none can be held individually accountable for the collective shooting of some villain. [If only, in some miracle world, it weren't the case that most victims are the Honourable, very much in an Aristotelian Virtue sense.] Now, we have 'one' regulator shooting a whole squad, and all of the squad are blamed …!?


[Just an MSc uni in Delft. Because science ..!]

Said, not enough

Here’s a trope worth repeating: Humans are / aren’t the weakest link in your InfoSec.

Are, because they are fickle, demotivated, unwilling, lazy, careless, (sometimes! but that suffices) inattentive, uninterested in InfoSec but interested in (apparently…) incompatible goals.

Are, because you make them a single point of failure, or the one link still vulnerable, and because, through their own actual, acute risk management and weighing, they decide to evade the behavioural limitations set by you with your myopic, non-business-objectives-aligned view of how the (totalitarian, dehumanised, inhumane) organisation should function.

Aren't, because the human mind (sometimes) picks up the slightest cues of deviation, and is inquisitive, resourceful, and flexible.

Aren't, because there are so many other equally weak or weaker links to take care of first. Taking care of the human factor may be the icing, but the cake had better be very good before perfecting the icing is worthwhile…!

Any other aspects ..? Feel free to add.

If you want to control 'all' of information security, humans should be taken out of the (your!) loop, and you should steer clear of theirs, to avoid accusations of interference with business-objectives achievement, or actually interfering without you noticing, since your viewpoint is so narrow.

That being said, how ’bout we all join hands and reach for the rainbow ..? Or so, relatively speaking. And:
[Where all the people are; old Reims opera (?)]

Right. Explain.

Well, well, there we were, having almost swallowed all of the new EU General Data Protection Regulation to the … hardly the letter, yet, and seeing that there's still much interpretation as to how the principles will play out, let alone over the long term (I mean, you're capable of discussing 10+ years ahead, aren't you, or take a walk on the wild side), and then there's this:

Late last week, though, academic researchers laid out some potentially exciting news when it comes to algorithmic transparency: citizens of EU member states might soon have a way to demand explanations of the decisions algorithms about them. … In a new paper, sexily titled “EU regulations on algorithmic decision-making and a ‘right to explanation,’” Bryce Goodman of the Oxford Internet Institute and Seth Flaxman at Oxford’s Department of Statistics explain how a couple of subsections of the new law, which govern computer programs making decisions on their own, could create this new right. … These sections of the GDPR do a couple of things: they ban decisions “based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her.” In other words, algorithms and other programs aren’t allowed to make negative decisions about people on their own.

The notice article is here; the original is tucked away here.
Including the serious, as yet very serious, caveats. But also offering glimpses of a better future (contra the title and some parts of the content of this). So, let’s all start the lobbies, there and elsewhere. And:
[The classical way to protect one’s independence and privvecy; Muiderslot]

DAUSA

Maybe we should just push for a swift implementation of the megasystem that will be the Digitally Autonomous USA. No more need for things like a 'POTUS', or 'Congress', or so. When we already have such fine quality of both, with renewal on the way to perfection (right?), and things like personal independence and privacy are a sham anyway, the alternative isn't even that crazy.

But then, there's a risk (really?): not all the world yet conforms to, or is within, the DAUSA remit. Though geographical mapping is starting to make less and less sense, there are hold-outs (hence: everywhere) that resist even when resistance is futile. The Galactic Empire hasn't convinced everyone to drop the Force irrationality and take the blue pill, though even Elon Musk is suspected of being an alien who warns us we're living in a mind fantasy [this, true, actually; the story, not so much the content].
But do you hope for a Sarah Connor ..? Irrationality again, paining yourself with such pipe dreams.

On the other hand … fearing the Big Boss seems to be a deep-brain psychology trick, sublimating the fear of large predators from times immemorial (in this case: apparently not) when 'we' (huh, maybe you, by the looks of your character and ethics) roamed the plains as hunter-gatherers. So if we drop the fear, we can 'live' happily ever after, once the perfect bureaucracy has been established. Which might be quite some time from now, you'd say, given the dismal idio…cracy of today's societal Control; or maybe soon, when ASI improves that in a blink, to 100.0% satisfaction. Tons of Kafka's Prozesse be damned.

Wrapping up, hence, with the always good advice to live fearlessly ..! 😉

[Some Door of Perception! (and entry); De Haar castle]

Big Data as a sin

Not just any sin, but the Original one: eating from the ultimate source of Knowledge that Big, Totalitarian, All-Thinkable Data is, in the ideal (quod non).
We WEIRDs (Western, Educated, Industrialised, Rich, Democratic people), a.k.a. Westerners, know where that leads. Forever we will toil on spurious correlations…


Miss(ed), almost ..?

One might easily have missed one of the most valuable annual reports… But whether you trust it (you can) or want to dismiss it (you can, for various reasons, like the management babble leading to a great many missed threats and threat levels, as here, always, of course, but still), it is an important item when you're in InfoSec, #ditchcyber notwithstanding! So you'd better study it.
Oh, yeah, this being the thing.

OK now. Plus:
[In “cyber”space (#ditchcyber once more), easily scaled. Haut Koenigsbourg again.]

Short Cross posting

… Not from anyone, not from anywhere. But crossing some book tips, and asking for comments.
Was reading the Good Book when realising that it, in conjunction with Bruce, could lead to some form of progress beyond the latter, where absolutist totalitarian panopticon control frameworks might seem the only way out. In particular when including this, on the Pikettyan / Elysium escape (or not) that serves only some, but not the serfs. And then add some Marc Goodman (nomen est omen, qua author, and content?) and you can see where Bruce may have missed the exponential crumbling of structures, and said escape might be made by others than the current(ly known) 1% … Not all Boys Who Cried Wolf will be wrong; on the contrary. Not Yet is very, very different from Never; rather, Soon, Baby, Soon.

Not rejoicing, and:
[Nope, not safe here (Haut Koenigsbourg) either.]

Maverisk / Étoiles du Nord