Commoditised exploits

What came first: the exploits, or the use of them ..?
Now that we have this kind of reasoning — aptly — there already was this, too.

So, … What now ..?

20161025_163321
[This being the state of (the best of … ;-[ ) Dutch design nowadays. Yes, the rest is worse, much worse. Law of the handicap of a head start; Zuid-As]

The legacy of TDoS

So, we have the first little probes of TDoS attacks (DoS-by-IoT). ‘Refrigereddon’.
As if that wasn’t predictable, very much predictable, and predicted.
[Edited to add: And analysed correctly, as here.]

Predicted it was. What now? Because if we don’t change course, we’ll end up with ever worse infra. Yes, security can be baked into new products — though that will make them somewhat more expensive, so they will not swamp the market — but it cannot be relied upon for backward compatibility in all the chains already out there, plus there are tons of legacy equipment in the field already (see: Healthcare, and: Utilities). Even when introducing new, fully securable stuff, we’re heading into a future where the Legacy issue will keep growing, and get much worse than it already is, before (necessarily) huge pressure will bring the problem down.

So… What to do ..? Well, at least get the fundamentals right, which so far we haven’t. Like this, and this, and this, and here, plus here (after the intermission), and there.

Would anyone have an idea how to get this right, starting today, and all-in all-out..?

Plus:
20150323_213334
[IRL art will Always trump online stuff… (?); at home]

All Your Data Are Belong To Us

Or, in the form of a question: When
a. One has to notify authorities of any (possible!) data leak, per law, in Europe and soon maybe also in the USofA,
b. Even BIOSes aren’t secure anymore, baked in from the word Go onwards,
Shouldn’t all organisations declare all of their infrastructure and hence all their data, possibly compromised ..?

Just asking.

[Edited to add this. Also relevant; this one deeper (?)]

And:
20141101_145950
[Calm, not private; Museumplein Amsterdam]

New Normal Hacking

Errm, anyone still surprised about (not) new news on data being stolen, ransomware striking, or democracy perverted, anywhere, all the time ..?

Got a bit worried, and wondered whether others would be too, about the current Mehh impression of everyone in the loop — about even political parties [now openly], voting machines, etc., getting cracked and data stolen. Combine that with, at last, at very last finally, the acknowledged hackability of voting machines that, against all sane arguments, are not tamper-resistant — and you arrive at the vulnerability and class-broken-ness of fundamental human values.

And still, there’s hardly more than Mehhh.

Would anyone have a reason not to worry …?

Ah:

Oh well, blue pills everywhere …? Plus:
20150109_135649
[Sorry to say lads and lassies of the Royal Academy of Arts, but the Gemeentemuseum did beat you, on this one]
[Edited to add: No, this post was written before the NIST October 7 ‘news’ came out that (‘end’?) users are tired of hack-warnings (security fatigue), if that were a thing. Which is also not quite what I meant above, which is worse…]

Are sw bugs taxing your resilience ..?

There would be a solution when we’d find a way to tax software makers for their product faults.

Because caveat emptor can work only if, unlike in softwareland, one can duly (!) examine the product before purchase; otherwise-and-anyway, culpability for hidden flaws remains with the seller/licensor.

Which is impossible with shrink-wrapped stuff — and the ‘license’ claim is ridiculous; moreover, the EULA is inconsistent, hence null and void: either the product is used under license, hence product quality liability remains with the producer/licensor, or the licensee is liable for damages the use of the product might cause — but then, invariably, ownership is with the purchaser.

The software maker can’t have their cake and eat it; that would run against basic legal principles. And claiming that one is always free not to use the product and choose another one (or none) — a Hobson’s Choice that brings about so many legal ramifications that even $AAPL’s pockets would never suffice — would invariably lead to oligopoly/cartel charges …!

Or, as this may easily be solved when taken as a societal problem just like environmental stuff like CO2 pollution (we all need electricity): Why not tax the software makers for their ‘pollution’ of the IS environment with bugs ..? (And prohibit the use of greenhouse gases like SQL injection weaknesses?)
Then, after post-write but before release, came this (Dutch) news that casual carelessness is a headache for government(s)… A bit like driving rules with no enforcement, maybe ..?
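The ‘greenhouse gas’ class named above — SQL injection — is concrete enough to sketch. A minimal illustration (hypothetical table and data, using Python’s built-in sqlite3) of the polluting form next to the clean, parameterised form:

```python
import sqlite3

# Hypothetical toy database; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "x' OR '1'='1"

# Polluting form: string concatenation lets the input rewrite the query,
# so a bogus name still matches every row.
vulnerable = "SELECT role FROM users WHERE name = '%s'" % attacker_input
leaked = conn.execute(vulnerable).fetchall()   # rows leak out

# Clean form: a parameterised query treats the input as data, not SQL.
safe = "SELECT role FROM users WHERE name = ?"
nothing = conn.execute(safe, (attacker_input,)).fetchall()  # no rows

print(leaked, nothing)
```

The fix costs next to nothing, which is rather the point of the pollution analogy: this is a known, cheaply preventable emission that keeps getting shipped anyway.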

I’m not one for fighting the real windmills… hence:
dsc_0099
[The outards of the inn(ard)s of courts; Bridget’s London obviously]

Data Classinocation

I was studying this ‘old’ idea of mine of drafting some form of impact-based criteria for data sensitivity when — along with a couple of fundamental logical errors in some of the most formally adopted (incl. legal) standards and laws — I suddenly realised:

In these times of easily provable de-anonymisation of even the most protectively homomorphically encrypted data, multiplied by the ease of de-anonymisation through data correlation of even the most innocent data points, all data points/elements — even the most innocent — must (not should) be classified at the highest sensitivity levels. So why classify data ..!?

This may not be a popular point, but that doesn’t make it less true.
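The correlation point can be made concrete with a toy sketch — every name, field and record below is hypothetical — linking an ‘anonymised’ dataset back to persons via shared quasi-identifiers:

```python
# Toy de-anonymisation by correlation (all data hypothetical).
# An 'anonymised' medical set and a public register happen to share
# quasi-identifiers: postcode, birth date, sex.
medical = [  # no names -- looks innocent on its own
    {"zip": "1082", "born": "1971-03-12", "sex": "F", "diagnosis": "flu"},
    {"zip": "2511", "born": "1985-07-30", "sex": "M", "diagnosis": "asthma"},
]
register = [  # public, and also innocent on its own
    {"name": "A. Jansen",   "zip": "1082", "born": "1971-03-12", "sex": "F"},
    {"name": "B. de Vries", "zip": "2511", "born": "1985-07-30", "sex": "M"},
]

def reidentify(medical, register):
    """Join the two sets on quasi-identifiers; return name -> diagnosis."""
    key = lambda r: (r["zip"], r["born"], r["sex"])
    names = {key(r): r["name"] for r in register}
    return {names[key(m)]: m["diagnosis"] for m in medical if key(m) in names}

linked = reidentify(medical, register)
print(linked)  # every record re-identified by correlation alone
```

Neither dataset looks sensitive by itself; the join is what does the damage — which is exactly why ‘innocent’ data points end up at the highest sensitivity level.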
In a similar vein: in the European context, where one is to process data in the first place only if (big if) there is no alternative, and one may process for the Original intent and purpose only —

to protect data from unauthorised disclosure, internally or externally, without a tight need-to-know/need-to-use IAM implementation one already does too little; with one, just enough.

That’s right; ‘internal use only’ is waaay too sloppy, hence illegal — it breaks the legal requirement for due (sic) protection, and if overly broad use of data is left possible (‘by negligence’ not changing a thing here), the European privacy directive (and its currently active precursors) does not allow you to even have the data. This may be a stretch, but it is still understandable and valid once you take the effort to think it through a bit.
Maybe also not too popular.

Needless to say, both points will not be understood in the least by all the ‘privacy officer’ types that have rote-learned the laws and regulations but have no experience of/clue about how to actually use those in practice, and just wave legal ‘arguments’ (quod non) around as if their (song and) dance were the end purpose of the organisation — yet cannot answer even the most simple questions re the allowability of some data/processing with anything that logically or linguistically approaches clarity. [Note the ‘or’ is a logical one, not the xor that the too-simpletons (incl. ‘privacy officers’) sometimes read into it but don’t know exists.]

OK. So far, no good. Plus:
dscn0990
[Not a fortress, nor a real maze once you see the structure; Valencia]

Waves of cyberfud

Not just because #ditchcyber is real, but because only now do the first of the absolute laggards (i.e., gov’t officials) begin to make waves about access to private data, through an apparent (sic) complete lack of understanding of the fundamentals of free society. The issue of blanket access to any communications, for whatever purpose, has been settled — so shut up, for eternity or however much longer it takes ‘you’ to get it or die, whichever comes first; my guess is the latter.

Politics being the only field of work where no education is required; all the cyber-blah being the second, then, apparently ..? And:

dscn1128
[He would have annihilated the little people that clamour for ‘backdoors’, etc., et al.; DC]

Fraud no-angle

There has been a lot of work done, mainly from faux legal/ethical corners, on the so-called ‘fraud triangle’. Without pointing only at previous dismissals, there are some fresh insights on why this all is faux.

One is, as pointed out, the presentation that considers the three corners of the triangle (pleonastically, not tautologically) to be ‘present’ at one and the same time, instead of seeing that there is a (very) definite order to the three. Once started, the march by moral capture / self-blackmail is one-way only. Whether triggered by wilful act or casual impulse (i.e., Kahneman’s System 2 or System 1 ..!); this is just a fact.
Two: the considerations on the triangle, and how to ‘prevent’ ‘it’, are theoretical only, because they leave out human nature, where Systems 1 and 2 interplay. ‘Protection’ against that is not a theoretical exercise somehow (sic) translatable into perfect control — as history teaches, all totalitarian, dehumanising organisations inevitably (sic) fail, and even trivial implementations will fail due to imperfect control everywhere, by definition, through its selection by risk-vs.-budget balancing.

Yes, the faux triangle sometimes appears to be discussed only by those without due experience in practice; who know not what ‘ethical’ means when it comes to leading and controlling people; who see only a tiny fraction of perverted Bad people (tellingly forgetting the difference between Bad and Evil, Nietzsche-style) that need to be stopped at all cost… Because Ordnung Muss Sein.

We all know how that worked.
Leaving you with:
dscn1377
[Please take a bath…; at Caldelas]

Comedy crashers

No capers, frankly no comedy either, when some of the most respected in the field are concerned about pervasive probing of whole countries in one go. As here.

Probably, the same is pulled off on smaller countries as well; the infra doesn’t distinguish, but the protection budgets probably are much smaller, so a proof of concept might be interesting. Though this may trigger better protection in the larger country/countries, if done ‘right’, the attack(s) may be class-break kinds of things, not so easily protected against in the first place.
And for now, the smaller countries probed will have even smaller budgets and capabilities to even detect the probing altogether / in the first place. Interesting …

But maybe budgets are better spent on all the other actual risks out there, like: ..?
dsc_0789
[Suddenly (of course !!) turned up at the Joinville château; Haut-Marne]

Dronecatcher, now with dronespotter

Ah, found it; yes, probably @DARPA already had theirs — this one’s more (?) private-sector, though: a partial solution to the Attack of the Drones thing.

Back some time ago I posted this gem, on a solution against drones, e.g., around objects of ‘vital infrastructure’ — that weren’t, like, so, about a hundred years ago and people may have been happier then.
Once drones are distinguished from birds [the in-the-air kind, not the ones you spot on the beach, topless], any kind of mini-Goalkeeper, preferably with buckshot (since only short-range effectiveness is required, without any long-range bystander damage risk), might suffice. I’d say Mini- indeed; more like a double barrel pointed up at, e.g., 50° or more to prevent nasty fall-out on hoomans, but with some swing capability for cross-fire.

The problem was to have an installation small enough to be easily placeable in sufficient numbers to get good air dome/cylinder coverage, yet not too obtrusive. Yes, probably @DARPA already had theirs but didn’t want to beat the drum too much, so as not to lure ‘DoS’ swarm attempts or false-negative probing. But at least, for the rest of the World, there now is Elvira, instantly available in fixed, flex, as well as military versions.
Apparently, their aim is to prevent bird/drone collisions in mid-air (triggers association with this great clip work, and also this one), but I see use for the inverse, too: picking off the flyers/drones before they come up against ‘vital infra’.

But aren’t we then reverting to an arms race where the silly, petty ones may be stopped, but not the countermeasure-escalation pros …? Like these:
drone
[But hey, seems to be on a carrier exactly like I have in my back pond for fun and impressing the babes so can’t you have one, too ..?]

Maverisk / Étoiles du Nord