All against all, part 2

OK, herewith Part II of:
Tinkering with some research that came out recently, and sometime(s) earlier, I had the idea that qua fraud, or rather ‘Cyber’threat analysis (#ditchcyber!), some development of models was warranted, as the discourse is dispersing into desperately disparate ways.

The usual picture suspect:
[Art alight, Ams]

Second up, as said: the same matrix of threat actors and (actor) defenders, but this time filled not with the success chances or typifications, but (read horizontally) with the motivations.
[Fraud matrix, big, part 2]

Next up (probably the 26th) will be typical main lines of attack vectors. After that, let’s see whether we can say anything about typical countermeasures.
Hmmm, still not sure this all will lead anywhere other than a vocabulary and classification for Attribution (as in this piece).

The Year of SocMess

Yes, it’s becoming clear now, if you hadn’t had this on your radar yet, from scanning to tracking, that certainly the first half of 2015 will be the Year of SocMess – Social Messaging. Breaking through as a separate category, since SocMed of the Platform-collecting-all-your-data-for-sale-to-the-prime-examples-of-shadiness kind, think Facebook, will quickly diminish and the Anon apps will spring forward. The Anon aspects (as per this explanation and others) may differ, but they’ll all have a mix of the elements that make part of the comms anonymous. Really. As otherwise, they wouldn’t be in this category, right?

And indeed, of course, I wrote ‘part of’ not for nothing.

This development will be interesting to follow. As it mixes with the development of in-crowd apps (that have been around already for a number of years!) where exclusivity is ensured either by limited invite-onwards capabilities or central control over extension.
And there will also be the development in the mix where follower-numbers will stratify further, so we’ll have more 1%-99% skews between interest-pinnacles and the hordes of followers blabbering Likes of all kinds – or flipping into hater gangs.
And, … even more … Subgroups, interests and subcultures emerging to dominate platforms (errr… socmed apps), crowding out others, leading to verticals qua interest segments, and some probably unwanted forms of social exclusion and ‘xenophobia’ where, if you’re a n00b, you may not be let in on discussions. Kindergarten playground style, but that’s how many (not so, or literally still hardly) grown-ups are.

And on top… The platform providers will battle it out, too, deep under water, where Docker, Google Cloud, Amazon, iCloud, et al., will vie for prominence.
And, the IoT/Appmix will extend. Uber was a quickie, likely to meddle forward in 2015 but the prime has been lost on that one. And Lyft will trail. So will domotics like Nest et al. But the extension will be there, with a range of new apps in this area, for functions you haven’t even thought of yet as candidates to be handled this way.

Didn’t I forget to add a little Apple in the mix? No, since there will be hardly anything to mix in, A-wise. Or it’d be the deterioration of MacBooks’ revenue [Edited to add: same for iFoan] leading to an emperor in ‘new’ clothes qua prospects, market value and future…

OK, I understand it’s all a bit much, and comes quite late, as predictions for 2015 [Edited to add: apart from these I posted previously; somewhat overlapping], but nevertheless, this all was required to be spelled out for you. Which leaves:
[Smells like 4711 spirit…]

Attached ITsec

OK, I’m a bit stuck here, by my own design. Had intended to start elaborating the all-encompassing IoT Audit work program (as per this post), but the care and feeding one should give to the methodology bogged me down a bit too much … (?)
As there have been:

  • The ridiculousness of too much top-down risk analysis (as per this) that may influence IoT-A risk analysis as well;
  • An effort to still keep on board all four flavours of IoT (as per this), through which again one should revert to more parametrised, and more deeply parametrised, forms of analysis;
  • Discomfort with normal risk analysis methods, ranging from all-too-silent but fundamental discussions re definitions (as per this) to common approaches to risk labeling (as per this and this and others);
  • Time constraints;
  • General lack of clarity of thinking, when such oceans of conceptual stuff need to be covered without proper skillz ;-] by way of tooling in concepts, methods, and media.

Now, before jumping to yet another partial build of such a media / method loose parts kit (IKEA and Lego come to mind), and some new light bulb at the end, first this:
[One by one …, Utrecht]
After which:
Some building blocks.

[Risks, [Consequences] of If(NotMet([Quality Requirements]))]
Which [Quality Requirements]? What thresholds of NotMet()?
[Value(s)] to be protected / defined by [Quality Requirements]? [Value] of [Data|Information]?
[Consequences]?
[Threats] leading to [NotMet(Z)] with [Probability function P(X)] and [Consequence function C(Y)]?
([Threat], by the way, as [Act of Nature | Act of Man], with ActOfMan being a very complex thingy in itself)
[Control types] = [Prevent, Detect, React, Respond (Stop, Correct), Retaliate, Restore]
[Control] …? [ImplementationStrength]?
[Control complex] = UnlimitedCombiOf_(N)AndOrXOR(Control, Control, Control, …)
Already I’m missing flexibility there. [ImplementationStrength(Control)] may depend on the individual Control but also on (Threat, Threat, …) and on the Control’s place in the ControlComplex and the other Controls in there. Etc.
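
To make these loose parts a bit more tangible before moving on: a minimal sketch in Python of how the building blocks above could be typed. All names and structures here are just my illustrative reading of the notation above, not a finished method.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Callable, List

    class ControlType(Enum):
        PREVENT = "prevent"
        DETECT = "detect"
        REACT = "react"
        RESPOND = "respond"      # stop, correct
        RETALIATE = "retaliate"
        RESTORE = "restore"

    @dataclass
    class QualityRequirement:
        name: str            # e.g. one of the CIAAEE+P aspects
        threshold: float     # below this, NotMet() holds

    @dataclass
    class Threat:
        name: str
        act_of_nature: bool                       # else: act of man, a complex thingy in itself
        probability: Callable[[float], float]     # P(X)
        consequence: Callable[[float], float]     # C(Y)

    @dataclass
    class Control:
        name: str
        types: List[ControlType]
        implementation_strength: float   # see the flexibility note above

    @dataclass
    class ControlComplex:
        # UnlimitedCombiOf_(N)AndOrXOR(...) kept as a flat list here for brevity
        controls: List[Control] = field(default_factory=list)

    @dataclass
    class Risk:
        requirement: QualityRequirement
        threat: Threat
        mitigations: ControlComplex

        def not_met(self, achieved_level: float) -> bool:
            # NotMet(Z): the achieved level falls below the requirement's threshold
            return achieved_level < self.requirement.threshold

The flexibility gap noted above would then mean turning implementation_strength into a function of the Threat, the ControlComplex and the sibling Controls, rather than a plain number.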

All of the above should be carried out at all abstraction levels (OSI-stack++, the ++ being at both ends, and the Pres and App layers permeating throughout due to the above indetermination of CIAAEE+P for the four IoT development directions), and for their implementation details within industry sectors. E.g., Medical doing it differently than B2C in clothing. Think also of the vast range of protocols, sensor (control) types, actuator types, data/command channels, use types (primary/control, continuous/discrete(d)/heartbeat), etc.

And then, the new light bulb as promised: All the above, when applied to a practical situation, may become exponentially complex, to a degree and state where it would be better to attach the security ‘context’ (required and actual) as labels to the finest-grain elements one can define in the big, I mean BIG, mesh of physically/logically connected elements, at all abstraction levels. Sort-of data labeling, but then throughout the IoT infrastructure. Including this sort of IAM. So that one can do a virtual surveillance over all the elements, and inspect them with their attached status report. Ah, secondary risk/threat of that being compromised… Solutions may be around, like (public/private)² encryption ensuring attribution/non-repudiation/integrity etc. Similar to but probably different from certification schemes. Not the audit-your-paper-reality type, those are not cert schemes but cert scams.
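
By way of illustration only, a sketch of what such an attached, signed label could look like, assuming the Python cryptography library and Ed25519 keys; the element identifier and the label fields are made up for the example.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The finest-grain element (a sensor, actuator, data channel, ...) carries its
    # required and actual security context as a label attached to it.
    label = {
        "element_id": "building-7/floor-2/hvac/temp-sensor-031",   # hypothetical
        "abstraction_level": "sensor",
        "required": {"confidentiality": "low", "integrity": "high", "availability": "medium"},
        "actual":   {"confidentiality": "low", "integrity": "high", "availability": "low"},
    }

    # The element's (or its owner's) private key signs the label, so a virtual
    # surveillance sweep can check attribution, non-repudiation and integrity
    # of the attached status report.
    private_key = Ed25519PrivateKey.generate()
    payload = json.dumps(label, sort_keys=True).encode()
    signature = private_key.sign(payload)

    # Anyone holding the public key can verify; tampering raises InvalidSignature.
    private_key.public_key().verify(signature, payload)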

OK, that’s enough for now. Will return, with some more methodologically sound, systematic but also practical results. I hope. Your contributions of course, are very much welcomed too.

All against all, part 1

Tinkering with some research that came out recently, and sometime(s) earlier, I had the idea that qua fraud, or rather ‘Cyber’threat analysis (#ditchcyber!), some development of models was warranted, as the discourse is dispersing into desperately disparate ways.

The usual picture suspect:
[Odd shape; maybe off-putting as a defense mechanism ..!?]

First up, then, an extended version of the matrix I’ve been presenting lately, about offense/defense characteristics. Just to expose it; I’d want to hear your feedback indeed. (Next up: the same, filled in with what the attacker would want to get out of it, information-wise. After that: strategy, tactics commonly deployed; rounding off with least-ineffective defense postures (?))

[Fraud matrix, big, part 1]

Merely convicted by PPT

Hm, there was this meme about Death By PowerPoint. Now, the toned-down version, conviction (attempt) by PPT, has been found in the wild. As in this here article. Where the prosecutor was dumb enough to hide the culpable text behind the 24-images-per-second visibility screen. [Is that ‘stego’ ..?]

Incentives, incentives…
[Vic chic]

Risk of being Duds

Wow, the new year starts off with … failure. I mean, apart from your inability to keep your New Year’s resolutions (already – if you need such a specific date as Jan 1 and the motivation of a ‘fresh start’, which, on the whole, it isn’t, since calendars were invented as easy shorthands, not as exactly defined turning points in life) to change habits, … why didn’t you just change when the need arose ..? You’d be much further along with it – the year is really off to a traditional start when failures of the past are repeated ad nauseam… [As interlude: no, writing shorter sentences isn’t and wasn’t on any of my resolutions’ lists…]

Which one might expect from less-clear-thinking professions. Though this post isn’t meant to address the precious few, the exceptions to the rule, I do mean ‘accountants’ and ‘IT-auditors’ (IS auditors that don’t understand) to be among those. Those, apart from the slew of other vices you can easily sum up, tend to instruct others to do risk analysis this way:
[R6 model]
Yeah that’s in Dutch but you probably can make out the (actual) content and meaning. Being that the risk analysis is (to be) carried out top-down indeed, analyzing how lower layers of the model will (have to) protect against risks in the layers above, after the controls at any layer may have failed to ‘control’ the risks… [About that ‘control’ quod NON: See this here post of old.]

See? This just perpetuates the toxic myth of top-down analysis. The more one would follow this model, the more deluded one’s risk estimates would become… And this is proposed to lead the way in financial audits…
If you think this limits the spectre of errors – this thinking permeates, and will permeate, the audit / inspection environment, leading to ‘Sony’. Yes, this erroneous thinking is at, or near, the root cause of that mess-up. [Has anyone seen proof that the NK actually did it, I mean, not the ‘proof’ trumped up by the most biased party ..?]

Whereas a bottom-up approach would show all the weaknesses that make the claimed effectiveness of higher-up ‘controls’ logically impossible. Controls just aren’t put in place to build a somewhat reliable platform to build higher-order controls on… Controls are put in place to try (in vain) to protect against same-level risks throughout, as well, and higher-level controls were (are; hopefully not too much anymore) put in place to try to protect against all the lower-level control failures not remediable there… [‘Mitigation’ as the newspeak champion]
Hence, the distribution of error ranges (outside the acceptable sliver in the middle of the distribution of, e.g., transaction flows – hopefully that sliver is the intentional one) is ever wider the higher one goes in the ‘model’.
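
A toy calculation, just to make that widening concrete; the per-layer failure probability below is invented purely for illustration.

    # Toy model: suppose each control layer below you fails (at least partly) with
    # probability p, independently. The higher up the model you sit, the more layers
    # you depend on, and the chance that at least one of them has already failed
    # underneath you grows -- so the error range widens going up, it doesn't shrink.
    p_layer_failure = 0.15   # hypothetical per-layer failure probability

    for n_layers_below in range(1, 7):
        p_at_least_one_failed = 1 - (1 - p_layer_failure) ** n_layers_below
        print(f"{n_layers_below} layer(s) below: "
              f"P(at least one failed) = {p_at_least_one_failed:.1%}")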

All of this rightfully wrecks your approach to financial audits, where not the risks of misrepresented true and fair views are managed, but the risks that the auditor is caught and found guilty of malpractice for not doing the slightest bit of the checking promised. ‘Assurance’ hence beginning with the right first three letters. Risk management deployed to cut down the enormous workload (due to the overwhelming risks percolating up the model, as in reality they do..! hence having to check almost everything) to nicely within the commercial cutthroat race-to-the-bottom budget, which is then supplemented with ridiculously attractive (by bordering(!?)-on-the-fraudulent hourly rates) consulting business.

Now, the only hope we have is that the R6 model will not spread beyond … oh hey, googling it returns zero results – let’s keep it that way! Let’s not follow BAD guidance…

Jeez… And that’s only two of 124 pages of this

I’ll leave you with …
[photo]
At the door to
[photo]
– if you know what these are, you know why they’re here…

ID card house coming down

With the ‘eavesdropping’ or whatshallwecallit of the German Defense Minister’s fingerprint, it seems that yet another card was pulled from the infosec card house of solutions. It looks like a distant relative in infosec land, on the ID side, has faltered. Or, has been shown to be not 100% perfect. Dunno if that is newsworthy; apparently it is.
Though apparently, in unrelated (?) news reports, not all tools out there have (yet) been cracked by TLAs. With Tor and TrueCrypt as shining examples, but haven’t vulnerabilities in the schemes of those been demonstrated (at least theoretically)? So, are the leaked documents just bait, to pull as many ‘script’/privacy kiddies as possible into environments where they actually can be tracked? If the leaked docs are false admissions of uncrackability … who can you trust, then?

Or is it all The Return To Normalcy, where we know each and every tool and method is not 100% perfect, let alone in themselves, and we will have to return to doing a risk weighing for every action we take – allowing the Other Side(s) to also be relatively lax and fetch only the clearest of wrong-doer signals? This would require:

  • the boys-who-cried-wolf to tone down a little. Maybe selling fewer tools, maybe achieving more by more carefully spending the budget; a Win;
  • the n00b and drone mob users (think @pple users and like meek followers) to raise their constant awareness; a Win;
  • the ‘adversaries’ to not want to be perfect Big Brothers. Hard to admit, and to not utterly destroy human rights, but necessary and sobering; a half Win.

So, … this card house tumble may turn out to be Progress.
I’ll leave you with:
[Fragile new, sturdy old; Cologne]

Possible, hence probable means

Why did it take so long for this to surface ..?
As the <link> mentions, steganography in images is detectable and tools are around to help – how many of you already use them on a regular basis, in times when LOLcat pics are so abundant (hint(?)) – but wasn’t it too obvious that the Bad (?) Guys knew that, too, before you, the pithy defenders, did?
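
For what it’s worth, a minimal sketch of the sort of signal such detection tools look at: the least-significant-bit plane of an image, here with Pillow and NumPy. Real steganalysis (chi-square, RS analysis, etc.) is considerably more subtle; treat this as illustration only, and the file name as hypothetical.

    # Crude LSB steganalysis sketch: payloads embedded in least-significant bits
    # tend to push the LSB plane towards ~50% ones with little structure, whereas
    # natural images usually show more bias and correlation there.
    from PIL import Image
    import numpy as np

    def lsb_ones_ratio(path: str) -> float:
        pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
        return float((pixels & 1).mean())

    ratio = lsb_ones_ratio("lolcat.png")   # hypothetical file
    print(f"LSB ones-ratio: {ratio:.3f} "
          "(values very close to 0.5 across the whole image can be suspicious)")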

So, why?
Either the tools are around but not widespread enough, or, as <link> suggests, other means might work better. But the other means… are just as cumbersome to deploy continuously, and costly, for the slightest of chances that anything would be leaked in such a sophisticated way – whereas we’re nowhere, really nowhere, near a similarly near-water-tight deployment of tooling and methodology against much simpler leaking methods. Leaving you in blissful ignorance…?

Leaving (sic) you with:
[Tarrega door. Shut closed.]
