
The Year of SocMess

Yes, it’s becoming clear now, if you hadn’t had this on your radar yet from all your scanning and tracking: the first half of 2015 will certainly be the Year of SocMess – Social Messaging. Breaking through as a separate category, since SocMed of the Platform-collecting-all-your-data-for-sale-to-the-prime-examples-of-shadiness kind, think Facebook, will quickly diminish and the Anon apps will spring forward. The Anon aspects (as per this explanation and others) may differ, but they’ll all have a mix of the elements that make part of the comms anonymous. Really. As otherwise, they wouldn’t be in this category, right?

And indeed, of course, I wrote ‘part of’ not for nothing.

This development will be interesting to follow. As it mixes with the development of in-crowd apps (that have already been around for a number of years!) where exclusivity is ensured either by limited invite-onwards capabilities or by central control over extension.
And there will also be the development in the mix where follower-numbers will stratify further, so we’ll have more 1%-99% skews between interest-pinnacles and the hordes of followers blabbering Likes of all kinds – or flipping into hater gangs.
And, … even more … Subgroups, interests and subcultures emerging to dominate platforms (errr… socmed apps), crowding out others, leading to verticals qua interest segments, and to some probably unwanted forms of social exclusion and ‘xenophobia’ where, if you’re a n00b, you may not be let in on discussions. Kindergarten playground style, but that’s how many (not so, or literally still hardly) grown-ups are.

And on top… The platform providers will battle it out, too, deep below the surface, where Docker, Google Cloud, Amazon, iCloud, et al. will vie for prominence.
And the IoT/App mix will extend. Uber was a quickie, likely to meddle forward in 2015, but the prime has been lost on that one. And Lyft will trail. So will domotics like Nest et al. But the extension will be there, with a range of new apps in this area, for functions you haven’t even thought of yet as candidates to be handled this way.

Didn’t I forget to add a little Apple into the mix? No, since there will be hardly anything to mix in, A-wise. Or it will be the deterioration of MacBook revenue [Edited to add: same for the iFoan] leading to an emperor in ‘new’ clothes qua prospects, market value and future…

OK, I understand it’s all a bit much, and comes quite late as predictions for 2015 go [Edited to add: apart from these that I posted previously; somewhat overlapping], but nevertheless, all this needed to be spelled out for you. Which leaves:
[Image DSCN1382: Smells like 4711 spirit…]

Attached ITsec

OK, I’m a bit stuck here, by my own design. I had intended to start elaborating the all-encompassing IoT Audit work program (as per this post), but the care and feeding one should give to the methodology bogged me down a bit too much … (?)
As there have been:

  • The ridiculousness of too much top-down risk analysis (as per this) that may influence IoT-A risk analysis as well;
  • An effort to still keep on board all four flavours of IoT (as per this), for which again one should revert to more parametrised, and more deeply parametrised, forms of analysis;
  • Discomfort with the normal risk analysis methods, ranging from all-too-quiet but fundamental discussions re definitions (as per this) to the common approaches to risk labeling (as per this and this and others);
  • Time constraints;
  • General lack of clarity of thinking, when such oceans of conceptual stuff need to be covered without proper skillz ;-] by way of tooling in concepts, methods, and media.

Now, before jumping to yet another partial build of such a media / method loose parts kit (IKEA and Lego come to mind), and some new light bulb at the end, first this:
DSCN5608
[One by one …, Utrecht]
After which:
Some building blocks.

[Risks, [Consequences] of If(NotMet([Quality Requirements]))]
Which [Quality Requirements]? What thresholds of NotMet()?
[Value(s)] to be protected / defined by [Quality Requirements]? [Value] of [Data|Information]?
[Consequences]?
[Threats] leading to [NotMet(Z)] with [Probability function P(X)] and [Consequence function C(Y)]?
([Threat] by the way as [Act of Nature | Act of Man], with ActOfMan being a very complex thingy in itself)
[Control types] = [Prevent, Detect, React, Respond (Stop, Correct), Retaliate, Restore]
[Control] …? [ImplementationStrength] ?
[Control complex] UnlimitedCombiOf_(N)AndOrXOR(Control, Control, Control, …)
Already I’m missing flexibility there. [ImplementationStrength(Control)] may depend on the individual Control but also on (Threat, Threat, …) and on the Control’s place in the ControlComplex and on the other Controls in there. Etc.
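To make the loose parts a little more tangible, here’s a minimal sketch of how such building blocks might be jotted down – purely illustrative, mind: all names, types and the flattening of the ControlComplex are my placeholders, not a finished method.

```python
# Minimal, illustrative sketch only – every name and type is a placeholder,
# not a worked-out methodology.
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List


class ControlType(Enum):
    PREVENT = "prevent"
    DETECT = "detect"
    REACT = "react"
    RESPOND = "respond"        # stop, correct
    RETALIATE = "retaliate"
    RESTORE = "restore"


@dataclass
class QualityRequirement:
    name: str                  # e.g. one of the CIAAEE+P qualities
    threshold: float           # below this, NotMet() holds


@dataclass
class Threat:
    name: str
    act_of_man: bool                          # vs. act of nature
    probability: Callable[[float], float]     # P(X)
    consequence: Callable[[float], float]     # C(Y)


@dataclass
class Control:
    name: str
    ctype: ControlType
    implementation_strength: float            # may depend on threats and siblings


@dataclass
class ControlComplex:
    # UnlimitedCombiOf_(N)AndOrXOR(...) – here flattened to a plain list
    controls: List[Control] = field(default_factory=list)


@dataclass
class Risk:
    requirement: QualityRequirement           # which quality, what threshold
    threats: List[Threat]                     # what may cause NotMet()
    mitigations: ControlComplex               # what is supposed to dampen it
```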

All of which should be carried out at all abstraction levels (OSI-stack++, the ++ being at both ends, and the Presentation and Application layers permeating throughout due to the above indetermination of CIAAEE+P), for the four IoT development directions, and for their implementation details per industry sector. E.g., Medical doing it differently from B2C in clothing. Think also of the vast range of protocols, sensor (control) types, actuator types, data/command channels, use types (primary/control, continuous/discrete/heartbeat), etc.

And then, the new light bulb as promised: All of the above, when applied to a practical situation, may become exponentially complex, to a degree and state where it would be better to attach the security ‘context’ (required and actual) as labels to the finest-grain elements one can define in the big, I mean BIG, mesh of physically/logically connected elements, at all abstraction levels. Sort-of data labeling, but then throughout the IoT infrastructure. Including this sort of IAM. So that one can do a virtual surveillance over all the elements, and inspect them with their attached status report. Ah, secondary risk/threat of that label itself being compromised… Solutions may be around, like (public/private)² encryption ensuring attribution/non-repudiation/integrity etc. Similar to, but probably different from, certification schemes. Not the audit-your-paper-reality type; those are not cert schemes but cert scams.
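For flavour, a minimal sketch of what such a label-per-element could look like – my own toy example, with made-up field names, and an HMAC standing in (for brevity only) where the (public/private)² idea would really call for asymmetric signatures to also get non-repudiation.

```python
# Toy sketch (illustrative only): attach a signed security label to each
# element of the mesh, so a 'virtual surveillance' can read off required vs.
# actual security context and detect tampering with the label itself.
# NB: HMAC gives integrity/authentication only; non-repudiation would need
# asymmetric signatures plus proper key management.
import hashlib
import hmac
import json

LABEL_KEY = b"replace-with-a-real-key-management-scheme"   # placeholder!

def make_label(element_id: str, required: dict, actual: dict) -> dict:
    payload = {"element": element_id, "required": required, "actual": actual}
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(LABEL_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_label(label: dict) -> bool:
    payload = {k: v for k, v in label.items() if k != "sig"}
    blob = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(LABEL_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label.get("sig", ""))

# One sensor in the BIG mesh, with its required vs. actual context attached:
label = make_label(
    "sensor-0042",
    required={"confidentiality": "low", "integrity": "high"},
    actual={"confidentiality": "low", "integrity": "medium"},
)
assert verify_label(label)    # the attached status report itself checks out
```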

OK, that’s enough for now. Will return, with some more methodologically sound, systematic but also practical results. I hope. Your contributions of course, are very much welcomed too.

All against all, part 1

Tinkering with some research that came out recently, and some that came out earlier, I had the idea that qua fraud, or rather ‘Cyber’threat analysis (#ditchcyber!), some development of models was warranted, as the discourse is dispersing into desperately disparate ways.

The usual suspect picture:
[Image DSCN2891: Odd shape; maybe off-putting as a defense mechanism ..!?]

First up, then, an extended version of the matrix I’ve been presenting lately, about offense/defense characteristics. Just to expose it; I would indeed want to hear your feedback. (Next up: the same, filled in with what the attacker would want to get out of it, information-wise. After that: strategy and tactics commonly deployed; rounding off with least-ineffective defense postures (?))

[Image: Fraud matrix, big, part 1]

Aggregation is stripping noise; close to emergence but …

Still tinkering with the troubles of the aggregation chasm (as in this here previous post) and the hardly visible but still probably most fundamentally related concept of emergent properties (as in this, on Information), when some dawn of insight crept in.
First, this:
[Image Photo11g: Somewhere IL, IN, OH; anyone has definitive bearings? JustASearchAway found it: WI]

Because I’ve dabbled with Ortega y Gasset’s stupidity of the masses for a long time. Whether they constitute Mob Rule, or are (mentally, to action) captives of the (or another!) 0.1%, or what. My ‘solution’ had been to seek the societal equivalent of the Common Denominator – No! That may be the common parlance, but what is meant here, much more precisely, is the Greatest Common Divisor.

Since that is what’s at work when ‘adding’ people into groups: through the stripping of differences (as individuals have an urge to join groups and be recognized as members, they’ll shed those) and through anchoring around what some Evil Minds may have (consciously or not) set as the GCD-equivalent idea, the GCD will reinforce itself ever more (an immoral spiral of self-reinforcement) – mathematically inherent, since adding more elements to the group can only keep the GCD where it is or lower it (not ‘strictly’ lowering, but never raising it). [The only difference being the possibility of a pre-set GCD to center around; just make it attractive enough so the mass will assemble, then shift it to need ..!] Where the still-conscious may not want to give up too much of their individuality but may have to dive under into their compliance coping cabanas just to survive (!?).
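As a toy aside of my own (just to make the ‘not strictly lowering’ bit concrete): the GCD of a growing set can only stay put or drop, never rise – and one sufficiently ‘different’ newcomer flattens it entirely.

```python
# Toy illustration: adding members to the 'group' can only keep the GCD
# equal or lower it – it never goes up.
from functools import reduce
from math import gcd

group = []
for newcomer in (60, 48, 36, 18, 7):
    group.append(newcomer)
    print(group, "-> GCD", reduce(gcd, group))

# [60]                -> GCD 60
# [60, 48]            -> GCD 12
# [60, 48, 36]        -> GCD 12   (may stay put...)
# [60, 48, 36, 18]    -> GCD 6    (...or drop)
# [60, 48, 36, 18, 7] -> GCD 1    (one odd-one-out flattens it)
```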

So, aggregation leads to the stripping of ground noise, which may lead patterns that had been pervasively present but covered by that noise, to emerge. Like, statistically, a high R² but with a low β – but still with this β being larger than any of the others, if present at all. This may be behind the ‘pattern recognition’ capabilities of Big Data: throw in enough data and use some sophisticated methods to ensure that major subclasses will be stratified into clusters and be noise to the equation. [That GMDH (Group Method of Data Handling), by the way, was the ground-breaking method by which I showed anomalous patterns in leader/follower stock price behavior (a shipping index significantly 2-day leading one specific chemicals company; right…) in my thesis research/write-up back in 1994 – mind you, all hard-core coding in C on a virtual 16-core chip, from the mathematics down to the load distribution. Eat that, recent-fancy-dancy-big-data-tool-using n00bs..!]
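A small numerical aside of my own (nothing to do with the cited research or GMDH itself): a weak but real pattern that is invisible per individual observation can surface once you aggregate along the suspected driver, because the noise averages out while the pattern doesn’t.

```python
# Sketch: a low-beta, high-noise relation is nearly invisible raw, but
# emerges after sorting on the driver, binning and averaging.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)        # tiny beta, lots of ground noise

print("raw correlation:       ", round(np.corrcoef(x, y)[0, 1], 3))   # ~0.05

order = np.argsort(x)                     # aggregate along the suspected driver
xb = x[order].reshape(100, 1000).mean(axis=1)
yb = y[order].reshape(100, 1000).mean(axis=1)
print("aggregated correlation:", round(np.corrcoef(xb, yb)[0, 1], 3)) # ~0.8+
```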

By which all the patterns that were under the radar will suddenly appear as patterns in Extremistan DisruptiveLand would; staying under the radar until exploding out of control through that barrier (but note this). As emergent.

But just as metadata is not Information but still only Data, the Emergent isn’t, really. Darn! Close, but no Cuban.
As the pattern lies floundering on the research bed when the noise around it dries up, it is not necessarily part of every element in the data pool and may potentially only exist (be visible) at the aggregate level. But it can, and ex ante very much more probably will, be part of one or some or many elements of the pool – which would be methodologically excluded from the definition of Emergence / Emergent Characteristics (is it?). And, if the noise were quiet enough, it would already have been visible in the murky pool in the first place, as a characteristic, not ‘only’ as an emergent one as the definition of that would have it.

So, concluding… a worthwhile thought experiment, sandblasting away some unclarity, but still, little progress on understanding, on feeling-through-and-through, how Emergence works; what brings it about. But we should! It is that Holy Grail of jumping from mere Data to Information ..!
Joe Cocker died just a couple of weeks ago. Fulfill his request, and give this friend here a little help, with your additional thoughts, please…

Ruled by the petty

When mores are sufficient, laws are unnecessary; when mores are insufficient, laws are unenforceable.

Durkheim, you recall. Only now. Only now that you’ve started the year all refreshed, to this time around implement all the nitpicky, petty, rather childish, kindergarten-level rules to rein in all the misfits (i.e., about everyone except you) that don’t want to dance to your tunes (while you can’t dance, really; admit it. Not even to your own tune!). Which turns you into a petty fool, given the veracity of the above quote. If you don’t get it, just think it over once again. And again, until you do. Or quit, but then stay away. Like, at these nice locations, just for you.

The big Question of 2015, or of the decade, being: how to get the mores back ..?

Part of the solution may be your admiration for:
[Image DSCN5159: Some time ago, when photography was still allowed….]

To watch(ed)

Hmmm… We should all watch out for this here documentary (Yes. Really.), and then have a look back at the great many leads up to today that went before it, like here, here, since some five years ago by yours truly, and many others. But now, with this team on it, will it break through ..?

I have something to celebrate today; will leave you for now with:
[Image: Heraclitus – All is in transit or in DC]

Merely convicted by PPT

Hm, there was this meme about Death By PowerPoint. Now the toned-down version, conviction (attempt) by PPT, has been found in the wild. As in this here article. Where the prosecutor was too dumb to not hide the culpable text behind the 24-images-per-second visibility screen. [Is that ‘stego’ ..?]

Incentives, incentives…
[Image DSCN1210: Vic chic]

HTTP status 418 against unpersonation

Though we’re halfway towards granting legal-person rights to animals (as this and this show), and you know a lot of co-workers for whom this would present a nice little bit of progress, I’d say we have also moved great strides in the opposite direction.
Which is far more dangerous.

It all started, throughout the ages over and over again, with the already-responsibility-deprived weasels (a.k.a. ‘mere employees’ and ‘leaders’) wiggling out from under the burden of guilt for, e.g. most recently, the Sony hack, the financial crisis; you name it. With excuses ranging all the way from “I wasn’t important enough to have been able to make any noticeable difference anyway” to “If I hadn’t done it, someone else would have, and at least now it was me, with still some conscience, that did it” – where one’s character speaks through one’s actions …
Which in sum total, through a particularly nefarious twist of aggregation and emergence (read back this little, sadly unnoticed gem and you’ll get it), leads to … dehumanization of these speakers, and to corporations seeking personhood as well.
Which is far more dangerous.

All of you who behave this way: you’re not underestimating the dystopian version of the Singularity, but actively bringing it on … by degrading your own independence, freedom (of mind and action!), identity, humanity, and value. By actively suppressing any questioning of the Überbureaucracy, by frowning on (or much worse) those that want to remain human and social (i.e., exchange ideas). Etc. To no end.
To the end of letting the force of nature, the beast within, explode out through the most deviant, unthinkably inhumane behavior – in particular among the ones that were most and first in line with ratio and petty bureaucratic rules, i.e., the ones holding sway over all others including you. With the explosion hitting you, too – and you have no answer, either now or then…

Complexity, of the world, of societies, of your immediate environments (Sloterdijk’s spheres, yes), of yourself, is no excuse to shut down. It should be a wake-up call, a call to arms, a sacrifice … not to ritually celebrate past developments, but to progress out of the complexity …!
My favourite option: a healthy dose of status code 418 for all – not always, but every now and then, here and there. Life is too important to always take it seriously!
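For the non-RFC-readers, a throwaway sketch of my own (any resemblance to production code is accidental): a little server that, not always but every now and then, answers with 418 “I’m a teapot” per RFC 2324.

```python
# Playful sketch: an HTTP handler that occasionally answers 418 (RFC 2324).
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class OccasionalTeapot(BaseHTTPRequestHandler):
    def do_GET(self):
        if random.random() < 0.1:               # not always, just now and then
            self.send_response(418, "I'm a teapot")
            body = b"Short and stout. Come back later.\n"
        else:
            self.send_response(200)
            body = b"Business as usual.\n"
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), OccasionalTeapot).serve_forever()
```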

Well, I’m off to some very dense prose, where mere text lines are ever more narrow in their description of the richness of the ideas and constructs to be discussed. Hence will part ways, with:
[Image DSCN1441: Bam! Out explodes the force of nature]

Short Insight: The Economic Triad

Yes a sudden microrevelation again:

Sedláček, Piketty and Graeber form a triad of social economics; possibly the Way Forward for the global society.

I mean, let’s connect the start and end ideas and conclusions, and the intermediate ones, of Economics of Good and Evil, Capital in the Twenty-First Century, and Debt: The First 5,000 Years, and you get my drift. Or not. But the latter is on you; half-joke only.

I cannot but leave you with:
[Image 20140917_092755_HDR: When you see it (the above), you may need one. At the Fabrique, of course]

Risk of being Duds

Wow, the new year starts off with … failure. I mean, apart from your inability to keep your New Year’s resolutions to change habits (already – if you need such a specific date like Jan 1 and the motivation of a ‘fresh start’, which, on the whole, it isn’t, since calendars were invented as easy shorthands, not as exactly life-defining turning points) … why didn’t you just change when the need arose ..? You’d be much further along with it – the year is really off to a traditional start when failures of the past are repeated ad nauseam… [As an interlude: no, writing shorter sentences isn’t and wasn’t on any of my resolutions’ lists…]

Which one might expect from the less-clear-thinking professions. Though this post isn’t meant to address the precious few, the exceptions to the rule, I do mean ‘accountants’ and ‘IT-auditors’ (IS auditors that don’t understand) to be among those. Who, apart from the slew of other vices you can easily sum up, tend to instruct others to do risk analysis this way:
[Image: the ‘R6 model’]
Yeah, that’s in Dutch, but you can probably make out the (actual) content and meaning: that the risk analysis is (to be) carried out top-down indeed, analyzing how lower layers of the model will (have to) protect against risks in the layers above, after the controls at any layer may have failed to ‘control’ the risks… [About that ‘control’ quod NON: see this here post of old.]

See? This just perpetuates the toxic myth of top-down analysis. The more one follows this model, the more deluded one’s risk estimates become… And this is proposed to lead the way in financial audits…
If you think this limits the Spectre of errors: this thinking permeates, and will permeate, the audit / inspection environment, leading to ‘Sony’. Yes, this erroneous thinking is at, or near, the root cause of that mess-up. [Has anyone seen proof that NK actually did it – I mean, not the ‘proof’ trumped up by the most biased party ..?]

Whereas a bottom-up approach would show all the weaknesses that make the effectiveness of higher-up ‘controls’ logically impossible. Controls just aren’t put in place to build a somewhat reliable platform on which to build higher-order controls… Controls are put in place to try (in vain) to protect against same-level risks throughout, as well, and higher-level controls were (are; hopefully not too much anymore) put in place to try to protect against all the lower-level control failures not remediable there… [‘Mitigation’ as the newspeak champion]
Hence, the distribution of error ranges (outside the acceptable sliver in the middle of the distribution of, e.g., transaction flows – hopefully that sliver is the intentional one) gets ever wider the higher one goes in the ‘model’.
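To make that widening concrete, a back-of-the-envelope simulation of my own – the layer names, per-layer error rates and control effectiveness are all made-up assumptions, not audit data.

```python
# Back-of-the-envelope sketch: errors that slip past each layer's controls
# percolate upwards, so the spread of residual error only ever grows on the
# way up the 'model'. All numbers are assumed, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
layers = ["infra", "OS", "DBMS", "application", "process", "reporting"]
effectiveness = 0.8            # assumed per-layer control effectiveness
runs = 10_000

residual = np.zeros(runs)
for name in layers:
    introduced = rng.exponential(scale=1.0, size=runs)   # new errors at this layer
    residual += introduced * (1 - effectiveness)         # what the controls miss
    print(f"{name:12s} residual spread (std): {residual.std():.2f}")
# A bottom-up view sees this spread accumulate; a top-down view assumes it away.
```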

Rightfully wrecking your approach to financial audits, where not the risks of misrepresented true and fair views are managed, but the risk that the auditor is caught and found guilty of malpractice for not doing even the slightest bit of the checking promised. ‘Assurance’ hence beginning with the right first three letters. Risk management deployed to cut down the enormous workload (due to the overwhelming risks percolating up the model, as in reality they do..! hence having to check almost everything) to nicely within the commercial, cutthroat, race-to-the-bottom budget, which is supplanted with ridiculously attractive (by bordering(!?)-on-the-fraudulent hourly rates) consulting business.

Now, the only hope we have is that the R6 model will not spread beyond … oh hey, googling it returns zero results – let’s keep it that way! Let’s not follow BAD guidance…

Jeez… And that’s only two of 124 pages of this

I’ll leave you with …
[Image: 20141101_155144]
At the door to
[Image: 20141101_160525]
– if you know what these are, you know why they’re here…

Maverisk / Étoiles du Nord