Aggregation is stripping noise; close to emergence but …

I was still tinkering with the troubles of the aggregation chasm (as in this here previous post) and the hardly visible but probably most fundamentally related concept of emergent properties (as in this, on Information), when some dawn of insight crept in.
First, this:
Photo11g[Somewhere IL, IN, OH; anyone with definitive bearings? JustASearchAway found it. WI]

Because I’ve dabbled with Ortega y Gasset’s stupidity of the masses for a long time. Whether they constitute Mob Rule, or are (mentally, to action) captives of the (or other!) 0.1%, or what. My ‘solution’ had been to seek the societal equivalent of the Common Denominator – No! That may be the common parlance, but what is meant here, much more precisely, is the Greatest Common Divisor.

Since that is at work when ‘adding’ people into groups: through the stripping of differences (as individuals have an urge to join groups and be recognized as members, they’ll shed those) and through anchoring around whatever some Evil Minds may have (consciously or not) set as the GCD-equivalent idea, the GCD will reinforce itself ever more (an immoral spiral of self-reinforcement), mathematically inherent in adding more elements to the group, which can only keep the GCD where it is or (not ‘strictly’, but monotonically) lower it. [The only difference being the possibility of a pre-set GCD to center around; just make it attractive enough so the mass will assemble, then shift it as needed ..!] Where the still-conscious may not want to give up too much of their individuality but may have to dive under in their compliance coping cabanas just to survive (!?).
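For the mathematically inclined, a minimal sketch of that inherent lowering (Python, with made-up ‘trait counts’ standing in for individuals; purely illustrative): the GCD of a group can only stay put or drop as more members are added, never rise.

    from functools import reduce
    from math import gcd

    def group_gcd(members):
        """GCD of the whole group: what every member still has in common."""
        return reduce(gcd, members)

    group = [36, 24]                # hypothetical 'trait counts' of the first two members
    print(group_gcd(group))         # 12: still quite a lot in common

    for newcomer in (60, 45, 14):   # each addition can only keep or lower the GCD
        group.append(newcomer)
        print(newcomer, "->", group_gcd(group))
    # 60 -> 12, 45 -> 3, 14 -> 1: the common core shrinks toward triviality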

So, aggregation leads to the stripping of ground noise, which may allow patterns that were pervasively present but covered by that noise to emerge. Like, statistically, a high R² but with a low β – but with this β still being larger than any of the others, if those are present at all. This may be behind the ‘pattern recognition’ capabilities of Big Data: throw in enough data and use some sophisticated methods to ensure that major subclasses will be stratified into clusters and become noise to the equation. [GMDH, by the way, was the ground-breaking method by which I showed anomalous patterns in leader/follower stock price behavior (a shipping index significantly leading one specific chemicals company by two days; right…) in my thesis research/write-up back in 1994 – with, mind you, all hard-core coding in C on a virtual 16-core machine, from the mathematics down to the load distribution. Eat that, recent-fancy-dancy-big-data-tool-using n00bs..!]
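A toy illustration of that noise-stripping (plain averaging, not GMDH, with made-up numbers): a weak common signal that no single noisy series shows convincingly becomes obvious once enough series are aggregated.

    import math
    import random

    random.seed(42)

    def noisy_series(n, signal_amplitude=0.1, noise_sd=1.0):
        """One 'individual': a faint sine pattern buried in much larger noise."""
        return [signal_amplitude * math.sin(2 * math.pi * t / 50) + random.gauss(0, noise_sd)
                for t in range(n)]

    def correlation(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    n = 500
    pattern = [math.sin(2 * math.pi * t / 50) for t in range(n)]
    one = noisy_series(n)
    aggregate = [sum(col) / len(col) for col in zip(*(noisy_series(n) for _ in range(400)))]

    print("single series vs pattern:", round(correlation(one, pattern), 2))       # small: drowned in noise
    print("aggregate vs pattern:    ", round(correlation(aggregate, pattern), 2))  # large: the pattern emerges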

By which all the patterns that were under the radar will suddenly appear, as patterns in Extremistan DisruptiveLand would: staying under the radar until exploding out of control through that barrier (but note this). As emergent.

But just as metadata is not Information but still only Data, the Emergent isn’t Information either, really. Darn! Close, but no Cuban.
As the pattern is left floundering on the research bed when the noise around it dries up, it is not necessarily part of every element in the data pool and may only exist (be visible) at the aggregate level. But it can, and ex ante much more probably will, be part of one or some or many elements of the pool, which would be methodologically excluded from the definition of Emergence / Emergent Characteristics (is it?). And, if the noise were quiet enough, it would already have been visible in the murky pool in the first place as a characteristic, not ‘only’ as emergent as the definition of that would have it.
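To make that distinction a bit more tangible (made-up data, hypothetical names; just a sketch): a trait genuinely carried by some individual elements, which aggregation merely makes visible, versus a characteristic like a correlation that is only even defined at the level of the pool.

    import math
    import random

    random.seed(3)

    # Made-up pool: each element is (x, y, carries_trait).
    pool = []
    for _ in range(1000):
        x = random.gauss(0, 1)
        y = 0.4 * x + random.gauss(0, 1)             # noisily linked to x
        pool.append((x, y, random.random() < 0.03))  # a rare element-level trait

    # 1) Element-level: the trait really is *in* some elements; counting just reveals it.
    print("elements carrying the trait:", sum(1 for _, _, t in pool if t))

    # 2) Pool-level only: the x-y correlation is not a property of any single element.
    mx = sum(x for x, _, _ in pool) / len(pool)
    my = sum(y for _, y, _ in pool) / len(pool)
    cov = sum((x - mx) * (y - my) for x, y, _ in pool)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _, _ in pool))
    sy = math.sqrt(sum((y - my) ** 2 for _, y, _ in pool))
    print("pool-level correlation:", round(cov / (sx * sy), 2))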

So, concluding… a worthwhile thought experiment, sandblasting away some unclarity, but still, little progress on understanding, on feeling through and through, how Emergence works; what brings it about. But we should! It is that Holy Grail of jumping from mere Data to Information ..!
Joe Cocker died just a couple of weeks ago. Fulfill his request, and lend a little help to this friend here, with your additional thoughts, please…

To watch(ed)

Hmmm… We should all watch out for this here documentary (Yes. Really.), and then have a look back at the great many lead-ups to today that went before it, like here and here, from some five years ago by yours truly, and by many others. But now, with this team on it, will it break through ..?

I have something to celebrate today; will leave you for now with:
Πάντα ῥεῖ
[Heraclitus: All is in transit or in DC]

Merely convicted by PPT

Hm, there was this meme about Death By PowerPoint. Now the toned-down version, conviction (attempt) by PPT, has been found in the wild. As in this here article. Where the prosecutor was too dumb to not hide the culpable text behind the 24-frames-per-second visibility threshold. [Is that ‘stego’ ..?]

Incentives, incentives…
DSCN1210[Vic chic]

Risk of being Duds

Wow, the new year starts off with … failure. I mean, apart from your inability to keep your New Year’s resolutions (already – if you need such a specific date as Jan 1 and the motivation of a ‘fresh start’ which, on the whole, it isn’t, since calendars were invented as easy shorthands, not as exactly life-defining turning points) to change habits, … why didn’t you just change when the need arose ..? You’d be much further along with it – the year is really off to a traditional start when failures of the past are repeated ad nauseam… [As an interlude: No, writing shorter sentences isn’t and wasn’t on any of my resolutions’ lists…]

Which one might expect from less-clear-thinking professions. Though this post isn’t meant to address the precious few, the exceptions to the rule, I do mean ‘accountants’ and ‘IT-auditors’ (IS auditors that don’t understand) to be among those. Who, apart from the slew of other vices you can easily sum up, tend to instruct others to do risk analysis this way:
R6 model
Yeah, that’s in Dutch, but you can probably make out the (actual) content and meaning: the risk analysis is (to be) carried out top-down indeed, analyzing how lower layers of the model will (have to) protect against risks in the layers above, after the controls at any one layer may have failed to ‘control’ the risks… [About that ‘control’ quod non: see this here post of old.]

See? This just perpetuates the toxic myth of top-down analysis. The more one would follow this model, the more deluded one’s risk estimates would become… And this is proposed to lead the way in financial audits…
If you think this limits the spectre of errors – this thinking permeates, and will permeate, the audit / inspection environment, leading to ‘Sony’. Yes, this erroneous thinking is at, or near, the root cause of that mess-up. [Has anyone seen proof that the NK actually did it; I mean proof other than the ‘proof’ trumped up by the most biased party ..?]

Whereas a bottom-up approach would show all the weaknesses that make the effectiveness of higher-up ‘controls’ logically impossible. Controls just aren’t put in place to build a somewhat reliable platform on which to build higher-order controls… Controls are put in place to try (in vain) to protect against same-level risks throughout, as well, and higher-level controls were (are; hopefully not too much anymore) put in place to try to protect against all the lower-level control failures not remediable there… [‘Mitigation’ as the newspeak champion]
Hence, the distribution of error ranges (outside the acceptable sliver in the middle of the distribution of, e.g., transaction flows – hopefully that sliver is the intentional one) gets ever wider the higher one goes in the ‘model’.
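A back-of-the-envelope Monte Carlo sketch of that widening (Python; the layer names, error sizes and catch rates are all made up): each layer introduces its own errors, its control removes only part of them, and the uncaught remainder percolates upward, so the spread of residual error grows with every layer one climbs.

    import random
    import statistics

    random.seed(1)

    layers = ["IT infrastructure", "applications", "business processes", "financial reporting"]
    ERROR_SD = 100.0    # size of the errors each layer introduces (made-up units)
    CATCH_RATE = 0.7    # fraction of its *own* errors a layer's control removes

    def run_once():
        """Residual error carried upward after each layer's control has done its bit."""
        residual, trace = 0.0, []
        for _ in layers:
            own_error = random.gauss(0, ERROR_SD)
            residual += (1 - CATCH_RATE) * own_error  # uncaught part joins what came from below
            trace.append(residual)
        return trace

    runs = [run_once() for _ in range(10000)]
    for i, layer in enumerate(layers):
        spread = statistics.stdev(run[i] for run in runs)
        print(f"{layer:22s} residual spread ~ {spread:5.1f}")
    # The spread grows roughly with the square root of the number of layers below.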

Rightfully wrecking your approach to financial audits, where not the risks of misrepresented true and fair views are managed, but the risks that the auditor is caught and found guilty of malpractice for not doing the slightest bit of the checking promised. ‘Assurance’ hence beginning with the right first three letters. Risk management deployed to cut down the enormous workload (due to the overwhelming risks percolating up the model, as in reality they do..! hence having to check almost everything) to fit nicely within the commercial cutthroat race-to-the-bottom budget, which is then supplemented with ridiculously attractive (through bordering(!?)-on-the-fraudulent hourly rates) consulting business.

Now, the only hope we have is that the R6 model will not spread beyond … oh hey, googling it returns zero results – let’s keep it that way! Let’s not follow BAD guidance…

Jeez… And that’s only two of 124 pages of this

I’ll leave you with …
20141101_155144[1]
At the door to
20141101_160525[1]
– if you know what these are, you know why they’re here…

ID card house coming down

With the ‘eavesdropping’ or whatshallwecallit of the German Defense Minister’s fingerprint, it seems that yet another card was pulled from the infosec card house of solutions. It looks like a distant relative in infosec land, on the ID side, has faltered. Or rather, has been shown to be not 100% perfect. Dunno if that is newsworthy; apparently it is.
Though apparently, in unrelated (?) news reports, not all tools out there have (yet) been cracked by the TLAs. With Tor and Truecrypt as shining examples – but haven’t vulnerabilities in the schemes of those been demonstrated (at least theoretically)? So, are the leaked documents just bait, to pull as many ‘script’/privacy kiddies as possible into environments where they actually can be tracked? If the leaked docu are false admissions of uncrackability … whom can you trust, then?

Or is it all The Return To Normalcy, where we know that each and every tool and method is not 100% perfect, let alone in itself, and we will have to return to doing a risk weighing for every action we take – allowing the Other Side(s) to also be relatively lax and fetch only the clearest of wrong-doer signals. This would require:

  • the boys-who-cried-wolf to tone down a little. Maybe selling fewer tools, maybe achieving more by spending the budget more carefully; a Win;
  • the n00b and drone mob users (think @pple users and similarly meek followers) to raise their constant awareness; a Win;
  • the ‘adversaries’ to not want to be perfect Big Brothers. Hard, both to admit and to not utterly destroy human rights over, but necessary and sobering; a half Win.

So, … this card house tumble may turn out to be Progress.
I’ll leave you with:
DSCN1388[Fragile new, sturdy old; Cologne]

Careful times

In this day and age, one cannot be too careful with one’s digital traces. To the point where normal functioning in modern society is impacted. And then, that’s not enough: your mere existence may cause trouble, by you not being the only one recording your life. As in this here piece.

Which, apart from its many manifest errors of thought on the side of the wannabe good guys (absolute n00b sorcerer’s apprentices at best), has this nugget of inhumanity: “The RMV itself was unsympathetic, claiming that it was the accused individual’s “burden” to clear his or her name in the event of any mistakes, and arguing that the pros of protecting the public far outweighed the inconvenience to the wrongly targeted few”. Well, if you think that, you might as well join terrorists in the Middle East; they think the same, and wouldn’t be allowed to exist at all in any functioning society.

Well, I’ll stop now before suggesting the ones doing such erroneous thinking should be locked up safely in some asylum of the old kind, and leave you with a calming:
20141101_150551[1][On how life actually is]

Possible, hence probable means

Why did it take so long for this to surface ..?
As the <link> mentions, steganography in images is detectable and tools are around to help – how many of you already use them on a regular basis, in times when LOLcat pics are so abundant (hint(?))? – but wasn’t it all too obvious that the Bad (?) Guys knew that too, before you, the pithy defenders, did?
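For what it’s worth, a toy sketch of why naive LSB stego is statistically detectable at all (pure Python on made-up pixel data; real detection tools do far more): overwriting the least significant bits evens out the counts of adjacent pixel-value pairs, which a simple chi-square-style check picks up.

    import random

    random.seed(7)

    def chi_square_pov(pixels):
        """Pair-of-values check: full LSB embedding tends to equalize the counts
        of each value pair (2k, 2k+1); natural image statistics usually don't."""
        counts = [0] * 256
        for p in pixels:
            counts[p] += 1
        chi = 0.0
        for k in range(128):
            a, b = counts[2 * k], counts[2 * k + 1]
            expected = (a + b) / 2
            if expected > 0:
                chi += (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
        return chi

    def cover_pixel():
        # Made-up 'cover' pixel with a deliberately uneven LSB structure,
        # standing in for the lumpy statistics of a real photograph.
        p = min(254, max(0, int(random.gauss(120, 30))))
        return p & ~1 if random.random() < 0.7 else p | 1

    cover = [cover_pixel() for _ in range(100000)]
    stego = [(p & ~1) | random.getrandbits(1) for p in cover]  # every LSB overwritten by a message bit

    print("cover chi-square:", round(chi_square_pov(cover)))   # large: pairs are uneven
    print("stego chi-square:", round(chi_square_pov(stego)))   # far smaller: pairs evened out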

So, why?
Either the tools are around but not widespread enough, or, as <link> suggests, other means might work better. But the other means… are just as cumbersome to deploy continuously, and costly, for the slightest of chances that anything would be leaked in such a sophisticated way, whereas we’re nowhere – really, nowhere – near a similarly near-water-tight deployment of tooling and methodology against much simpler leaking methods. Leaving you in blissful ignorance ..?

Leaving (sic) you with:
DSCN1043[Tarrega door. Shut closed.]

Maverisk / Étoiles du Nord