(Ahead of time, because we can)

This will be published in the July ISSA Journal. Just putting it down here already, to be able to link to it. ;-]
And, first, a picture:
DSCN3152
[Toronno, ON]

After which (Dutch version linked here):

You have the Bystander Bug

One of the major pluses of open source software is that anyone, even you, could check the source code. So, logically, the chance that a somewhat hidden bug will be found in a heartbeat rises to about one once enough people look at the source, now that they can.
But we were recently surprised by just such a bug, with global implications. Sure enough, it turned out that no-one actually keeps tabs on what open source software is used in what, where, and by whom.

So all the global software behemoths turned out to rely on pieces of open source software. And that software, maintained by literally a handful of volunteers on a budget worth less than a couple of seconds of the major software vendors' CEOs' pay, actually had not been scrutinized to the level one would require even on a risk basis. Certainly not given the widespread use, which makes the risk to society grow high. Did we tacitly expect all the software vendors of the world to have inspected the open source code more carefully before deploying it in the global infrastructure ..? How would one propose to organize that within those big, huge for-profit companies? And what when (not if) the global infrastructure isn't 'compiled' into one whole but built using so many somewhat-black boxes? Virtualization and 'cloud' abstract this picture even further. Increasing the risks…

But more worryingly, this also means that 'we all' suffer from the Bystander Effect. Someone's in the water, unable to get out, and we all stand by without helping because our psychology suggests we follow the herd. Yes, there are the stories of the One who beats this and jumps in to the rescue, but there are also stories where such heroes don't turn up. And, apparently, in the open source software world, there are too few volunteers, on budgets far less than a shoestring, who jump in and do the hard work of detailed code inspection. Which means there's also a great number of us, potentially about all, who look the other way, have made ourselves unaware, and just want to do our 9-to-5 job anywhere in the industry. In that way, we're suffering from the bystander effect, aren't we ..?

And, even worse, so far we seem to have escaped the worst results of this in, e.g., voting machines. Here, how close was the call where everyone just accepted the machine programming and expected that, because of its open source nature (if …), "someone will have looked at it, right …!?" Though of course, on this subject, some zealots (?) did indeed do some code checking in some cases, and the troubles with secrecy of votes overshadowed the potential (!) tallying biases programmed in, knowingly or not. But still… when 'machines' are ever more relied upon, moving from such simple things as voting machines to a combination of human-independent big data analysis/action automata with software-defined-everything and the Internet of Things, where's the scrutiny?

Will a new drive for increased security turn out to be too little, too narrowly focused and for too short a time, as many if not all after-the-fact corrections have been? If we leave it to 'others' again, with their own long-term interests probably more at heart than our even longer-term societal interests, we still suffer from the bystander effect and we may be guilty by inaction.

But then again, how do we get all this stuff organized …? Your suggestions, please.

[Edited to add I: The above is the original text; improved and toned down a bit when published officially in the ISSA Journal]
[Edited to add II: This link to an article on the infra itself]

[Edited to add III: The final version in PDF, here.]

One more on how the @nbacc accountants' 'lab' will work in NL

Why am I somewhat convinced that this article is what the results of the accountants' 'lab' proposed in the Netherlands will look like ..? Even though I like the idea! Just not the direction the proposed content of research is taking. Is this a lab for deep, methodologically cast-in-concrete but utterly boring science of the history-writing kind, with drab results that don't apply in the chaos of the (business) world out there, or is this a lab for doing experiments, exploring the unexplored, with results scored on qualitative scales like Innovation and Beauty …?

With a picture because you held out so long reading the above:
DSCN4777
[Yup, the axis going South]

Accountants' discussion: start and end

When @nbacc takes on @tuacc on May 28, this post may be of some help, and what's below may still turn out to be the result…
Accountants' image
[Parool, Saturday, May 24, 2014]

Because let's be honest: a lab for deep professional technique will consistently (if) run up against: "According to theory, nothing here can work, and we know exactly why. In practice everything works, but nobody knows how. We mix theory and practice." And also: "Don't let those who say something is impossible stand in the way of those who are already doing it." Good luck …

[And then I get told again that I'm 'too' cynical. Cynicism is a last attempt to achieve progress, whereas most of those involved (?) remain absent in apathy, mentally or physically. Apathy: the attitude of those who have lost all hope. No, that's really something to be happy about…]

Cryptostego

Just as I'd been discussing stego with colleagues last week, I missed this post by Bruce.
Be sure to read the comments, though. A couple are on stacking steganography over cryptography, which is what I'd presume would work.
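Not that I'd push anyone that way, but here's a minimal sketch of what the commenters' idea could look like: encrypt first, then hide the ciphertext in an image's least-significant bits. The library choices (cryptography's Fernet, Pillow) and the file names are just my assumptions for illustration, not a claim about what's actually used out there.

```python
# Minimal "crypto first, stego second" sketch (illustrative assumptions only).
from cryptography.fernet import Fernet
from PIL import Image
import numpy as np

def embed(cover_path, out_path, message, key):
    # Encrypt first, so whatever gets extracted still looks like random bytes
    token = Fernet(key).encrypt(message.encode())
    # Prefix a 4-byte length, then turn everything into a bit stream
    payload = len(token).to_bytes(4, "big") + token
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    pixels = np.array(Image.open(cover_path).convert("RGB"), dtype=np.uint8)
    flat = pixels.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    # Overwrite the least-significant bit of the first len(bits) channel values
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    Image.fromarray(pixels).save(out_path, "PNG")  # PNG: lossless, LSBs survive

def extract(stego_path, key):
    flat = np.array(Image.open(stego_path).convert("RGB"), dtype=np.uint8).reshape(-1)
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    token = np.packbits(flat[32:32 + 8 * length] & 1).tobytes()
    return Fernet(key).decrypt(token).decode()

key = Fernet.generate_key()
embed("cover.png", "stego.png", "meet at the usual place", key)
print(extract("stego.png", key))
```

Point it at any PNG that's big enough and the result should be visually indistinguishable from the cover; whether and how often that actually happens 'out there' is exactly the question below.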

And, again the question: what would you know of actual use ‘out there’; is it common, rare, what are the characteristics of its users ..? Is it the next big thing after (?) APTs …?

Oh, here it is; the pic you expected:
DSCN2526
[Would ‘Riga’ be a hint that there’s more to the picture …!?]

Column on Open Source code Bystanders

Well, the column is out there now… It will be published in the July 2014 ISSA Journal, but here's a preview, in Dutch.

And, of course:
DSCN6225
[S’bourg, or Brussels – always mix them up and end up in the wrong place for a meeting]

New ton’s theories

OK. Now, which parts of his work would you consider unscientific, both by the standards of his time and by today's standards…
Because maybe he studied all terrains with comparable rigour, and maybe he found stuff of comparable importance. Who knows…

And, as expected:
DSCN4125
[Viewed this way, less often. Cala at Vale, indeed]

Neural data analysis; where is it?

DSCN9545nw
[Koolhaas locator question]

As [Big | Smart | Process | Socmed] Data Analysis spreads, like this:
age-of-miss-america_murders-using-a-handgun
I wondered where the neural network angle was, or is.

Remember, two decades back, at universities there was so much clamour about Neural Networks and their potential, and how credit card companies were the flagship applications with their pattern recognition work ..?
All that has never grown to full fruition, to full potential, as proven by the lack of ubiquitous visibility in our daily lives; if neural networks had been so successful, we'd have seen them on and in just about any IoT device already today.

But we don’t.
Hence, the neural networks community has missed the boat, somewhere. Would anyone have a good pointer to the actual algorithms behind the said [Big | Smart | Process | Socmed] Data Analysis ..? Are they statistical non-linear regressions of sorts (including outlier detection and semi-smart pattern recognition), neural networks, or something else …? Are there still some non-visual or truly visual Kohonen networks out there ..?
I really would want to hear from you.

Oh and while you’re at it, would you also have good pointers to freeware tools to tinker with the algorithms ..?
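To get the tinkering started, here's a minimal sketch of a Kohonen self-organizing map in plain numpy; the grid size, decay schedule and toy data are all illustrative assumptions of mine, not a pointer to any particular tool or to what the Big Data crowd actually runs.

```python
# Minimal Kohonen self-organizing map sketch (illustrative assumptions only).
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    n_rows, n_cols = grid
    dim = data.shape[1]
    # One weight vector per grid node, initialized randomly
    weights = rng.random((n_rows, n_cols, dim))
    # Grid coordinates, used for the neighbourhood function
    coords = np.dstack(np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij"))
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decay learning rate and neighbourhood radius over time
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # Best-matching unit: node whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull all nodes towards x, weighted by grid distance to the BMU
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            influence = np.exp(-grid_dist2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Toy usage: map 3-D points (think RGB colours) onto a 2-D grid
data = np.random.default_rng(1).random((500, 3))
som = train_som(data)
print(som.shape)  # (10, 10, 3): one 3-D prototype per grid cell
```

Swap the toy data for anything you'd actually want to cluster and you can judge for yourself whether the old Kohonen idea still has legs.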

Donuts for capital requirements

20130211_144900[1]
[Slippery world, trying to balance on stilts, as long as that goes]

… Suddenly, I realized that the approach towards balanced capital requirements calculations, through the combination of the four quadrants of external vs. internal and data-driven vs. 'fuzzy' scenario analysis, is flawed when depicted as a blot around the centre of the said four quadrants or axes.

What is needed, is in fact a donut. As in:
DonutII
The ideal is indeed a combination of all sides of the picture, with quality and coverage pushed as far out as one can possibly get; but one wouldn't want to lose all that is gained by moving away from the centre, by reverting to it when combining the conclusions. One will want a donut, capturing all the peculiarities and ground covered, into one multi-multi-faceted capital requirements calculation.

If that leads to an overall picture so fuzzy and risky that no-one would be able to conclude anything from it … then you’ll have succeeded.
In capturing the future, to a level of reliability that's still very close to, but off from, zero.
The fuzzier it gets, the more clearly that is demonstrated, in commensurate degree.

That would be an achievement!
Hence as a token of appreciation, get yourself a donut!

InfoValue, at last ..?

Ah, finally, some depth turns out to have been developed in the field of infonomics. Very, very happy to see that; as you may recall from many previous posts of mine (and from before the very idea of posts, or blogging, or the Internet as such, existed), I had an urge to find, or (help) develop, methodology in this direction. So there you have it, and there.

I’m happy for today. But will post much more on all sorts of bits and pieces that may spring to mind, in the near future. May even want to create a new category for this.
Anyone out there that already has more on the, say, more hardcore accounting side of this? Much needed!

And oh, of course:
DSCN4011
[Bridge, over as yet untroubled waters, Calatrava of course yes indeed, at Valencia]
