Progress at the other end

On state-of-the-art innovation – at the lowly (!?) end of programming.

I mean, it’s not rocket science; it’s quite a bit harder to pull off. Producing something decent, though: that seems to have gotten lost in these überagilescrum times of putting apps out before anyone has a clue what they’re intended for, what problem they’d have to solve for a large enough audience to care. Yes, “If you’re satisfied with your product at launch, you’ve launched too late” seems to be all the rage now. But to win over the world beyond Fubbuck, to win over all the big organisations still out there (which will be there for decades to come, and will still have the power, i.e., money, to dwarf others’ interests when they put their minds to it), one would need decency in the product, hence also in the coding.

But then, there’s this dark and shady, epitome-of-big-org-backed initiative called Pliny to help out. We’re interested, as it may, when it delivers results, help towards better programming practices.

  • By introducing predictive text to low-level core programming.
    But I also see other potential use for its ideas, towards:
  • Better, pre-emptively less buggy coding, by using in-line sanity checks on whatever is put out. The linked article does mention this, but only in passing; I’d say it is an important improvement opportunity in its own right.
  • Better re-use of code. When context and (machine-level) interpretation of intent is gathered anyway, why not map and match that intent to the existing code base? With lots of pre-programmed, debugged and efficienced (hey, I didn’t want to break up the sentence rhythm with ‘made efficient’; oh, what am I doing now) routines, re-use could skyrocket, and the most hideous non-reuse issues, as listed at The Daily WTF, might be prevented.
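As an aside, the in-line-sanity-check idea needs nothing exotic to get started. A minimal sketch in Python, where the `ensure` decorator and the checks themselves are my own illustrative names, not anything Pliny actually ships:

```python
# Sketch: declare an invariant next to a routine; every result is
# validated against it before being allowed out of the function.
from functools import wraps

def ensure(predicate, message="sanity check failed"):
    """Decorator: raise if `predicate(result)` is false for any return value."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if not predicate(result):
                raise ValueError(f"{func.__name__}: {message} (got {result!r})")
            return result
        return wrapper
    return decorator

@ensure(lambda xs: xs == sorted(xs), "output must be sorted")
def merge(a, b):
    # Deliberately naive merge of two sorted lists; the decorator
    # catches any bug that would let an unsorted result escape.
    out, a, b = [], list(a), list(b)
    while a and b:
        out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
    return out + a + b
```

So `merge([1, 3], [2, 4])` passes its own check and returns `[1, 2, 3, 4]`; a buggy merge would fail loudly at the call site instead of corrupting data downstream.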

Would these three be worth it ..? Of course they would. They would raise low-level coding up quite a bit, upping the Lean And Mean Coding Machine sweatshops to greater productivity and hence to quicker full-scale, mature products. And make all the app bungling less interesting, hopefully. Maybe even make all the stuff more secure. But that … waaay too much to ask for. (?)

To round off:
DSCN8534[Hi DARPA in your dark fortress!
  Oh, not you, your supposedly-opponents-but-in-your-pocket Big G houses here]

Jumping the aggre chasm

On the subject of individuality versus group aggregates. And where the characteristics just don’t add up because they do. As in:

  • Elections. Every vote counts, but no single one matters.
  • ‘Democratic’ (quod non) politics in general. Where one can only change things by joining a political party, in which your particular-issue voice is lost; where you are required to toe the party line on many (other) things against your ad hoc will and purpose; and where parties end up representing no one in particular – no party gets all your issues exactly right, and in the end even parties don’t do as promised because they have to compromise.
  • Organizations. Where groupthink (is the) rule(s). Where all collectively are expected to behave individually. Or so. At the end of this.
  • Statistics. Where n times the average of n data points is nowhere near the same as any of the data points. The statistician drowned in the river that is 1 ft deep on average. The average human has 1 nipple and 1 ball. Etc. [Let alone causality, which is only implied in the human discourse, the Story, but has never yet been proven to exist. Philosophers’ stuff]
  • Mathematics (I). Where the greatest common divisor decreases rapidly as the number of elements increases.
  • Mathematics (II). Where there is a continuity ‘correction’ when jumping from discrete to real arithmetic.
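The drowned statistician and the two Mathematics bullets can be illustrated in a few lines of Python; a toy sketch, with all the numbers made up for illustration:

```python
import math
from functools import reduce
from statistics import NormalDist, mean

# Statistics: the river that is 1 ft deep on average.
depths = [0.25, 0.25, 0.25, 0.25, 4.0]   # four shallows, one deep channel
assert mean(depths) == 1.0               # the average matches no actual depth

# Mathematics (I): the gcd of a growing set can only shrink (or stay put).
xs = [840, 630, 420, 105, 98]
running = [reduce(math.gcd, xs[:i + 1]) for i in range(len(xs))]
print(running)                           # [840, 210, 210, 105, 7]

# Mathematics (II): the continuity 'correction' when jumping from discrete
# to real arithmetic: P(X <= 55) for X ~ Binomial(100, 0.5), approximated
# by a normal distribution evaluated at 55.5 (the +0.5 is the correction).
n, p, k = 100, 0.5, 55
approx = NormalDist(n * p, math.sqrt(n * p * (1 - p))).cdf(k + 0.5)
print(round(approx, 3))                  # about 0.864
```

Each added element can only carve divisors away, never add them back; hence the ever-shrinking greatest common divisor as the group grows.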

But now, first, your pic of the day:
DSCN1315
[Also Girona, oft missed]

Which all reminds us of Ortega y Gasset’s rants against the hordes, the masses – his Revolt of the Masses is the fear of the shrinking greatest common divisor.

Which also reminds us of the perennial individual-versus-history debates when discussing innovation. One can go it alone, but will not gain traction. Or (later) succumb to the pressure of joining others, losing something for the sake of being allowed to join. Hmmm, I feel there’s much more to be said here. But the bits margin on this blog did just not suffice. To be continued. In the meantime, I’d welcome your contributions to the above list …

Coining an answer; Bit-passports

The answer to the final question (“… why the governments didn’t invent this sooner,” he says. “I came up with this over a weekend in my spare time, why didn’t they? …”) in this here very interesting piece is easy: Enrollment Problem Plus Risk Management.

But still, the idea of using Bitcoin-style crypto solutions for the ‘international’ passport problem is useful, and might work. In some way. Not this self-certification one, though. If you’re aware of how long PGP has been around, you should be aware of all the failures of any form of tribal-cred-branching-out IDs.
And a great many governments may just not have a sufficiently pressing need for a new passport scheme. The risks of the current model are known and (apparently) manageable.

So I’ll leave you with:
DSCN1415[Apologising calmly. And frequently.]

Your info – value

Wanted to post something on the value of information. Then, this came out a couple of weeks ago. By way of some sort of outside-in determinant of the value of (some) information… [Oh and this here, too, even more enlightening but for another discussion]

who-has-your-back-copyright-trademark-header
Which appears to be an updated but much-shortened version of what I posted earlier. Have players disappeared, or does no one care anymore about the ones that dropped out ..?
Anyway.

Yes, I wasn’t done. I wanted to add something about how ‘regular’ organisations – i.e., not the ones that live off ripping people off of their personal data for profit as their only purpose, with collateral-damage functionality to lure everyone in – would value the information they thrive on: by looking inside, not by circling around the perimeter.
I could see that being established via two routes:

  • The indirect avenue: the re-build cost, i.e., what it would cost to acquire the info from scratch. Advantage: it seems somewhat tractable. Drawback: much info would be missed out on, in particular the unstructured and the intangibly stored. Employee experience …!?
  • The direct alley. Not too blind, but still hard to go through safely: taking stock of all info, locating it, tagging it, among other things, with some form of revenue-increase value. Advantage: bottom-up, with a lot of FTEs to profit from the Augean labour (Hercules’ fifth). Drawback: the same.
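For what it’s worth, a toy sketch of the two routes side by side, on hypothetical asset data; all figures and field names are made up for illustration:

```python
# Hypothetical information-asset inventory; None marks the intangibles
# (e.g. employee experience) that the rebuild-cost route cannot price.
assets = [
    {"name": "customer master data", "rebuild_cost": 250_000, "revenue_uplift": 900_000},
    {"name": "pricing history",      "rebuild_cost":  40_000, "revenue_uplift": 120_000},
    {"name": "employee know-how",    "rebuild_cost": None,    "revenue_uplift": 300_000},
]

# Route 1 (indirect): sum of rebuild costs -- tractable, misses intangibles.
indirect = sum(a["rebuild_cost"] for a in assets if a["rebuild_cost"] is not None)

# Route 2 (direct): tag each asset with an estimated revenue contribution.
direct = sum(a["revenue_uplift"] for a in assets)

print(indirect, direct)   # 290000 1320000
```

The gap between the two totals is exactly the point: the indirect route silently drops whatever cannot be rebuilt from scratch.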

OK, moving on. Will come back to this, later.

Not yet one IoTA; Auditing ‘technology’

[Apologies for the date/time stamp; couldn’t pass.]
First, a pic:
20140226_113554
[Classy classic industrial; Binckhorst]

Recently, I was triggered by an old friend about some speaking engagement of mine a number of years back. As in this deck (in Dutch…).
The point being: we have hardly progressed past the point I mentioned in that deck, namely that ‘we’ auditors, IT/IS auditors included!, didn’t fully adapt to the, then, Stuxnet kind of threats. (Not adopt, adapt; I will be a grammar-and-semantics n.z. on that.)
As we dwelled in our Administrative view of how to control the world and, commonly though not fully comprehensively, had never learned that the control paradigms there were but sloppy copies of the control paradigms that Industry had known for a long time already, effective in their environment of use. As in this post of mine. Etc.

But guess what – now, many years later, we as a profession still haven’t moved past the administrative borders. Hence, herewith

A declaration of intent to develop an audit framework for the IoT world.

Yes, there’s a lot of ground to cover. All the way from classification of sensors and networks, up to discussions about privacy, ethics and optimistic/pessimistic (dystopian) views of the Singularity. And everything in between that auditors – the right kind: IS auditors with core binary skills and an understanding of supra-supra-governance issues – might have to tackle. Can tackle, when equipped with the right methodologies, tools, attitude, and the marketing to be able to make a living.

Hm, there’s so much to cover. Will first re-cover, then cover, step by step. All your comments are welcomed already.
[Edited to add: Apparently, at least Check Point (of firewall fame; oh yes, don’t complain, I know you do a lot more than that yesteryear stuff; as here) has some offerings for SCADA security. And so does Netop (here). And, of course, Splunk. But admit: that’s not many.]

Clustering the future

Was clustering my themes for the future of this blog. Came up with:
Future trend subjects[Sizes, colours, or text sizes not very reflective of the attention the various subjects will get]
Low-sophistication tool, eh? Never mind. Do mind to comment, though, on the various things that would need to be added. Yes, I know I have left much out of the picture, for brevity. But I will want to hear whether I missed major things before I miss them in next year’s posts. Thank you!
And, for the latter,
DSCN0924[Bah-t’yó! indeed]

SPICE things up, maturely

Where just about everyone in my Spheres was busy ‘implementing’ (quod non) all sorts of quality ‘assurance’ or ‘control’ (2x quod non) models, in the background there was quite some development in another, related area that may boomerang back into the limelight, for good reason.
First, this:
DSCN8573[Zuid-A(rt)sifyed]

The subject of course regards SPICE, or rather the ISO/IEC 15504 that it has turned into. Of the Old School software-development-quality-improvement era. Now transformed into much more…
In particular, there are Capability levels instead of ‘maturity levels’.

What more can I add ..? Systematic, rigorous, robust, resistant to commercial panhandling. The intricacies … let’s just point to the wiki page again; ’tis clear enough, or you need other instruction…

Lemme just close off by asking you for your experiences with this Standard…?

Maverisk / Étoiles du Nord