On qualitative calculations


[Guess the country. Wrong.]

Some time ago, I hinted that maybe some combination of fuzzy logic and wavelet-like mathematics might deliver tools for qualitative risk management calculations.
Now is the time to delve a little into the subject. If possible, somewhat methodologically.

But hey, you know me; that will not work too perfectly, as perfection is boring. I’ll just take it away with a discussion on scales, and throw in a handful of Ramsey-Lewis work:
There’s the nominal scale, where one can only tally observations into the categories or bins one would want to place them in. Presumably, we’d have observations of just one quality (‘aspect’) of the population of observed items [either ‘real’, if there is such a thing, or abstract..! Oh how many problems we face here already]. One cannot rank categories against each other, nor add or subtract, nor ‘size’-compare; at most one can say whether two observations fall into the same bin.
There’s the ordinal scale, too. Here, we have some form of ranking of categories; they can be ordered. Categories are either dichotomous (an observation put into one category excludes that observation also being put into another) or non-dichotomous (category membership is non-exclusive: ‘completely agree’ supposes ‘mostly agree’). At least we can compare, by order, between ‘larger’ and ‘smaller’, or by precedence, but not much more; no calculation.
On to the interval scale, which has degrees of difference, like (common) temperature and dates. With their arbitrary zero we can’t talk sensibly about ratios: 10°C is not twice as warm as 5°C … But we can add and subtract, just not multiply and divide.
This, by the way, is the ‘first’ scale considered to be quantitative, the above are qualitative..!
Next is the ratio scale, that has all of the above and a definitive zero, hence full calculation can be done.
Trick question for you: Where would the binary scale go …?
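Before answering that: a quick toy of my own (with arbitrary cut-offs, as they’d have to be) illustrating why these scales are ordered by information content. Coarsening ratio data downwards is trivial; going back up is not, without injecting assumptions:

```python
losses = [0.4, 1.2, 3.7, 9.9, 0.8]   # ratio scale: loss amounts in M€ (invented)

def to_ordinal(x: float) -> str:
    # Coarsening is easy; the cut-offs are a defensible but arbitrary choice.
    return "low" if x < 1 else "mid" if x < 5 else "high"

print([to_ordinal(x) for x in losses])   # ['low', 'mid', 'mid', 'high', 'low']
# Going back up is impossible: 'mid' could have been anything in [1, 5).
```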

Just to mention Cohen’s kappa for qualitative score agreement among raters; it may or should come back later. And to mention all sorts of ‘Big’ Data analysis conclusions, with all the monstrosities of mathematical errors in them, squared with a lack of understanding of the above (in particular, that information degrades in the direction from ratio to nominal; it cannot go the other way without adding relatively arbitrary information..!)
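For concreteness, a minimal sketch of that kappa in plain Python, on invented ratings (two raters scoring the same ten risks on an ordinal ‘low/mid/high’ scale; the data are purely illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Observed agreement, corrected for the agreement expected by chance.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["low", "low", "mid", "high", "mid", "low", "high", "mid", "mid", "low"]
b = ["low", "mid", "mid", "high", "mid", "low", "high", "low", "mid", "low"]
print(round(cohens_kappa(a, b), 2))   # 0.69: substantial, but not perfect, agreement
```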

Now then, fuzzy logic. It takes an interval scale or ‘better’, and stacks a probability function on top to arrive at quantitative non-dichotomy. Next, work through cause-effect trees [hey, that’s a new element – I’ll come to it shortly] where some cause both happens with some probability and doesn’t happen with another probability at the same time, which propagates through OR- and AND-gates to create effects with weird probabilities / weird aspects of probability, and on through the chain / feedback loops and all. If we can assign probabilities to less-than-interval scales, we would … oh, we do that already, in fault trees, etc.
Except that we don’t. We do not, I repeat, do not, build real fault trees in general business risk management. We do the fresh-into-kindergarten version of it only! NO you don’t. And also, you don’t assign proper probabilities. You screw them up; apologies for the words.
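To make the gate mechanics concrete anyway, a toy sketch (entirely my own construction, not any standard tool): each basic event carries a probability interval [lo, hi], a crude stand-in for the fuzzy ‘happens with one probability and doesn’t with another’ idea, and the intervals propagate through AND- and OR-gates under an assumption of independence. All event names and numbers are invented.

```python
def and_gate(*events):
    # All causes must occur: multiply the interval bounds.
    lo = hi = 1.0
    for l, h in events:
        lo, hi = lo * l, hi * h
    return (lo, hi)

def or_gate(*events):
    # At least one cause occurs: complement of 'none occur'.
    none_min = none_max = 1.0
    for l, h in events:
        none_min *= (1 - h)
        none_max *= (1 - l)
    return (1 - none_max, 1 - none_min)

# Hypothetical basic events, elicited as ranges rather than point estimates.
power_failure  = (0.01, 0.05)
backup_fails   = (0.10, 0.30)
operator_error = (0.02, 0.10)

outage   = and_gate(power_failure, backup_fails)   # both needed
incident = or_gate(outage, operator_error)         # either suffices
print(incident)   # roughly (0.021, 0.114)
```

Feedback loops, dependence between events, and proper fuzzy membership functions are exactly what this toy leaves out; and exactly what makes the real exercise hard.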

So, we need combinations. We need the flexibility to work with qualitative scales when (not if) we can do no better, and with quantitative scales wherever we can. Being careful about the boundaries ..! Maybe that is (the) key.

[Interlude: wavelets, we of course use on ratio scales, as a proxy for fuzzy logic in continuous mathematics (?)]

Why would this be so difficult ..? Because we have such limited data ..? That’s solvable, by using small internally crowd-sourced measurements; using Cohen’s kappa (et al.) as mentioned.
Because we have no fault trees? Yes, indeed, that is your fault – trees are essential to analyse the situations anyway. Acknowledging the difficulties of getting to any form of completeness, including the feedback loops and numerous time-shifts (Fourier would have ahead-of-time feedbacks …! through negative frequencies…). Not acknowledging consistency difficulties; one could even state that any worthwhile fault tree, i.e., any one that includes sufficient complexity to resemble reality in a modelling way (i.e., leaving out unnecessary detail (only!)), will have inconsistencies included, or it’s not truly representative… ☺

Hm, I start to run in circles. But:
• Haven’t seen fault trees in risk management, lately. We need them;
• Let’s apply ‘fuzzy logic’ calculations to fault trees. We can;
• When we use less-than-ratio scales, let’s be clear about the consequences. Let’s never overstate the ‘mathematical rigour’ (quod non) of our work, as we will, with certainty, be held to account for the errors that causes.

Say [null] to voting, 1 to on-line


[Another of those places where I was hard at work. Right.]

Now, the Netherlands is back to voting electronically … In a way (in Dutch).
One votes on an electronic machine that prints your vote; then you hand in the slip, which will be machine-read for tallying. Oh yes, that’s voting electronically. Not only does it look clumsy (and it is), it also fails some basic but very fundamental requirements anyone would consider perfectly normal for (nation-wide democratic) elections. A great many ‘rogue’ (by the standards of others …) states have some of those concerns covered better…

My personal gripe, however, doesn’t concern this, as democracies dominated by parties that have many-issue programmes and no direct recourse against plain flat-out lying to voters are a complete #fail of democracy anyway. And I am unsure that perfect democracy is feasible.
Or wanted! As democracy will result in mob rule, and other wrongs far better explained by much greater minds.

No, my problem is that, even now, one still has to be present at one’s own neighbourhood voting station. Why hasn’t some form of ‘Internet voting’ been implemented ..? There have long been methodological proposals out there on how to do that, with the safeguards required. I recall that two decades ago, a Swiss canton did something with this …?
You may counter that there’s still the problem of reliable code. But that can be solved by creating something open source, to be mandatorily checked by all parties (or their tech-savvy proxies; very probably from outside politics, where the pretense of actual insight into, e.g., code may exist, but nothing, zero!, of the real thing). And of course the app would run on all platforms, in particular mobile. See how that would increase voter turnout!

And of course we would have fallback traditional stations anyway, in particular as, e.g., the elderly would not necessarily understand how to #appvote (tag claim).
And dear reader who wants to complain about secrecy: dig into basic crypto and the protocols developed in science first, please…
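For the diggers, a minimal sketch of one bottom brick such protocols build on: a hash commitment, in plain Python with only the standard library. A voter can publish the commitment at vote time and reveal vote plus nonce at tally time; nobody can derive the vote from the commitment in between, nor alter it afterwards. Real schemes layer blind signatures, mix-nets or homomorphic tallying on top; this is an illustration, emphatically not a voting system.

```python
import hashlib
import secrets

def commit(vote: str):
    # A long random nonce hides the (low-entropy) vote inside the hash.
    nonce = secrets.token_hex(32)
    digest = hashlib.sha256((nonce + vote).encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: str, vote: str) -> bool:
    return hashlib.sha256((nonce + vote).encode()).hexdigest() == digest

digest, nonce = commit("party-42")        # digest published at vote time
print(verify(digest, nonce, "party-42"))  # True: opens correctly at tally time
print(verify(digest, nonce, "party-13"))  # False: the vote can't be swapped
```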

If only I had the time (made available to me, paid decently) to really research this all…!

Collateral sustainability


[Paris. Repeatable idea.]

In order to get to truly sustainable business (or non-profit, or public) operations, the organisation should not bolt on sustainability initiatives, but build them in. Into the primary processes themselves. So that ‘sustainability’ becomes collateral to the money-making, or x-making, that the organisation had set out to do.

One way of doing this is by (external?) pressure to have the polluter pay. In that way, similar to VAT I guess, any organisation neutralises the ‘damage’ it does by generating moneys for restorative initiatives, externally, or internally if one is allowed to spend the surcharge on such restorative initiatives oneself; this would of course need extensive, costly and fraud-sensitive, ‘independent’ auditing. Self-control will not do! And that is if the damage can be undone fully at all; replanting a few fast-growing trees is not a substitute for eradicating well-developed forests. Covering an open pit mine with green is not a repair of the environmental damage done. Full footprint costs are the only reasonable foundation for calculations. Hence, the moneys may better be spent by others; governments or special interest groups supported by governments.
In this way, too, the production methods (ingredients, raw materials, labour, etc.) that pollute less, into the internal and, often more so, the external environment, will also be cheaper. There will be an incentive, at last, to use more sustainably lean production methods. To let employees work from home more, and/or flex. Etc. By having a polluter surcharge (which, for economy-wide cost neutrality, may take the form of a variation on corporate profit tax), the pressure gets real, and the pressure will not come through the public image of the organisation alone.

∗ Note that an economy-wide, i.e., country-wide (or region-wide, e.g., EEA), levy of such a surcharge may need a compensatory import tax for goods and services imported, and one may consider putting the proceeds into compensation for exports. Otherwise, the playing field wouldn’t be level, globally. Or would corporate tax discounts help; and/or how would the import of raw materials, etc., flow through production ..? Maybe not use an exit-based VAT but an input-based ‘Destroyed Value Included’ charge ..? It will be the bookkeepers’ wet dream either way.
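To see why the exit/input distinction matters, a toy calculation (all figures invented) over a three-stage chain, comparing a levy that ultimately rests on the final sale price with a levy on the ‘destroyed value’ each stage adds:

```python
RATE = 0.10

# (stage, inputs bought, environmental damage added, sale price); invented numbers.
chain = [
    ("miner",     0,  80, 100),   # most destruction happens here
    ("smelter", 100,  20, 180),
    ("factory", 180,   5, 300),
]

# Exit-based, VAT-style: the levy ends up resting on the final sale price.
exit_based = RATE * chain[-1][3]

# Input-based 'Destroyed Value Included': each stage pays for the damage it adds.
input_based = sum(RATE * damage for _, _, damage, _ in chain)

print(exit_based, input_based)   # 30.0 vs 10.5, and levied at different stages
```

The point isn’t the totals but where the charge lands: the destruction-based variant hits the mine, not the shop, which is exactly what makes imports and exports awkward.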

Your thoughts, please!

Mall, where you’re heading?


[Some think this is hideous but it’s fresh, even if it ruined the neighbourhood.]

This then, on the future of retail. With an attempt to analyse and predict [You know my opinions on that …] what malls, or shopping centres in general, may look like in, say, five years’ time.
Let’s first define the inputs for my ‘analysis’:

• Rapid developments of web shops in all retail areas, not just non-food and food ;-] Yes indeed, on the surface, even food already has its web sale / home delivery channels.

• At least temporarily, an increase in one-off / small batch production items as people seek things differing from mass products.

• The previous, along with flexibility in price. Not all one-offs / small-batch products will have, or require, premium prices; mass products, however, may be pushed by buyers’ expectations to lower their prices ever further.

• Premium-price indetermination and small volumes work only when product offerings are visible to (the increasing slice that cares, of) potential buyers. But we have that Internet, and may improve on the freshness of its content. Maybe through ambient-intelligence / hyperlocal (discount/sales) offerings: some model of micro-location- or micro-time-based ad hoc approval of pitch presentation (on the hand-, wrist-, or head-held device), plus trust in the micro-time and -location permanence of the privacy-sensitive (i.e., all) data fed back, that would actually be trusted by sufficient numbers of sufficiently affluent potential buyers to get past the network-effect point.

• This implies the expectation that shoppers will still want to go out in the first place, to socialise (in fact; not by verbal communication or so, but by mere presence) with the unknown Others out there.

• Through cost increases, people transport will diminish or will shift to cheaper means, e.g., bikes – with much smaller load-carrying capacity.

• So there will be various markets out there:
Mass markets for convenience products, either at hypermarkets out of town or with home delivery – with the trend probably going towards home delivery, as that saves on the nuisance of going out and being among the hoi polloi, people in general being ever more in the devolution mode of less and less exposure to the outside-of-the-house environment. (A note: I’m unsure whether fitness will move ‘back’ to indoor clubs, or will move ever more into the ‘fresh’ outside air, or both, as more people (will go to!) do anything at all about their bodies ..?)
Mass retail webshops, for the home delivery markets as well as for the Lm-markets as below. Also serving product comparison.
Local markets for locally produced one-off / small-batch items. Note that these may be produced in somewhat larger batches if shipping can be done within the perish timeframe, in particular for non-perishables. But no batch may be so big as to become mass, as the freshness and uniqueness (only in retail is that a scale, not an absolute) would diminish too much; local shoppers for hipness may not like too-large-scale availability. And note that even if shoppers would like to be surprised by the (next)^x new thing that only they have access to, they will still want to be the first to a. compare with Everything else out there, through their mobile Total Information Awareness device(s), b. share their überhipness immediately, thus diminishing the newness value of their latest purchase by snapchatting it, i.e., asking for copies to become available ASAP.
Local outlets for mass-produced goods, when the product selection experience can be made a worthwhile experience in itself as with luxury goods (mass produced in sweat shops as they are, along with the mass-produced rip-offs), or when the delivery/pick-up can sufficiently add to the product experience, to warrant the shop rent. This will hold as long as opulent display of one’s Mammon worship (only; by implication) has positive residual value (net of the ridicule by ever greater masses).

• And where does this leave the Mall, or the Shopping centre (if there were a difference) ..? The above demonstrates: uniqueness and experience will be the thing. Locality of shops, and trust in local production and uniqueness, will be key. Hence, chain stores can survive only if they enhance the experience, or have a sufficiently fluent web-purchase / local-delivery experience (i.e., perfect logistics) and not too much generic stuff in stores to ruin the experience. Or provide in-store the advantages of web shops (enormous collection, vast browsability) in combination with the premium shopping experience, but also in combination with affordability! If not affordable, markets will be too small to attract sufficient clientele to warrant rent and staff; concentration will result in ever smaller enclaves of luxury shops. Shoppers will drop out, and not come from far away (don’t want the hassle and cost anymore) to shop around (among a public they don’t associate with, for products they can’t afford and don’t want for their lack of quality (opulence only, no content quality)).
Shoppers will come for Ll- and Lm-shops, though. Aim for those..! Not big shops, but small ones; many. The low rent these may provide, and the insecurity of continuity (as real estate owner, you have control over that: rent should follow shop income, not the other way around…), are a matter of fact. Big stores will be empty, for much longer, and will cost you much more non-rent, much longer.
But oh, won’t web shops empty out the shopping centre ..? No. They’re a fully alternative channel, but they’ll lose on shopping pleasure and on freshness (newness) of products – web sites one would need to track too frequently and too extensively (in numbers) to keep up, and people will get bored pretty quickly with that. In particular if Ll-shops do that for them anyway.

Your comments, please! (see link below…)

Double shhh


[On a rooftop ..! ‘t Spant, Bussum]

Yeah, it’s a post on double secrets again. Not just because I haven’t seen any conclusive research on what to do with them: how to handle oversight (what is warranted, etc.), what limits to justifications there would be, how to close the recursive secrecy gap, etc.
Not even because of stuff like this.

But because another issue was pointed out yesterday/today in a post at Bruce Schneier’s blog: Where double secrets may exist, trust is lost, and (theoretically and practically) impossible to regain.

Which is a problem not only for ‘current’ (big) companies relying on the trust of ‘consumers’ (who are in fact drone suppliers of almost completely free raw materials) and other business partners on the receiving end, as their business model will crumble to nothing when (not if) those cheapo suppliers leave in massive numbers.
It also spells trouble for the not-yet-big, almost-not-yet-companies. As defined in this slide deck, those new companies rely on distributed power, which is based on trust. The said (not sad) companies can grow only to the point where the base of trusting counterparts in (~facilitated) exchanges still grows. If, at one end, trustors still flow into the system, but trustors at the other end flow out at a faster pace, the base will become ever narrower; the house of cards becomes more fragile and will collapse when some business wind (if only a draft) comes along.
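A toy illustration of that narrowing (every number invented): trustors keep flowing in at a fixed rate, but once a double secret surfaces, the outflow rate ratchets up and, trust being unrecoverable, never comes back down.

```python
base, inflow, rate = 1000.0, 100.0, 0.05   # trust base, inflow per period, outflow rate

for period in range(1, 31):
    if period == 5:
        rate = 0.12                  # the double secret comes out
    rate = min(rate * 1.03, 0.9)     # lost trust keeps eroding, never regained
    base += inflow - rate * base     # in at one end, out faster at the other

print(f"trust base after 30 periods: {base:.0f}")   # well below where it started
```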

So, in order to really ‘disrupt’, as if that were a lofty goal of any business [I am very much opposed to such thinking! ‘Disruption’ invariably leads to massive job losses and ever so many more family members’ life dreams ruined. No, the new industry will be one of (relatively) jobless growth and yes, at some scale one has to take the macro effects into account], one would need a pre-emptive way to deal with double secrets, so the trustor trust base may grow in breadth and depth.

My feeling is now that this sort of issue may also be at the foundation of the inevitable-collapse-of-any-democracy issue. As predicted tongue-in-cheek, and shown practically throughout history. Are we at the verge of such a (Schumpeterian?) collapse, a dinosaur-extinction phase, in the way societies manage themselves? Utopian or dystopian visions of what’s next for the coming era (remember the ‘Mayan calendar’ prediction of such a ‘new era’ ..?) may both be overblown, or … does reality always play out a bleak version of what could have been?

All in all, it seems rather important that someone [preferably someone more intelligent than me – regarding these issues, that is] would have a look at this all…
Is there really nothing out there at the intersection of sociology, trust, legal, and economics research that has pointers on how to resolve this issue ..? If the NSA or other TLA(s) are listening in and would have some Confi stuff, that’s good, too …!

Domo tics


[Voorburg, Herenstraat; Mú by Ming Hu Chen]

With the Internet of Things coming at us at lightning speed, why haven’t we seen a surge in easily applicable domotics ..? Sure, there have been little, often not-too-well-designed and plasticky appliances out there for a while, but they often ended up in the bargain bins before full‑scale deployment could take place. And we see trials here and there in offices (cheesily, in Dutch, and video) with bits and parts of true ambient-intelligence-style domotics. But still, not the real stuff.

If you now say that, within our homes, domotics don’t (doesn’t?) take off because of a lack of network effects (What are the benefits? [How to size them up and compare/add them?] Where’s the tipping point?) within the confines of our houses individually, and because prices are still too high (among others, due to the lack of turnover and initial-investment recoup), and because installation is too cumbersome (all devices need placement and connection, probably to power supplies other than oft- and quickly-failing batteries, and either cabled (old-fashioned), requiring breaking into walls, or wireless, requiring ugly visibility) – I would reply that these are issues to overcome with smarter solutions.

But then, what is the date of construction of Bill Gates’ Xanadu 2.0 home [If you’re listening: I would like a tour, yes indeed, thank you!], with all its follow-me temperature, lighting and ambient music ..? Over a decade ago; does M$ deliver a full copy solution to every home yet? Domotics Explorer 2.0 doesn’t give too many Bing search hits…

A thing that may need to be settled first is the general architecture of it all. Sensors and actuators need to be put in place, but how and where; what secondary elements do we need (control centres; where and how many (networked, carry-on or stationary?); signal-generating actuators; separate devices (hi keycard/pin) or built into other things, or programmed into other things (hi smartphone, à la NFC payments, now not so N))? What human control, using monitoring dashboards and some form of input (finger clicks and voice work well in sitcoms; phone screen swipes, maybe?), can we have, can we allow? Are any ethics involved (haves/have-nots; control over one’s environment vs. experiencing new things, conflicts, the commons) ..?
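A minimal sketch of that plumbing question (entirely hypothetical names, and a tiny in-process publish/subscribe bus standing in for whatever control-centre topology one would pick):

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """A toy event bus: sensors publish, actuators subscribe."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self.subs[topic].append(handler)

    def publish(self, topic: str, value) -> None:
        for handler in self.subs[topic]:
            handler(value)

bus = Bus()
# Follow-me comfort: two actuators react to one presence sensor.
bus.subscribe("presence/livingroom", lambda here: print("heating:", "comfort" if here else "eco"))
bus.subscribe("presence/livingroom", lambda here: print("music:", "resume" if here else "pause"))

bus.publish("presence/livingroom", True)    # someone walks in
bus.publish("presence/livingroom", False)   # and leaves again
```

Even this toy begs the conflicting-demands question raised further down: whose playlist resumes when two people walk in?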

And there’s the question of business models for device suppliers and servicers. Subscriptions with updates of software, and hardware, or plain purchase? What about interconnectivity and multiple standards?

Just check the Wikipedia page; so much more to discuss. E.g., re the integration or not (I definitely prefer not) of domotics with “‘smart’ grid” ideas of outside monitoring of our home internals – though connection to the outside may help your fridge order short stock/supplies. Or what to do when conflicting demands are made within the same room (try sharing music tastes with your kids; they just don’t seem to get Purple, Floyd, or the Maiden, or even Zappa).
But the question remains: How to overcome the not-yet-existing network effects requirement ..?

Stats and news


[Van Gogh’s ear]

The news is still filled these days with all sorts of incidents with (new) media, information leaks, etc. etc.
How long will it take us to realise that this is just the noise to the signal of health? It’s not that traditional media, and new media, are filled; it is that their space is too limited. If they had sufficient space to cover all that goes well, no-one would notice the mishaps. Risk-based controls, anyone ..? Because, if you did that seriously, you probably wouldn’t control anything!

Your comments, please.

Predictions 2014, the InfoSec edition


[DUO Groningen (a couple of years ago), where a leak led to many a student’s funds being defrauded. Looks original, is just chasing outer effects]

So, some of you have seen my Predictions 2014 including the update or two, and other posts on developments in the Information world at large.
But what you desperately needed, awaited before even starting on the Christmas decorations (for those in the West), and held out on any shopping for food or beverage for, I know, I know [Heh, that’s the opposite of Unk Unks ;-], is my completely and utterly unpretentious predictions for the Information Security arena of 2014.

Well, here we go then:

Advanced Persistent Threats will blossom like weeds (not wiet!) in 2014, APTs being the ultimate blended threats to confidentiality of information. There may be cases of government-on-government espionage that highlight the ‘modern’, squared or cubed, variant of traditional intrusion & spying (information exfiltration) work. But moreover, there will be much-publicised incidents where business-on-business espionage, possibly helped by shady government agencies, is outed as more than a theoretical possibility. Of course, you all know that this is nothing new, but the general public will demand an answer from some Board members and State secretaries here and there (literally), of victims and perpetrators (denial becoming less, if at all, plausible). These answers will be the definition of lame. The infosec industry will rush to develop (maybe not yet fully utilise) the market; contra, but hush-hush, also pro…

Certificate vulnerabilities will be shown to be a factor of import. Yesterday’s unprotected printer WiFi will be today’s certificates being stolen or manipulated. Bot victimisation of clients will happen less by trojan planting and more by means of hijacking certificates (‘Certjacking’? You read it here first! [Update to add: well, in this spelling …]) to do … a sort of low-level ID / trust theft. This will not be explained nearly well enough to the general public to get them concerned, but among tech-savvy CIOs and IT managers, this will cause a stir. And major sweeps through their own infra. And not much by way of better future cert management.

Crypto-failures. Cryptography, as far as actually implemented at all (some ‘implementations’ may embarrassingly be found out to have been done on paper only!), will be shown to have failures in at least two ways: procedural errors will lead to (very publicly visible) non-availability of information, e.g., when asymmetry isn’t implemented well due to lack of understanding among bureaucratic procedure-writing committees mumbling and fumbling about; and public demonstrations of bad technical implementation, the coding being so shoddy that the strength of the crypto drops to n-day crackability (with 0 < n < 2, I guess).

Quantum computing, on the plus side for crypto, will see its first practical proofs of concept. Where the PoCs are used to protect the most secret of some government’s information, but with that information having to be used by the most bumbling-about officials, so the overall end-user-to-end-user effectiveness will be close to zero, helping any and all attackers (rogues, organised or not; state(-sponsored) organisations, etc.) to learn about tentative, among them maybe class-hack, attack vectors.

Methodological innovation in information security. As already discussed earlier, and here, and here, and with OSSTMM, and before that in quite some other posts as well. Now Docco has also joined the fray, advising SMEs from the accountancy side … Which shows this prediction for 2014 is already hatching.
On this item, we will see much improvement. E.g., combining the OSSTMM framework with SABSA. Combining the rebooted CIA with the fresh install of the (alternative to the) ‘15.5 risk’ management approach via the OSSTMM framework. And so on. Interesting! Want to contribute!

I’ll leave these here for you to follow up. When (not if) I’m right on any of the above: Yeah, see? Told you so! And if (not when) not all five are square-on: Hey, they’re only predictions! Don’t shoot the messenger who only conveyed the message of the Great Engineer Behind The Scene.
Though I would like to receive a bottle of good (I mean, really good) red Bourgogne for each prediction that turns out to be somewhat right; on the good side of 50-50. Any takers?
The opposite, I can unfortunately not do as it would have way too many takers…

Maverisk / Étoiles du Nord