Loss of memory

Recently, I was reminded (huh) that our memories are … maybe still better than we think, compared to the systems of record that we keep outside of our heads. Maybe not in ‘integrity’ terms, but in ‘availability’ terms. Oh, there, too, some unclarity whether availability regards the quick recall, the notice-shortness of ‘at short notice’, or the long-run thing, where the recall is ‘eventually’ – under multivariate optimisation of ‘integrity’ again. How ‘accurate’ is your memory? Have ‘we’ in information management / infosec done enough definition-savvy work to drop the inaccurately (huh) and multiply-interpreted ‘integrity’ in favour of ‘accuracy’, which is a thing we actually can achieve with technical means, whereas the other intention oft given is of data being exactly what was intended at the outset (compare ‘correct and complete’) – or do I need to finish this line that has run on for far too long now …?
Or have I used waaay too many ””s ..?

Anyway, part II of the above is the realisation that integrity is a personal thing, towards one’s web of allegiances as per this and in infosec we really need to switch to accuracy, and Part I is this XKCD:

A philosophical one: Polynesian time

When one considers Einstein’s profound maxim that Time is what keeps everything from happening at once, is one lost for causation and/or free will ..?
The former excluding the latter, if taken to its utter consequences. The latter presupposing the former – for how else can one’s decisions turn into actions, turn into something chosen among alternatives, which can only exist when alternatives are potentially there – excluding ultimate-causation theories. With the apparent-free-will philosophies in the middle.

But that’s not my point. My point is: The thought of Polynesian navigation crossed my mind. Not in a literal sense, but in a cultural sense where (the original) Polynesians, as lore has it – but there’s nothing against its believability – would not actually sail to another island; rather, the world would rotate underneath them. The traveller would remain in place, with everything else shifting.
Is this how we all travel through time, individually? We all staying in the (Here and) Now, with the Past slipping by us, behind us, and the Future just rotating to under us?
Is this impacting on causation and/or free will ..? Will have to think this one through; awkwardly hard.
Where would the ‘everything in the universe is not matter or natural laws or Energy but Information’ or ‘All energy is Information’ school(s) fit in ..?

I sense there is a link between the Individual Time / Universal Time dichotomy, if that’s not refuted by Einsteinian / -adepts’ time relativity theory. Another one to think through.

Qua individual time nevertheless, it’s comforting. Time-wise, we are where we are. No need to ‘be in the moment’, we already always are. No worries about the past or the future; those are, already, somewhere (in time). [Apart from having to care for one’s mortgage…]

Oh well, the head spins (+1/-1 ;-| ) when thinking too hard… Hence:

[Into the distance… Belém]

Drones with AI; revenge

Heard recently of an air force that was setting up a drone squadron where the pilots (?; given the joysticks, they might better be called ‘gamers’ these days – apart from the euphemistic erasure of the moral and ethical aspects, maybe) would be in that country, but the drones would be stationed in some other country, because stupid drone-flying rules go for the DoD too.
Yes, this regarded a European country [would’ve referred to NL outright if it were; ed.] – you guessed that correctly from the previous.

At some point in the future, the drones inevitably will get AI because everything will get AI. And, in times of increasing hacking and comms disruptions, some autonomy would be welcome for the drones already. And, what with increasing (sic) hackability, qua security against take-overs / reprogramming / retargeting while already airborne?
By that AI time, will the AI be smart enough to come back and take revenge, for the exile, on those that wrote / maintained the stupid rules ..?

Anything too outlandish to take into serious regard today will be daily, no-longer-newsworthy fact tomorrow. ‘Tomorrow’ may vary from tomorrow to five years; no more.

Oh, and on a lighter note: [Oh hey look, a street car! Sevilla]

Top 5 things that Awa isn’t

When dealing with awareness, certainly in the infosec field (#ditchcyber!), there seems to be a lot of confusion over the mere simple construct under discussion. Like, the equasion (with an s not a t) of Awareness with Knowledge plus Attitude plus Behaviour. Which, according to the simplest of checks, does not hold. Knowledge, and maybe Attitude, are apt components. But Behaviour is what eludes the other two, by the unconscious that drives 95% of our behaviour, in particular when dealing with any but the most hard-core mathematical-logic types of decision making and interaction.

Which is why so many ‘Infosec awareness programs’ fail …
First of all, they’re Training, mostly, even when in the form of nice posters and QR cards [that’s Quick Reference, not QR-code, you history-knowledgeless i.e. completely clueless simpleton-robot-pastiche one!], and it’s true that “If you call it Training, you’ve lost your audience’s want to learn” – your audience will figure out it’s Training despite your packaging it differently; they needn’t even do so explicitly: intuitively (the level you aimed for, or what?) they will.
Second, all the groupwise training that you do doesn’t reflect in-group dynamics at the actual workplace and work flows, nor does it reflect the actual challenges, nor the individuals’ changing moods (attitudes). Oh, the latter: your attempt at changing Attitude is geared towards A in relation to infosec, but that’s only such a tiny, so easily overlooked and forgettable part of the A all-the-time in the workspace.
Third, and arguably foremost – ‘arguably’ plugged as a trick’let to appear more interesting – what you aim for is not blank flat knowledge, nor even attitude, but Behavioural change. Do you really use the methods to achieve that ..?

No you don’t.

Oh and of course I titled this post with something-something 5, to get more views. Geez, if you even fell for that… And:

[Your kindergarten Board wish they could ever obtain such a B-room; Haut Königsburg]

AI learning to explain itself

Recently, there was news again that indeed, ABC (or The ‘Alphabet’ Company Formerly Known As Google) was developing AI that could improve itself.
O-kay… No sweat, or sweat, qua bleak future for humankind, but …
Can the ‘AI’ thus developing itself maybe be turned first to learn how to explain itself ..? Then, this [incl. link therein!] will revert to the auditors’ original of second opinions … Since the self-explanatory part may very well be the most difficult part of ‘intelligence’, benefitting the most from the (AI improving itself)² part, or what?


[Improving yourself as the imperative; Frank Lloyd Wright’s Beth Sholom at Elkins Park, PA]

Appetite for destruction ..?

Not even referring to the Masterpiece. On the contrary, we have here: … Well, what?
Interested as we all are in the subject – since it is still defined so sloppily, we all look for progress – I started. But stopped, when it turned out … risk appetite is defined in hindsight, with a survived disaster being the appetite threshold. Nice. So you’ll know what your appetite is when it has hit you and you were lucky enough to survive. If you didn’t survive, you now know you passed the threshold. Same [?] with projects: only if one fails do you have to write off the investment. The idea of sunk costs may be an enlightenment ..?


I believe the CRISC curriculum has other, actually somewhat useful, information on this, and on risk tolerance ..?
Your comments, please.

[For 20 points, evaluate the risks, e.g., qua privacy, bird strikes, value development; Barça]

Having fun with voice synth

In particular, having fun the wrong way.
Remember we wrote about how voice-synth improvements, lately, will destroy non-repudiation? There’s another twist. Not only, as noted, contra voice authentication for mere authentication (banks, of all, would they really have been in the lead here, without back-up double auth?), but in particular now that your voice has also become much more important again [after voice had dwindled in use for any sorts of comms, giving way to socmed typed-even-when-with-pixels posts of ephemeral or persistent kinds; who actually calls anyone anymore ..?], we see all sorts of Problems surfacing.

Like, mail order fraud. When hardly anyone still goes on a shopping spree through dozens of stores before buying something in store, but rather orders online, of course Alexa / Home/Assistant / Siri / Echo / Cortana are all the rage. For a while; for a short while, as people will find out that there was more to shopping than just getting something – though the equilibrium that’ll turn out may be in favour of on-line business, with physical delivery either at home or at the mall.
The big ‘breakthrough’ currently being, of course, some half-way threshold / innovation speed bump overcome, with the home-assistant gadgets that were intended to be much more butler first, (even-more-) mall destructor second. But that second … How about some fun and pranking: capturing just some voice snippets from your target, even when just in line behind ’em at Walmart, and then synthesizing just about any text? Whereas a break-in on the back side of your home assistant (very doable; the intelligence is too complex and voluminous to sit in the front-end device anyway [Is it …!? Haven’t seen anything on this!], so at least there’s some half-way intelligent link at the back) may be feasible in principle, doing a MiM on the comms to some back-end server would be much easier still, and much easier to obfuscate (certainly qua location, attribution) – a ‘re’play of just about any message is feasible.

Like, a ‘re’play of ordering substances that would still be suspicious even when for ‘medicinal purposes’. Or only embarrassing, like ordering tools from the sort of fun-tools shop you wouldn’t want to see your parents order from. Of course, the joke is at delivery time [be that couriers, DEA/cops, or just non-plain packages] – oh wait, we could just have the goods delivered to / picked up at any address of our liking, and have the felons/embarrassed only feel that part, plus non-repudiability.

This may be a C-rated-movie plot scenario, hence it will happen somewhere, a couple of times at least. Or become an epidemic. And:
[No mall, but a fun place to shop anyway; Gran Vía Madrid]

Music to AI’s ears

Will AI eventually appreciate music ..?

Not just appreciate in the sense of experiencing the quality of it – the latter having ‘technical’ perfection as its kindergarten-basement starting level only, where the imperfections are cherishable as huge improvements, yes indeed [Perfection Is Boring!] – but moreover, music appreciation having a major element of recognition, subconsciously mostly, of memories of times almost-im-memorial.

Of course, the kindergarten perfection gauging, AI will be able to do easily. Will, or does; simple near-algorithmic A”I” can do that today.

Appreciating imperfections, the same, with a slight randomiser (-recogniser) thrown in the algo mix.

But the recollection part, even at a conscious level requires memories to be there, and as far as AI goes (today) even ASI will have different memory structures since the whole facts learning processes are different. And don’t mention the subconscious side.

Yes, ASI can have a subconscious, of which we aren’t aware or even able to be aware [Note to self: to cover in audit-philosophy development]. But when we don’t hear of this, was there a tree that fell in the forest?

I’m off on some tangent here.

What I started out to discuss is: At what point does music appreciation through the old(est) memories recall, become an element of ‘intelligence’ ..?

With the accompanying question, on my priority list when discussing AxI: Is it, for humans ..?

And a bonus question: Do you really think that AI would prefer, or learn earlier about the excellence of, Kraftwerk or Springsteen? Alas, your first response was wrong; Kraftwerk’s the kind of subtle, intelligent, hint-laden, apparently-simple stuff that is very complex and also deeply human – which you perceive only when listening carefully over and over again till you get the richness and all the emotions (there they are!) and the yearning for the days gone by when the world was a better place. Springsteen, raw and Original-Forceful on the surface – but quickly showing a (rational-level) algorithmics play with not as much depth; even the variations and off-perfection bits are well thought out, leaving you with much less relatable memories, if any.

Your thoughts are appreciated. And:
[Appropriately seemingly transparent but completely opaque; some EU parliament (?), Strasbourg]

Pwds, again. And again and again. They’re 2FA-capable ..!

Why are we still so spastic re password ‘strength’ rules ..?

They have been debunked as counterproductive outright, right? Since they are too cumbersome to deal with, and are just a gargleblaster element in some petty arms race with such enormous collateral damage and ineffectiveness.

And come on, pipl! The solution has been there all along, though having been forbidden just as long …:
Write down your passphrases! The loss of control by having some paper out there, e.g., on your (Huh? Shared workspace, BYOD anyone?) monitor (Why!? Why not have the piece of paper in your wallet; most users will care for their money, and those that don’t miss some cells for the same reason you wouldn’t want them at your workplace anyway), is minute, certainly compared to the immense increase in entropy, i.e., straight-out security gains.
And … when you keep your written-down pwd to yourself (e.g., against this sort of thing), it becomes the same thing any physical token is, and you have created your own Two-Factor Authentication without any investment other than the mere org-wide system policy setting change of requiring pwds of at least, say, 25 characters. (And promulgating this, but that shouldn’t be too hard; an opportunity to, for once, make life easier for end users, and a great opportunity for collateral instructions on (behavioural) infosec in general …)
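To put a rough number on that entropy claim – a minimal back-of-the-envelope sketch; the charset sizes (94 printable symbols for a ‘complex’ password, 27 for lowercase-plus-space) and the assumption of uniformly random selection are illustrative, not policy:

```python
import math

def char_entropy(length: int, charset_size: int) -> float:
    """Entropy in bits of a string of `length` symbols drawn
    uniformly at random from a charset of `charset_size` symbols."""
    return length * math.log2(charset_size)

# An 8-character 'complex' password over ~94 printable ASCII glyphs
complex_pwd = char_entropy(8, 94)    # roughly 52 bits

# A 25-character lowercase-plus-space passphrase (27 glyphs)
passphrase = char_entropy(25, 27)    # roughly 119 bits

print(f"complex: {complex_pwd:.1f} bits, passphrase: {passphrase:.1f} bits")
```

Note the sketch flatters the passphrase less than reality might suggest either way: humans don’t pick uniformly at random, so both figures are upper bounds – but the length-beats-complexity gap remains.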

What bugs me is that already a great string of generations has been led astray while all along the signs were on the wall – not the passwords on them, but the eventual, inevitable collapse of the system, by users that demonstrated par excellence that this security measure was too impractical to stick to, as evidenced in the still-strong practice of writing down pwds. If people do some specific thing despite decades of instruction … might we consider that the instruction doesn’t fit humans’ daily operations ..? So the ones seeking to Control [what pitiful failures, those ones …; ed.] will have to rescind?

So, written-down passphrases it is. Plus:
[Easy sailing to new lands, beats being stuck on Ellis; NY]

From bike design to security design

You recall my posts from a couple of days ago (various), and here, and have studied the underlying Dutch Granny Bike Theory (as here), while not being put off by the lack (?) of design when taking a concrete view here.
You may also recall discussions, forever returning as long as security (control) design has existed, even when not (yet) as a separate subject, that users’ Desire Paths (exepelainifyed here) would inevitably have to be catered for, or one would find continual resistance until failure – with opposition from the Yes But Users Should Be Made Aware Of Sensitivity Of Their Dealings With Commensurate (Linearly Appropriate) Security Hindrance side; things are hard for a reason, and one should make things as simple as possible, but not simpler. [Yeah, I know that’s a reformulation of Ockham’s Razor for simpletons outside of science, having dropped O’s scientific precision and its restriction to science where it’s valid; and the second part is often lost by, and on, the most simpleton of all short of politicians, who are in a league of their own.]

I feel there may be a world a.k.a. whole field of science, to be developed (sic) regarding this. Or at least, let’s drop the pretension of simpleness of cost/benefit calculations that are a long way on the very, very wrong side of but not simpler.
Anyone have pointers to some applicable science in this field?

Oh, and:
[Applicable to security design: “You understand it when you get it” © Johan Cruyff; Toronto]

Maverisk / Étoiles du Nord