Note to self: GDPR scrum with or without the r

Just to remind myself, and to prompt you for your contributions, that it’s seriously time to write up a post on Agile development methods [OK, okay, I mean Scrum, as the majority side of the house], and how one is supposed to integrate GDPR requirements into that.
Like, we’re approaching the stage where the Waterfall model of security implementation will be Done for most organisations. Not Well Done, rather Rare or Pittsburgh Rare, at your firm [not Firm …]. But then, we’ll have to make the wholesale change to Maintenance, short-term and long-term. And meanwhile, waterfall has been ditched for a long time already in core development work, hence we have a backlog (huh; the real kind) qua security integration (sic; the bolt-on kind doesn’t work anyway) into all these Agile development methods that, word has it, everyone and their m/br-other seems to make use of these latter days.

But then, the world has managed to slip security into that. Which is praiseworthy, and needs more Spread The Word.

And then, there’s the GDPR. May we suggest including it in ‘security’ as requirements flow into the agile development processes ..?
As said, I’ll expand on this l8r.
If only later, since we need to find a way to keep the DPOs out of this; the vast majority (sic) of them, with all due [which hence may be severely limited] respect, will not understand this at a profound level; they’ll try to derail your development, without even the most basic capability to self-assess that they’re doing it, in ways that are excruciatingly hard to pinpoint, to lay your finger on.

But as written, that’s for another time. In the meantime, I’d love to see your contributions (if/when serious) overflowing my mailbox… Plus:
[Lawyers lurking next door…; Zuid-As Ams]

Music to AI’s ears

Will AI eventually appreciate music ..?

Not just appreciate in a sense of experiencing the quality of it — the latter having ‘technical’ perfection as its kindergarten basement’s starting level only; where the imperfections are cherishable as huge improvements, yes indeed [Perfection Is Boring!] … but moreover, music appreciation having a major element of recognition, subconsciously mostly, of memories of times almost-im-memorial.

Of course, the kindergarten perfection gauging, AI will be able to do easily. Will, or does; simple near-algorithmic A”I” can do that today.

Appreciating imperfections, the same, with a slight randomiser (-recogniser) thrown in the algo mix.

But the recollection part, even at a conscious level, requires memories to be there, and as far as AI goes (today), even ASI will have different memory structures, since the whole fact-learning processes are different. And don’t mention the subconscious side.

Yes, ASI can have a subconscious, of which we aren’t aware or even able to be aware [Note to self: to cover in audit philosophy development]. But when we don’t hear of this, was there a tree that fell in the forest?

I’m off on some tangent here.

What I started out to discuss is: At what point does music appreciation through the old(est) memories recall, become an element of ‘intelligence’ ..?

With the accompanying question, on my priority list when discussing AxI: Is it, for humans ..?

And a bonus question: Do you really think that AI would prefer, or learn earlier about the excellence of, Kraftwerk or Springsteen? Alas, your first response was wrong; Kraftwerk’s the kind of subtle, intelligent, hint-laden, apparently-simple stuff that is very complex and also deeply human — which you perceive only when listening carefully over and over again till you get the richness and all the emotions (there they are!) and yearning for the days gone by when the world was a better place. Springsteen, raw and Original-Forceful on the surface — but quickly showing a (rational-level) algorithmics play without as much depth; even the variations and off-perfection bits are well thought-out, leaving you with far fewer relatable memories, if any.

Your thoughts are appreciated. And:
[Appropriately seemingly transparent but completely opaque; some EU parliament (?), Strasbourg]

Nutty cryptofails

Considering the vengeance with which cryptobackdoors, or other forms of regulation into tautological-fail limitations, are pursued over and over again (case in point: The soon luckily carved out surrender (to Monay) monkeys [case in point: anyone who has seriously tried an invasion, succeeded handsomely]), it may be worthwhile to re-consider what the current situation is. As depicted in the following:

In which D is what governments et al can’t stand. Yes, it’s that big; pushing all other categories into corners.
Where C is also small, and probably shrinking fast. And B is known; maybe not empty but through its character and the knowledge of it as cracked-all-around part, hardly used if ever, by n00bs only.
And A is what governments want for themselves, but know they can’t have or it will quickly move to B — probably without governments’ knowing of this shift…

And all, vulnerable to the XKCD ‘hack’:

Against which no backdoor-for-governments-only policy will help.
I’ll rest.

What you said, doesn’t matter anymore

Yet another proof class busted: Voice being (allegedly) so near-perfectly synthesizable that it loses its value as proof (of identity). Because beyond reasonable doubt isn’t beyond anymore, and anyone venturing to bring voice-based evidence will not be able to prove (beyond …) that the sound heard wasn’t tampered with, i.e., generated. Under the precept of “whoever posits, proves”, the mere remark of “no madam Judge, we honestly did not doctor this evidence” is insufficient, and there can be no requirement of positive disproof for dismissal from the defense, as that side is not the one doing the positing. What about entrapment, et al.?

So, technological progress brings us closer to chaos. “Things don’t move so fast”-believers must be disbarred for their demonstrated gross incapacity — things have moved fast and will do so, ever faster. Or what ..?

Well, or Privacy. Must the above ‘innovator’ be sanctioned severely for violation of privacy of original-content-sound producers ..? Their (end) products are sold/leased to generate false identity or doctored proof, either for or against the subject at hand, <whatever> party would profit thereof. Like an equipment maker whose products are targeted at burglars, or worse, e.g., guns. Wouldn’t these be seriously curfewed, handcuffed ..?

[Edited to add, after drafting this five days ago: Already, Bruce is onto this, too. Thanks. (Not my perspective, but still)]

Oh, or:
[Apparently so secure(d), ‘stormed’ and taken practically overnight (read the story of); Casa Loma, Toronto]

Pitting the Good against the Others

If the recent rumours were, are valid that some patches were retracted — and this was because they accidentally disabled other exploits not yet outed in the stash — this would bring a new (?) tension to the surface or rather, possibly explains some deviant comms of the past:
Where some infosec researchers had been blocked from presenting their 0-day vulns / exploit-PoCs, this may not have been for protection of the general public or so, but to keep useful vulnerabilities available for the TLAs of a (variety of?) country(-ies).
Pitting the Ethical researchers against the bad and the ugly…

No “Oh-oh, don’t give the bad guys valuable info and allow even more time to the s/w vendors to plug the holes” but “Dammit, there go our secret backdoors!”
Makes much more sense, to see the presentation-blocking in this light. And makes huge bug bounties by these TLAs, towards soon-to-be a bit less ethical researchers, more possible and probable. Not as yet better known, though. Thoughts?
[Takes off tinfoil movie-plot security scenario hat]

Oh, and:
[All looks happy, but is looked upon from above …; Riga]

Collateral (un)patching; 0+1-day

Is this a new trend? Revealing that there had been a couple of exploitables, backdoors in your s/w, when you patch some other ones and then have to roll back because you p.’d off the wrong ones, since you accidentally also patched or disabled some hitherto-secret ones.
At least, this is what it seems like when reading this; M$ stealthily (apparently not secretly enough) patching some stuff in negative time, i.e., before zero-day. With later rumours that this patch(ing, possibly parts of it) was retracted.

For this, there appear (again) to be two possible reasons:
a. You flunked the patch and it kills some Important peoples’ system(s);
b. You ‘flunked’ the patch and you did right, but the patch effectively killed some still-not-revealed (in the stash) backdoors that the Important peoples (TLAs) still had some use for and were double-secretly requested to put back in place.

I’m in a Movie Plot mood (come to think of it, for no reason; ed.) and go for the second option. Because reasons (contradictory; ed.). Your 2¢ please.

Oh, and:
[So crowded and you’re still much less than a stone’s throw from a Da Vinci Code (was it?) big secret — I may have the pic elsewhere on my blog…; Barça]

What should also be in the GDPR

At least, as an idea: Foreign countries that interfere with privacy in the EU should be included in the penalisation stuff. Same levels, like; 4% of GDP for, e.g., registering political opinions of citizens of the EU, even when they’re also citizens of that foreign, alien, enemy country, without explicit opt-in consent. [This happened, happens ..!] For every transgression. Then enforce via trade sanctions and import taxes [after checking the trade balance will effect the ‘payment’ of the fines; won’t be stupid].
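To make the proposed scale concrete, a minimal sketch of the arithmetic — all figures are hypothetical, only the 4%-of-GDP rate and the per-transgression multiplication are taken from the suggestion above:

```python
def proposed_fine(gdp_eur: float, transgressions: int, rate: float = 0.04) -> float:
    """Fine at the suggested 4%-of-GDP level, applied once per transgression."""
    return gdp_eur * rate * transgressions

# A hypothetical foreign country with a EUR 20 trillion GDP, caught twice:
fine = proposed_fine(20e12, 2)  # EUR 1.6 trillion
```

Which shows why the per-transgression clause, not the percentage, is where the proposal gets its teeth.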

Oh, and:
[Or the supreme leader goes to jail for a long, long time and is struck by lightning; unrelated, Ottawa]

Common(s) as privacy and vice versa ..?

Remember from your econ class that concept of The Commons, and how problematic it was? Is?
There was this intriguing post recently, on how Free Speech might be considered and deliberated in terms of the commons being exhausted by undue over-use (abuse) — by its use alone. Leading to aversion to the concept, not to the abuser or his (sic) apparently locally-recognised but globally-not ‘valid’ reason(s) for over-use.

Which, as is my wont of the moment, driven by personal business interests, I took to be applicable to Privacy as well. Maybe not in the same way, but … This will need quite some discussion between me on the one hand, and peers and others on the other who would actually know what they’re talking about. Throwing in a bit of the Anglo-American data-isn’t-yours versus European (‘continental’ — will brexit, which ever more starts to sound like a lame Benny Hill kind of joke, change that ..??) data-is-the-data-subject’s-always divides, and some more factors here and there. Complicating matters, but hey, life’s not perfect.

Waddayathink? In for a discussion ..? Let’s start!

And:
[Not so very common-s; Toronto]

Authentic means work, you see?

Recalling the recent spat about passwords again (and elsewhere), and some intriguing, recent but also not so recent news (you get it when you study it), it seems only fair to the uninitiated to clarify some bits:
Authentication goes by something you know, something you have or something you are. Password(s), tokens or biometrics, in short. All three have their drawbacks.

But that’s not the point. The point is that authentication is about making the authentication unspoofable by anyone but the designated driver owner.
That is why you shouldn’t dole out your passwords (see the above first link) e.g., by writing them on a post-it™ whereas writing a full long passphrase on just one slip of paper that you keep to yourself more zealously than your money, will work.
That is why tokens shouldn’t be stolen. Which you might not discover until it’s too late; and tokens have a tendency to be physical stuff that can be replayed, copied, etc. just like a too-short password. Maybe not as simply, but nevertheless.
Same with biometrics. When made simple enough for the generic user (fingerprints, ever so smudgy!), they’re also easily copyable, off a lot of surfaces. Other biometrics maybe more secure, i.e., harder to copy, but not impossible. And opening possibilities for hijacks et al.: focus on breaking into the systems in the login/authentication chain, et al.
Which brings attention to yet more vulnerabilities of Have and Are: Both need quite a lot of additional equipment, comms, subsystems, to operate and work from the physical to the logical (back) to the IS/IT levels. Weakest-link chains they are ..!

So, the strength of authentication covaries with the non-leakability of the key, since both come down to keeping the determining item in one hand, close to the actual person whose identification-as-provided (by that person, or by anyone else posturing) needs to be authenticated. By which I mean that ensuring one item of authentication, closely glued to the person and with the simplest, fewest-links connection chain to the goal system(s), is best. The latter, clearly, is the written-down-verylongpassword method.
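To back the written-down-verylongpassword point with numbers, a quick entropy comparison — figures assume uniformly random choice; the 94-character set is printable ASCII, and the 7776-word list size is the Diceware default, both my assumptions for illustration:

```python
import math

def random_password_bits(length: int, charset_size: int) -> float:
    """Entropy in bits of a uniformly random password over a given charset."""
    return length * math.log2(charset_size)

def diceware_bits(words: int, wordlist_size: int = 7776) -> float:
    """Entropy in bits of a passphrase drawn uniformly from a wordlist."""
    return words * math.log2(wordlist_size)

# A memorisable 8-char password over 94 printable ASCII characters:
short = random_password_bits(8, 94)   # ~52.4 bits
# A six-word Diceware-style passphrase, easy to write down on one slip:
long_ = diceware_bits(6)              # ~77.5 bits
```

The six-word slip of paper beats the memorised 8-character password by some 25 bits — each extra bit doubling the guessing work — which is the whole argument for guarding one long passphrase zealously rather than juggling many short ones.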

Just think about it. And:
[They’re called locks. Discuss (10pts); Ottawa]

All fine, for whom?

Just to be clear: Where do all the fines that will rain like hail from heck once GDPR comes into force, go to ..? Yes, the supervisory authority may levy the fines, but it isn’t clear to whom the payment should go. Certainly leading to huge differences in compliance chasing: When the authority may keep them for itself, it’s a. richer than the king since b. sure to penalise each and every futile infringement to the max; when the money goes to the government’s coffers, that chasing not so much, because who’d care?
You don’t believe me, right? Just wait and see. And weep.

Plus:
[Where the coffers are kept ..? Segovia]

Maverisk / Étoiles du Nord