Decision time for informational priv

When discussing Privacy, a lot of attention goes to informational privacy, easily tautologised with person-possibly-identifying data.
If that reads mixed-up, it’s because it is.
But that’s for another session series. Of series.

What today’s post title is about, is the distinction between the two sides of the house: informational privacy (which is about information about you, or which you generate) versus decisional privacy (commonly defined in terms of your right to freely decide over your body’s integrity). As you read that, clearly the latter needs an update; a heck of a long KBxyzuvw article attached.
Because both the

  • Outright choice limitation through covert or overt profiling and covert or overt automated decision making, sometimes limiting your choice to none when you get rejected (from the ability to even decide) for something, or get no service proposition at all, a.k.a. the Hobson’s choice of socmed,
  • Covert choice limitation through filter bubbles – which would more accurately be called filter fish-trap,

can result from a lack of informational privacy. But neither is well covered in the definition of decisional priv, whereas that infamous thing with The Freedom of the Pursuit of Happiness or whatsitcalled – I don’t care, you get it: Freedom – should be guaranteed.
So tightly coupled with all sorts of metaphysics, ontology, and topology of Privacy. Like, the feeling and understanding y’all have when you hear that word. It’s not only ‘bugger off, nothing of your interest here’ privacy but also ‘get off my back’ privacy; no weighing down.

Oh well. This being among my interests but not really my training, I’ll go read up on the latest qua this all. Pointers appreciated. And:
[For no reason whatsoever, totally unconnected; Riga Jugendstil]

Note to self: GDPR scrum with or without the r

Just to remind myself, and you for your contributions, that it’s seriously time to write up a post on Agile development methods [OK, okay, I mean Scrum, as the majority side of the house]; how one is supposed to integrate GDPR requirements into that.
Like, we’re approaching the stage where the Waterfall model of security implementation will be Done for most organisations. Not Well Done, rather Rare or Pittsburgh Rare, at your firm [not Firm …]. But then, we’ll have to make the wholesale change to Maintenance, short-term and long-term. And meanwhile, waterfall has been ditched for a long time already in core development work, hence we have a backlog (huh; the real kind) qua security integration (sic; the bolt-on kind doesn’t work anyway) into all these Agile development methods of which, word has it, everyone and their m/br-other seems to make use these latter days.

But then, the world has managed to slip security into that. Which is praiseworthy, and needs more Spread The Word.

And then, there’s the GDPR. May we suggest to include it in ‘security’ as requirements flow into the agile development processes ..?
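To make that concrete with a hypothetical sketch (the criteria names and story fields below are invented for illustration, not any official checklist): GDPR items could ride along as acceptance criteria in the Definition of Done, checked per backlog item just like security requirements already are in some shops.

```python
# Hypothetical sketch: gate Scrum stories on GDPR acceptance criteria,
# folded into the Definition of Done the same way security items are.
# The criteria names are illustrative, not an official checklist.

GDPR_CRITERIA = {
    "data_minimisation",   # only collect what the story actually needs
    "purpose_documented",  # lawful basis / purpose recorded
    "retention_defined",   # deletion schedule agreed
    "dpia_checked",        # flagged for a DPIA when processing looks risky
}

def ready_for_done(story: dict) -> bool:
    """A story counts as Done only when every GDPR criterion is ticked."""
    return GDPR_CRITERIA <= set(story.get("criteria_met", []))

story = {
    "title": "Store newsletter sign-ups",
    "criteria_met": ["data_minimisation", "purpose_documented"],
}
print(ready_for_done(story))  # retention and DPIA still open, so: False
```

The design point being: the check lives inside the team’s own Done-gate, not in a separate compliance sign-off that the DPO can derail after the sprint.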
As said, I’ll expand on this l8r.
If only later, since we need to find a way to keep the DPOs out of this; the vast majority (sic) of them, with all due [which hence may be severely limited] respect, will not understand this at any profound level; they’ll try to derail your development without even the most basic capability to self-assess that they’re doing it, in ways that are excruciatingly hard to pinpoint, to lay your finger on.

But as written, that’s for another time. In the meantime, I’d love to see your contributions (if/when serious) overflowing my mailbox… Plus:
[Lawyers lurking next door…; Zuid-As Ams]

The privacy-nightmare not your pseudo-dreams

Again, some serious flaw in the GDPR: Its reliance on, sponsorship of, pseudonymisation.
Which is worthless, already against break-ins.
And is worse, much worse, when you consider all the exemptions for ‘statistical use’ that are a cover for all the blatant abuse of personal data that the GDPR was originally intended to counter. And is worse, because six publicly available data points are all that is needed to identify just about anyone in the general public. De-anonymisation may be an art of sorts, but not a difficult one; easily demonstrated by any half-ass capable “hacker” consultant involved. [Of the Real kind]
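A minimal sketch of why pseudonymisation fails against even a trivial break-in, under the assumption that the ‘pseudonym’ is an unsalted hash of the identifier (a sadly common implementation; all data below is invented): whoever lifts the dataset only needs a list of plausible identifiers to reverse the pseudonyms by brute enumeration.

```python
import hashlib

def pseudonymise(identifier: str) -> str:
    # A common but naive approach: an unsalted hash as 'pseudonym'.
    return hashlib.sha256(identifier.encode()).hexdigest()

# The 'protected' dataset, as obtained in a break-in (invented data).
leaked = {pseudonymise("alice@example.com"): {"diagnosis": "X"}}

# The attacker enumerates publicly known identifiers...
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
recovered = {
    pseudonymise(c): c for c in candidates if pseudonymise(c) in leaked
}
# ...and the pseudonym is reversed; the diagnosis is Alice's again.
print(recovered)
```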

Outside the controllers/processors conglomerates, such six points may have to be searched for – hold it; done. – but when anyone were able to infiltrate (why haven’t we heard of APTs for so long now? Because it was the TLAs, or is the overall picture waaayyy too scary to consider?), those six points are often found within one data set, if not with the IDs in some hardly-remote table.
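The point can be illustrated with a toy linkage attack on invented records, using just three quasi-identifiers rather than six: no names in the ‘anonymous’ set, yet a join against a public register hands them right back.

```python
# Toy linkage attack on invented data: no names in the 'anonymous' set,
# yet joining on (birth_year, postcode, gender) re-identifies the record.

anonymous = [
    {"birth_year": 1975, "postcode": "1082", "gender": "F", "salary": 91000},
    {"birth_year": 1980, "postcode": "1071", "gender": "M", "salary": 65000},
]

public_register = [
    {"name": "A. Jansen", "birth_year": 1975, "postcode": "1082", "gender": "F"},
    {"name": "B. de Vries", "birth_year": 1990, "postcode": "3011", "gender": "M"},
]

KEYS = ("birth_year", "postcode", "gender")

def reidentify(anon_rows, register):
    # Index the public register by quasi-identifier tuple, then join.
    index = {tuple(p[k] for k in KEYS): p["name"] for p in register}
    return [
        {**row, "name": index[tuple(row[k] for k in KEYS)]}
        for row in anon_rows
        if tuple(row[k] for k in KEYS) in index
    ]

# A. Jansen's 'anonymous' salary record is hers again.
print(reidentify(anonymous, public_register))
```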

And don’t come with the solution of homomorphic encryption, so usable for the statistical stuff. Also cracked, ever more systemically.

As if in today’s 21st-century age, anyone would come forward with ‘these new developments, of motorised aeroplanes, with a “propeller” and all; they hold a promise for possible transatlantic flight!’ – Yet the GDPR isn’t different…

And:
[The background has much more circus than the tent before it, ifyaknowaddImean; Zuid-As Ams]

Nutty cryptofails

Considering the vengeance with which cryptobackdoors, or other forms of regulation into tautological-fail limitations, are pursued over and over again (case in point: the soon, luckily, carved-out surrender (to Monay) monkeys [case in point: anyone who has seriously tried an invasion, succeeded handsomely]), it may be worthwhile to re-consider what the current situation is. As depicted in the following:

In which D is what governments et al can’t stand. Yes, it’s that big; pushing all other categories into corners.
Where C is also small, and probably shrinking fast. And B is known; maybe not empty, but through its character and the knowledge of it as a cracked-all-around part, hardly if ever used, and then by n00bs only.
And A is what governments want for themselves, but know they can’t have or it will quickly move to B — probably without governments’ knowing of this shift…

And all, vulnerable to the XKCD ‘hack’:

Against which no backdoor-for-governments-only policy will help.
I’ll rest.

What you said, doesn’t matter anymore

Yet another proof class busted: Voice being (allegedly) so pretty perfectly synthesizable that it loses its value as proof (of identity). Because ‘beyond reasonable doubt’ isn’t beyond anymore, and anyone venturing to bring voice-based evidence will not be able to prove (beyond…) that the sound heard wasn’t tampered with, i.e. generated. Under the precept of “whoever posits, proves”, the mere remark that no, madam Judge, we honestly did not doctor this evidence, is insufficient; and there can be no requirement of positive disproof for dismissal from the defense, as that side is not the one doing the positing. What about entrapment, et al.?

So, technological progress brings us closer to chaos. “Things don’t move so fast”-believers must be disbarred for their demonstrated gross incapacity — things have moved fast and will do so, ever faster. Or what ..?

Well, or Privacy. Must the above ‘innovator’ be sanctioned severely for violation of the privacy of original-content-sound producers ..? Their (end) product(s) are sold/leased to generate false identity or doctored proof, either for or against the subject at hand, <whatever> party would profit thereof. Like an equipment maker whose products are targeted at burglars, or worse, e.g., guns. Wouldn’t these be seriously curfewed, handcuffed ..?

[Edited to add, after drafting this five days ago: Already, Bruce is onto this, too. Thanks. (Not my perspective, but still)]

Oh, or:
[Apparently so secure(d), ‘stormed’ and taken practically overnight (read the story of); Casa Loma, Toronto]

Mixing up the constitution

When your state secretary is mixing up all sorts of things. When, at the official site, at last, email (and other ‘telecomm’) is listed to be included as protected on the same footing as snail mail has always been, qua privacy protection.

Which raises the question: Does that include the right to use (uncrackable) encryption, because that is what is equivalent to a sealed envelope ..? When the same government wanted to ban that, or allow simply-crackable [i.e., crackable with bumbling-government means – the most simpleton kind, or ‘too hard’] encryption only?
Why would this have to be included so explicitly in the constitution no less, when just about every other tech development isn’t anywhere there, and in the past it has always been sufficient to interpret/read the constitution to automatically translate to the most modern tech without needing textual adaptation ..? [As has been the case in every civilised country, and maybe even in the US too.]
And where would GDPR impinge on this; is the rush necessitated by GDPR (with all its law-enforcement exemptions, pre-arranging the ab-use of those powers GDPR will give), or is this an attempt to pre-empt protection against Skynet overlords (pre-pre-empting GDPR protection for citizens), – recognising that anything so rushed will never be in favour of those citizens – or what?

One wonders. And:
[So many “unidentified” office buildings in NY, NY …]

Pitting the Good against the Others

When the recent rumours were, are, valid that some patches were retracted – and this was because they accidentally disabled other exploits not yet outed in the stash – this would bring a new (?) tension to the surface or rather, possibly explain some deviant comms of the past:
Where some infosec researchers had been blocked from presenting their 0-day vulns / exploit-PoCs, this may not have been for protection of the general public or so, but to keep useful vulnerabilities available for the TLAs of a (variety of?) country(-ies).
Pitting the Ethical researchers against the bad and the ugly…

No “Oh-oh, don’t give the bad guys valuable info and allow even more time to the s/w vendors to plug the holes” but “Dammit, there go our secret backdoors!”
Makes much more sense, to see the presentation blocking in this light. And makes huge bug bounties by these TLAs towards soon-to-be-a-bit-less-ethical researchers more possible and probable. Not as yet better known, though. Thoughts?
[Takes off tinfoil movie-plot security scenario hat]

Oh, and:
[All looks happy, but is looked upon from above …; Riga]

DNA not so Determinant; there goes another piece of Evidence

[ Commemoration of the Dead, today in the Netherlands. Never forgotten. Never forget! ]

In the series of surrealisation of proof, in courts and elsewhere, turning anything into faker news than before – a trend that was under way already for a long time, maybe centuries, but is now speeding up enormously – after the most recent class of proof (yes, don’t complain, I’m clear qua ‘class’!) we have even old (?) evidence classes being overthrown. Like, your DNA.
Somehow, we already knew that. Where the analogue of hash collisions happened IRL, with disastrous consequences for people’s lives, and those of their families, et al. Really, imagine yourself in the midst of it all: Ragnarök and the collapse of the foundations of society … I’m not joking any bit.

But now, again. What Evidence classes remain? When each and every class can be planted, fabricated (signatures, pictures; untraceably), coerced (‘rat out your partner or all of your family will be killed before your eyes’), etc., indeed nothing remains. Nothing non-repudiatory…
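For contrast, about the only thing that keeps a recording tamper-evident is a cryptographic commitment made at capture time; a minimal sketch of a hash chain over audio chunks (stdlib only, invented data). Note this gives integrity, not full non-repudiation; that would additionally need an asymmetric signature over the final digest, which is beyond this sketch.

```python
import hashlib

def chain_digest(chunks):
    """Hash-chain the chunks; the final digest commits to the whole stream."""
    digest = b"\x00" * 32  # fixed initial value
    for chunk in chunks:
        digest = hashlib.sha256(digest + chunk).digest()
    return digest.hex()

original = [b"chunk-1", b"chunk-2", b"chunk-3"]
tampered = [b"chunk-1", b"chunk-2-doctored", b"chunk-3"]

# The capture device lodges this with a timestamping service at record time.
committed = chain_digest(original)
print(chain_digest(tampered) == committed)  # False: doctoring is detectable
```

Anything doctored after the commitment no longer matches; absent such a commitment made at capture time, the evidence indeed proves nothing.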

But flipside; Skynet is here. Like before.

And:
[Either way, you lose; Zuid-As Ams]

What should also be in the GDPR

At least, as an idea: Foreign countries that interfere with privacy in the EU should be included in the penalisation stuff. Same levels, like: 4% of GDP for e.g. registering political opinions of citizens of the EU even when they’re also citizens of that foreign, alien, enemy country, without explicit opt-in consent. [This happened, happens..!] For every transgression. Then enforce via trade sanctions and import taxes [after checking the trade balance will effect the ‘payment’ of the fines; won’t be stupid].

Oh, and:
[Or the supreme leader goes to jail for a long, long time and is struck by lightning; unrelated, Ottawa]

Common(s) as privacy and vice versa ..?

Remember from your econ class that concept of The Commons, and how problematic it was? Is?
There was this intriguing post recently, on how Free Speech might be considered and deliberated in terms of the commons, being exhausted by undue over-use (abuse) – for its use alone. Leading to aversion to the concept, not to the abuser or his (sic) apparently locally recognised but globally not, ‘valid’ reason(s) for over-use.

Which, as is my wont of the moment, driven by personal business interests, I took to be applicable to Privacy as well. Maybe not in the same way, but … This will need quite some discussion between me on the one hand, and peers and others on the other who would actually know what they’re talking about. Throwing in a bit of the Anglo-American data-isn’t-yours versus European (‘continental’ – will Brexit – which starts to sound ever more like a lame Benny Hill kind of joke – change that ..??) data-is-data-subject’s-always divide, and some more factors here and there. Complicating matters, but hey, life’s not perfect.

Waddayathink? In for a discussion ..? Let’s start!

And:
[Not so very common-s; Toronto]

Maverisk / Étoiles du Nord