Mixing up the constitution

When your state secretary is mixing up all sorts of things. When, at the official site, at last email (and other ‘telecomm’) is listed to be included as protected, on the same footing as snail mail has always been, qua privacy protection.

Which raises the question: Does that include the right to use (uncrackable) encryption, because that is what is equivalent to a sealed envelope ..? When the same government wanted to ban that, or allow only simply-crackable encryption [i.e., crackable with bumbling-government means – the most simpleton kind, or ‘too hard’]?
Why would this have to be included so explicitly, in the constitution no less, when just about every other tech development isn’t anywhere in there, and in the past it has always been sufficient to interpret/read the constitution as automatically translating to the most modern tech, without needing textual adaptation ..? [As has been the case in every civilised country, and maybe even in the US too.]
And where would GDPR impinge on this; is the rush necessitated by GDPR (with all its law-enforcement exemptions, pre-arranging the ab-use of those powers GDPR will give), or is this an attempt to pre-empt protection against Skynet overlords (pre-pre-empting GDPR protection for citizens), – recognising that anything so rushed will never be in favour of those citizens – or what?

One wonders. And:
[So many “unidentified” office buildings in NY, NY …]

Pitting the Good against the Others

When the recent rumours were, or are, valid that some patches were retracted — and this was because they accidentally disabled other exploits not yet outed in the stash — this would bring a new (?) tension to the surface or rather, possibly explains some deviant comms of the past:
Where some infosec researchers had been blocked from presenting their 0-day vulns / exploit-PoCs, this may not have been for protection of the general public or so, but to keep useful vulnerabilities available for the TLAs of a (variety of?) country(-ies).
Pitting the Ethical researchers against the bad and the ugly…

No “Oh-oh, don’t give the bad guys valuable info and allow even more time to the s/w vendors to plug the holes” but “Dammit, there go our secret backdoors!”
Makes much more sense to see the presentation blocking in this light. And makes huge bug bounties by these TLAs, towards soon-to-be a bit less ethical researchers, more possible and probable. Not as yet better known, though. Thoughts?
[Takes off tinfoil movie-plot security scenario hat]

Oh, and:
[All looks happy, but is looked upon from above …; Riga]

Collateral (un)patching; 0+1-day

Is this a new trend? Revealing that there had been a couple of exploitables, backdoors, in your s/w when you patch some other ones and then have to roll back, because you p.’d off the wrong ones since you accidentally also patched or disabled some hitherto secret ones.
At least, this is what it seems like when reading this; M$ stealthily (apparently not secretly enough) patching some stuff in negative time, i.e., before zero-day. When later there are rumours that this patch(ing, possibly parts of it) was retracted.

For this, there appear (again) to be two possible reasons:
a. You flunked the patch and it kills some Important peoples’ system(s);
b. You ‘flunked’ the patch and you did right, but the patch effectively killed some still-not-revealed (in the stash) backdoors that the Important peoples (TLAs) still had some use for and were double-secretly requested to put back in place.

I’m in a Movie Plot mood (come to think of it, for no reason; ed.) and go for the second option. Because reasons (contradictory; ed.). Your 2¢ please.

Oh, and:
[So crowded and you’re still much less than a stone’s throw from a Da Vinci Code (was it?) big secret — I may have the pic elsewhere on my blog…; Barça]

Common(s) as privacy and vice versa ..?

Remember from your econ class that concept of The Commons, and how problematic it was? Is?
There was this intriguing post recently, on how Free Speech might be considered and deliberated in terms of the commons being exhausted by undue over-use (abuse) — for its use alone ( → ). Leading to aversion to the concept, not to the abuser or his (sic) apparently locally recognised, but globally not, ‘valid’ reason(s) for over-use.

Which, as is my wont of the moment, driven by personal business interests, I took to be applicable to Privacy as well. Maybe not in the same way, but … This will need quite some discussion between me on the one hand, and peers and others on the other who would actually know what they’re talking about. Throwing in a bit of the Anglo-American data-isn’t-yours versus European (‘continental’ — will brexit – which starts to sound ever more like a lame Benny Hill kind of joke – change that ..??) data-is-the-data-subject’s-always divide, and some more factors here and there. Complicating matters, but hey, life’s not perfect.

Waddayathink? In for a discussion ..? Let’s start!

And:
[Not so very common-s; Toronto]

Authentic means work, you see?

Recalling the recent spat about passwords again (and elsewhere), and some intriguing, recent but also not so recent news (you get it when you study it), it seems only fair to the uninitiated to clarify some bits:
Authentication goes by something you know, something you have or something you are. Password(s), tokens or biometrics, in short. All three have their drawbacks.

But that’s not the point. The point is that authentication is about making the authentication unspoofable by anyone but the designated driver owner.
That is why you shouldn’t dole out your passwords (see the above first link) e.g., by writing them on a post-it™ whereas writing a full long passphrase on just one slip of paper that you keep to yourself more zealously than your money, will work.
That is why tokens shouldn’t be stolen. Which you might not discover until it’s too late; and tokens have a tendency to be physical stuff that can be replayed, copied, etc. just like a too-short password. Maybe not as simply, but nevertheless.
Same with biometrics. When made simple enough for the generic user (fingerprints, ever so smudgy!), they’re also easily copyable, off a lot of surfaces. Other biometrics may be more secure, i.e., harder to copy, but not impossible. And they open possibilities for hijacks et al. — focus on breaking into the systems in the login/authentication chain, et al.
Which brings attention to yet more vulnerabilities of Have and Are: Both need quite a lot of additional equipment, comms, subsystems, to operate and work from the physical to the logical (back) to the IS/IT levels. Weakest-link chains they are ..!

So, the strength of authentication covaries with the non-leakability of the key, since both correlate to the source determinant in-one-hand-ity close to the actual person whose identification-as-provided (by that person, or by anyone else posturing) needs to be authenticated. By which I mean that ensuring one item of authentication, closely glued to the person and with the simplest, least-link connection chain to the goal system(s), is best. The latter, clearly, is the written-down-verylongpassword method.
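The long-passphrase point can be made with back-of-the-envelope entropy arithmetic. A minimal sketch, assuming (my numbers, not the post’s) a pool of 94 printable ASCII characters for a random password and a 7776-word Diceware-style list for a passphrase:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for a secret of `length` symbols drawn uniformly
    and independently from a pool of `pool_size` possibilities."""
    return length * math.log2(pool_size)

# An 8-character random password over 94 printable ASCII characters:
print(f"8-char password:   {entropy_bits(94, 8):.1f} bits")    # ~52 bits

# A 6-word passphrase from a 7776-word Diceware-style list:
print(f"6-word passphrase: {entropy_bits(7776, 6):.1f} bits")  # ~78 bits
```

So the easily-memorised (or written-down and zealously guarded) passphrase comfortably beats the unmemorable short password — and every extra word adds another ~13 bits.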

Just think about it. And:
[They’re called locks. Discuss (10pts); Ottawa]

Progress, friends, is here. Only, not everywhere. Yet. Say ‘No’ till then?

You know that the bright new future is here when, amid the torrent (figuratively, referring to the physical phenomenon, nothing to do with the on-line tool(s)) of fake news, this still makes it into a headline: ATMs are now beginning to be rolled out with Win10 ‘support’. To be completed per 2020, when support for Win7 stops. Right. 2020; probably not referring to the eyesight of the ones planning this, not being personally accountable and duly informed of the risks.

Because otherwise, wouldn’t it be smarter to come up with a clever idea to do the roll-out within a month, if anyone were to take ATM security — or is it a signpost for overall infosec’s position? — seriously, as seriously as it should be taken ..?

It’s time there came an agency, nationwide, worldwide, that has the authority to say NO!!! to all ill-advised (IT-, which is the same these days) projects. Infosec professionals tried to ditch the Dr. No image, but it turns out it’s needed more than ever, to prevent the Stupid (Ortega y Gasset’s Masses, I guess) from endangering all of us, or at least from squandering the billions (yes) that could have been applied against world poverty etc.etc.

Oh, and:
[The UBO ‘humanity’ seems to be lost, here; Zuid-As Ams]

Yesterday, same thing.

This is sort-of the same as yesterday’s post, put into practice, when your AGA now not only remotely slow-cooks but also slow-betrays you. Slowly, it either doesn’t cook at all, or over-burns your carefully prepped meat. So the wretched short-lived lambkin died for nothing.
Would anyone know of any device out there that is duly protected against this sort of thing? Or whether (or not) this is a generic weakness: Access from the outside offers access from the outside to anyone, to rattle the door. And some, through persistence or immense force applied, will find the door opens. Your convenience, theirs too. Same with ‘connected’ toys. Yes, they are.

Oh, and:
[May superficially look like an AGA but isn’t, not even a hacked architecture studio’s design, just purposeful – and beautiful – museum design in Toronto]

Learn you will… Recover, you might.

When your country’s largest retailer (primarily F&B, but non-F only recently growing as well) has finally heard about something-something-smart-fridge. And wants to do it Right, and starts off with a pilot. Of, drumroll, a smart fridge magnet with a mic and barcode scanner, for adding stuff to your on-line grocery list (on-site self-service pick / pick-up, or delivery, to follow separately). Didn’t know that existed already.
Nice idea, to include not (only) a deliberate barcode scanner (no creepy auto-scans) but also a mic, for when you don’t have the product at hand (and fresh veggies wouldn’t make it; for a long time already not stickered but weighed at the (vast majority of) non-self-scanned check-outs).

But what security ..? For fun, e.g., putting reams of alcohol stuff on the to-pick-up lists of unsuspecting meek middle-classmen who won’t understand, but come home with some explaining to do (bonus for taking the stuff off the list once procured, so ‘no’ trace on the shopping list). For less fun, snooping off people’s shopping habits and getting rich (by ultra-focused ads or selling off the data, or by extortion-light once you get the Embarrassing Items in view). For even less fun but lulz (grow a pair), changing the list to violate some family member’s med-dietary choices into harmful variants. And don’t forget the option to (literally) listen in on very much that is said in the vicinity of the fridge. Could be anything, but probably privacy-sensitive.
But what security? The press release points to other countries’ supermarkets already offering the Hiku sensors. Nothing is unhackable. Exploit searches must be under way. People never learn. Reputational (corp) and personal-integrity (clients) damages may or may not be recoverable, at huge expense.

I’m not in, on this one. No need. Plus:
[Where you can learn; Zuid-As Ams]

Right. Without -s

So, we’re into this era of giving up control over our lives. Where we’re either dumb pay-uppers, or (also) victims. Which in turn leads to questions regarding who will have any income at all, to pay for the service of being allowed to sit as a stool pigeon until shot anyway.
Because the latter is what follows from this here nifty piece: Tesla not giving you your data unless they can sue you with it. The EU push for human-in-the-loop may need to be extended considerably, but should, must. Possibly along a path similar to that of the original cookie directive: from weak opt-out to strong double opt-in, plus all the privacy requirements (purpose / functional necessity, minimisation, etc.etc.).

Do we recognise here again the idea that though your existence creates it and would be different for every human on earth (plus orbit), your data isn’t yours ..? Quod non! When someone takes what you produced (however indirectly! – inferred and metadata and all) without payment, that is theft or worse in any legal environment.
Is there anywhere a platform where the consequences of this global delineation are more clearly discussed, between Your Data Isn’t Yours Because We Process It, versus My Data’s Mine Wherever ..?

I’d like to know. And:
[Your fragile fortress…; Barça]

Behaviour is key to security — but what if it’s perfect?

When the latest news on information security points away from reliance on technical stuff, and towards the humans you still can’t get rid of (yet!), all are aboard the ‘Awareness is just the first step, you’ll need to change the actual behaviour of users’ train. Or should be, should have been, already for a number of years.
In Case You Missed It, the Technology side of information security has so far always gobbled up the majority of your respective budgets, with all the secondary costs to that buried in General Expenses. And the effectivity of the spend … has been great! Not that your organisation is anywhere near as secure as it could reasonably have been, but at least the majority of attackers rightly focus not on technology (anymore – though still a major headache) but on fickle user discipline. Oh, how dumb and incompetent these users are; there will always be some d.face that falls for some social engineering scam. Sometimes an extremely clever one, when focusing on generic end users deep down in your organisation; sometimes a ridiculously simple and straightforward one when targeting your upper management – zero sophistication needed, there.

The point is, there will always be some d.face that makes an honest mistake. If you don’t want that, you’ll have to get rid of all humans and then end up overlording robots (in the AI sense, not their superfluous physical representation) that will fail because those underling users of old held all the flexibility of your organisation to external pressures and innovation challenges.
Which means you’re stuck with those no-good [i.e., good for each and every penny of your atrocious bonus payments] humans for a while.

Better train them to never ever deviate from standard procedures, right?
Wrong.
Since this: Though the title may look skewed and it is, there’s much value in the easy step underpinning the argument; indeed repetitive work makes users’ innate flexibility explode in uncontrolled directions.
So, the more you coax users into compliance, the worse the deviations will get. As elucidated, e.g., here [if you care to study after the pic; study you’ll need to make something of the dense prose; ed.].

So, here too your information security efforts may go only so far; you must train your users forever, but not too much or they’ll just noncomply in possibly worse directions.

Oh well:
[Yeah, Amsterdam; you know where exactly this depicts your efforts – don’t complain about pic quality when it was taken through a tram’s window…]

Maverisk / Étoiles du Nord