Common(s) as privacy and vice versa ..?

Remember from your econ class that concept of The Commons, and how problematic it was? Is?
There was an intriguing post recently on how Free Speech might be considered and deliberated in terms of a commons being exhausted by undue over-use (abuse) — by its use alone ( → ). Leading to aversion to the concept itself, not to the abuser or his (sic) reason(s) for over-use, apparently ‘valid’ locally but not globally.

Which, as is my wont of the moment, driven by personal business interests, I took to be applicable to Privacy as well. Maybe not in the same way, but … This will need quite some discussion between me on the one hand, and, on the other, peers and others who would actually know what they’re talking about. Throwing in a bit of the Anglo-American data-isn’t-yours versus European (‘continental’ — will Brexit – which ever more starts to sound like a lame Benny Hill kind of joke – change that ..??) data-is-the-data-subject’s-always divide, and some more factors here and there. Complicating matters, but hey, life’s not perfect.

Waddayathink? In for a discussion ..? Let’s start!

And:
[Not so very common-s; Toronto]

Authentic means work, you see?

Recalling the recent spat about passwords again (and elsewhere), and some intriguing, recent but also not so recent news (you get it when you study it), it seems only fair to the uninitiated to clarify some bits:
Authentication goes by something you know, something you have or something you are. Password(s), tokens or biometrics, in short. All three have their drawbacks.

But that’s not the point. The point is that authentication is about being unspoofable by anyone but the designated owner.
That is why you shouldn’t dole out your passwords (see the first link above), e.g., by writing them on a post-it™, whereas writing a full, long passphrase on just one slip of paper that you guard more zealously than your money will work.
That is why tokens shouldn’t be stolen. Which you might not discover until it’s too late; and tokens have a tendency to be physical stuff that can be replayed, copied, etc., just like a too-short password. Maybe not as simply, but nevertheless.
Same with biometrics. When made simple enough for the generic user (fingerprints, ever so smudgy!), they’re also easily copied, off a lot of surfaces. Other biometrics may be more secure, i.e., harder to copy, but not impossible. And they open up possibilities for hijacks et al.: a focus on breaking into the systems in the login/authentication chain, et al.
Which brings attention to yet more vulnerabilities of Have and Are: both need quite a lot of additional equipment, comms and subsystems to operate, from the physical to the logical (and back) to the IS/IT levels. Weakest-link chains they are ..!

So, the strength of authentication covaries with the non-leakability of the key, since both correlate to the source determinant: in-one-hand-ity, close to the actual person whose identification-as-provided (by that person, or by anyone else posturing) needs to be authenticated. By which I mean that ensuring one item of authentication, closely glued to the person and with the simplest, fewest-link connection chain to the goal system(s), is best. The latter, clearly, is the written-down-verylongpassword method.
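To put a rough number on why length beats complexity here, a minimal back-of-envelope sketch in Python; the character-set sizes and example lengths below are illustrative assumptions, not a measurement of any particular policy:

```python
import math

def naive_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of `length` symbols drawn uniformly at random from an
    alphabet of `alphabet_size` possibilities."""
    return length * math.log2(alphabet_size)

# Illustrative assumptions only:
# an 8-character 'complex' password over ~94 printable ASCII characters...
print(f"8-char complex pwd : {naive_entropy_bits(8, 94):6.1f} bits")
# ...versus a 25-character lowercase-plus-space passphrase...
print(f"25-char passphrase : {naive_entropy_bits(25, 27):6.1f} bits")
# ...and a six-word Diceware-style passphrase, counted per word.
print(f"6 Diceware words   : {naive_entropy_bits(6, 7776):6.1f} bits")
```

The naive model assumes symbols picked uniformly at random; for passphrases built from dictionary words, counting bits per randomly chosen word (the Diceware line) is the fairer estimate, and it still comfortably beats the short ‘complex’ password.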

Just think about it. And:
[They’re called locks. Discuss (10pts); Ottawa]

Progress, friends, is here. Only, not everywhere. Yet. Say ‘No’ till then?

You know that the bright new future is here when, amid the torrent (figuratively referring to the physical phenomenon, nothing to do with the on-line tool(s)) of fake news, this still makes it into a headline: ATMs now beginning to be rolled out with Win10 ‘support’. To be completed by 2020, when support for Win7 stops. Right. 2020; probably not referring to the eyesight of the ones planning this, who are neither personally accountable nor duly informed of the risks.

Because otherwise, wouldn’t it be smarter to come up with a clever plan to do the roll-out within a month? If only to keep just about anyone from concluding that ATM security — or is it a signpost for overall infosec’s position? — isn’t taken as seriously as it should be ..?

It’s time there comes an agency, nationwide, worldwide, that has the authority to say NO!!! to all ill-advised (IT, which is the same thing these days) projects. Infosec professionals tried to ditch the Dr. No image, but it turns out it’s needed more than ever, to prevent the Stupid (Ortega y Gasset’s Masses, I guess) from endangering all of us, or at least from squandering the billions (yes) that could have been applied against world poverty etc. etc.

Oh, and:
[The UBO ‘humanity’ seems to be lost, here; Zuid-As Ams]

Yesterday, same thing.

This is sort-of the same as yesterday’s post, put into practice: your AGA now not only remotely slow-cooks but slow-betrays you. Slowly it either doesn’t cook your carefully prepped meat at all, or over-burns it. So the wretched short-lived lambkin died for nothing.
Would anyone know of any device out there that is duly protected against this sort of thing? Or whether (or not) this is a generic weakness: access from the outside offers access from the outside to anyone, to rattle the door. And some, through persistence or force applied, will find the door opens. Your convenience, theirs too. Same with ‘connected’ toys. Yes, they are.

Oh, and:
[May superficially look like an AGA but isn’t, not even a hacked architecture studio’s design, just purposeful – and beautiful – museum design in Toronto]

Learn you will… Recover, you might.

When your country’s largest retailer (primarily F&B, but non-F only recently growing as well) has finally heard about something-something-smart-fridge. And wants to do it Right and starts off with a pilot. Of, drumroll, a smart fridge magnet with a mic and barcode scanner for adding stuff to your on-line grocery list (on-site self-service pick / pick-up, or delivery, to follow separately). Didn’t know that existed already.
Nice idea, to include not (only) a deliberate barcode scanner (no creepy auto-scans) but also a mic for when you don’t have the product at hand (and fresh veggies wouldn’t make it; for a long time already not stickered but weighed at the (vast majority) non-selfscanned check-out).

But what security ..? For fun, e.g., putting reams of alcohol stuff on the to-pickup lists of unsuspecting meek middle-classmen who won’t understand but come home with some explaining to do (bonus for taking the stuff off the list once procured, so ‘no’ trace on the shopping list). For less fun, snooping off people’s shopping habits and getting rich (by ultra-focused ads or selling off the data, or by extortion-light once you get the Embarrassing Items in view). For even less fun but lulz (grow a pair), changing the list to turn some family member’s med-dietary choices into harmful variants. And don’t forget the option to (literally) listen in on very much that is said in the vicinity of the fridge. Could be anything, but probably privacy-sensitive.
But what security? The press release points to other countries’ supermarkets already offering the Hiku sensors. Nothing is unhackable. Exploit searches must be under way. People never learn. Reputational (corp) and personal-integrity (clients) damages may or may not be recoverable, at huge expense.

I’m not in, on this one. No need. Plus:
[Where you can learn; Zuid-As Ams]

Fake your news

So this is your future, part II:
Fake news is (to be – timeframe in question is ..?) battled by platforms that have full control over just about everything out there. By whatever algorithm these might bring to bear, most probably with a dose of ill-aligned AI, creating a filter bubble of the kind most beneficial to the platforms, for sure, which is the kind most profitable to their *paying* customers, which is the ad industry, which hence is by definition detrimental to the users: the global general public (sic).
Thus suppressing Original Content by users that isn’t verifiable against the ever narrowing ‘truth’ definitions that benefit the platforms.
Thus installing the most massive censorship ever dreamt of.
And despite some seemingly (!) benign user support in this…

In the olden days, anything of such ubiquity that it was factually (sic) a (inter)national utility, was nationalised to bring it under direct control of the People.
May we now see the appropriation of Fb by the UN, for exactly the same reason ..?

One can hope..? Plus:
[Rosy window on the world ..? Not even that; Zuid-As Amsterdam]

Right. Without -s

So, we’re into this era of giving up control over our lives. Where we’re either dumb pay-uppers, or (also) victims. Which in turn leads to questions regarding who will have any income at all, to pay for the service of being allowed to sit as stool pigeon until shot anyway.
Because the latter is what follows from this here nifty piece: Tesla not giving you your data unless they can sue you. The EU push for human-in-the-loop may need to be extended considerably, but it should, must. Possibly along a path similar to that of the original cookie directive: from weak opt-out to strong double opt-in plus all the privacy requirements (purpose / functional necessity, minimisation, etc. etc.).

Do we recognise here, again, the idea that though your existence creates it and it would be different for every human on earth (plus orbit), your data isn’t yours ..? Quod non! When someone takes what you produced (however indirectly! – inferred and metadata and all) without payment, that is theft or worse in any legal environment.
Is there anywhere a platform where the consequences of this global delineation are more clearly discussed, between Your Data Isn’t Yours Because We Process It, versus My Data’s Mine Wherever ..?

I’d like to know. And:
[Your fragile fortress…; Barça]

Behaviour is key to security — but what if it’s perfect?

When the latest news on information security points in the direction of the humans you still can’t get rid of (yet!), away from reliance on technical stuff, all are aboard the ‘Awareness is just the first step; you’ll need to change the actual behaviour of users’ train. Or should be, should have been, already for a number of years.
In Case You Missed It, the Technology side of information security has so far always gobbled up the majority of your respective budgets, with all of the secondary costs to that buried in General Expenses. And the effectiveness of the spend … has been great! Not that your organisation is anywhere near as secure as it could reasonably have been, but at least the majority of attackers rightly focus not on technology (anymore – though still a major headache) but on fickle user discipline. Oh, how dumb and incompetent these users are; there will always be some d.face that falls for some social engineering scam. Sometimes an extremely clever one, when aimed at generic end users deep down in your organisation, sometimes a ridiculously simple and straightforward one when targeting your upper management – zero sophistication needed, there.

The point is, there will always be some d.face that makes an honest mistake. If you don’t want that, you’ll have to get rid of all humans and end up with overlording robots (in the AI sense, not their superfluous physical representation) that will fail, because those underling users of old held all of your organisation’s flexibility towards external pressures and innovation challenges.
Which means you’re stuck with those no-good [i.e., good for each and every penny of your atrocious bonus payments] humans for a while.

Better train them to never ever deviate from standard procedures, right?
Wrong.
Since this: though the title may look skewed, and it is, there’s much value in the easy step underpinning the argument; indeed, repetitive work makes users’ innate flexibility explode in uncontrolled directions.
So, the more you coax users into compliance, the worse the deviations will get. As elucidated, e.g., here [if you care to study after the pic; study you’ll need to make something of the dense prose; ed.].

So, here too your information security efforts may go only so far; you must train your users forever, but not too much or they’ll just noncomply in possibly worse directions.

Oh well:
[Yeah, Amsterdam; you know where exactly this depicts your efforts – don’t complain about pic quality when it was taken through a tram’s window…]

Pwds, again. And again and again. They’re 2FA-capable ..!

Why are we still so spastic re password ‘strength’ rules ..?

They have been debunked as being outright counterproductive, right? Since they are too cumbersome to deal with, and are just a gargleblaster element in some petty arms race with enormous collateral damage and ineffectiveness.

And come on, pipl! The solution has been there all along, though having been forbidden just as long …:
Write down your passphrases! The loss of control by having some paper out there, e.g., on your monitor (Huh? Shared workspace, BYOD anyone? And why!? Why not keep the piece of paper in your wallet; most users will care for their money, and those that don’t miss some brain cells, for which same reason you wouldn’t want them at your workplace anyway), is minute, certainly compared to the immense increase in entropy, i.e., straight-out security gains.
And … when you keep your written-down pwd to yourself (e.g., against this sort of thing), it becomes the same thing any physical token is, and you’ve created your own Two-Factor Authentication without any investment other than the mere org-wide system policy change of requiring pwds of at least, say, 25 characters. (And promulgating this, but that shouldn’t be too hard; an opportunity to show you can make life easier for end users, for once, and a great opportunity for collateral instructions on (behavioural) infosec in general…)
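As a minimal sketch of what that policy change amounts to, in Python; this is my illustration, not any vendor’s actual policy API, and the 25-character figure is simply the example from the text above:

```python
# Length is the only rule; no forced digits, symbols or case mixing.
# The 25-character minimum is the example figure from the text, not a standard.
MIN_LENGTH = 25

def passphrase_acceptable(passphrase: str) -> bool:
    """Accept any passphrase that is long enough; impose nothing else."""
    return len(passphrase) >= MIN_LENGTH

# Illustrative checks:
print(passphrase_acceptable("Tr0ub4dor&3"))                                 # False: short, however 'complex'
print(passphrase_acceptable("three green boats drift past the old mill"))  # True: long, memorable, easy to write down
```

Combined with the keep-the-slip-in-your-wallet discipline, the written-down phrase then plays the role of the ‘have’ factor on top of the ‘know’ factor, which is the two-factor point made above.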

What bugs me is that already a great string of generations has been led astray, while all along the signs were on the wall – not the passwords on them, but the eventual, inevitable collapse of the system, by users who demonstrated par excellence that this security measure was too impractical to stick to, as evidenced in the still-strong and much-practised practice of writing down pwds. If people do some specific thing despite decades of instruction … might we consider the instruction not to fit humans’ daily operations ..? So that the ones seeking to Control [what pitiful failures, those ones …; ed.] will have to rescind?

So, written-down passphrases it is. Plus:
[Easy sailing to new lands, beats being stuck on Ellis; NY]

Leaking profiles

Got an attention raiser during an off-the-cuff discussion on data leakage. As in, not getting the first thing about what privacy has been since Warren & Brandeis’ eloquent definition, and its subsequent codification in pretty hard-core, straightforward laws.
The problem being that no theory of the firm (incl. public ones) allows the subsumption of employees into slavery, of mind or otherwise. Think Universal Declaration of Human Rights, Article 12. Hence, tracking and tracing every keystroke of employees (i.e., treating them as suspects of, e.g., data leakage before one has any a priori clue about any individual actually doing anything wrong, and without having been granted any rights of surveillance in this jurisdiction) is a crime in itself.
And no, the comparison with street cameras that bother no-one and make everyone safer is a lie on two counts. And, in many countries (the civilised ones; a criterion in reverse), such (total or partial) surveillance hasn’t been outlawed without reason.
So, your data leakage prevention by tracing everyone is an illegal act. Don’t.

No, your security concerns are not valid. Not the slightest, compared to the means you want to deploy. Stego to files of all kinds, when all are aware of its implementation, may help much better. And it supplies you with the trace you want; not to your employee that you (but no-one else) suggest is rogue – (s)he knows about the traceability so will be self-censored (ugch) into compliance – but to the third party that spilled the beans. Since stego-cleansing tools may exist, your mileage may vary. Encryption, then, the destruction of content accessibility for those not authorised (through holding a password/token/~), will fail whenever anything you send out might have to be read off a screen; the PrtScn disabling being undone by good ol’ cameras as present in your good ol’ S8 or P900 (though this at 0:50+ is probably the typical TLA stakeout vid/result).
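To show the per-recipient ‘stego trace’ idea in its most toy-like form, a sketch in Python; the zero-width-character encoding is my illustrative assumption, not any particular product, and exactly the kind of marker a cleansing tool would strip:

```python
from typing import Optional

# Zero-width characters used as invisible 0/1 symbols (illustrative choice only).
ZW0, ZW1 = "\u200b", "\u200d"

def embed_tag(text: str, recipient_id: int, bits: int = 16) -> str:
    """Append an invisible per-recipient marker to an outgoing copy of a text."""
    marker = "".join(ZW1 if (recipient_id >> i) & 1 else ZW0 for i in range(bits))
    return text + marker

def extract_tag(text: str, bits: int = 16) -> Optional[int]:
    """Recover the marker from a (leaked) copy, if it survived."""
    zw = [c for c in text if c in (ZW0, ZW1)][-bits:]
    if len(zw) < bits:
        return None  # marker stripped, or never present
    return sum(1 << i for i, c in enumerate(zw) if c == ZW1)

copy_for_recipient_42 = embed_tag("Quarterly figures, confidential.", recipient_id=42)
print(extract_tag(copy_for_recipient_42))  # 42: identifies which copy leaked
```

Which also illustrates the caveat above: anyone aware of the scheme, or any normalising copy-paste, may wipe the marker, so mileage varies indeed.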

Conclusion: excepting very, very rare occasions, your data leakage prevention by employee surveillance will land you in prison. Other methods might be legal, but fail. Your thoughts now on outbound traffic keyword monitoring. [Extra credit when including European ‘human in the loop’ initiatives.]

And:
[No privacy in your prayers, or ..?? Baltimore Cathedral]

Maverisk / Étoiles du Nord