Remember Castronova’s Synthetic Worlds? You should. If only because this is, still, in The Atlantic [edited to add: with an intelligent comment/reply here] – because that is still around, it allows for comparison of the book’s origins, and the utopian futures it described, with-or-to the current state of the world. Where we have both the ‘hardly any change noticeable in current-state affairs when compared with the moonshot promises of yesterday’ and the ‘look what has already changed; the whole Mobile-world change wasn’t even in those rosy pictures’ and the ‘hey, don’t critique yet; we may only be halfway through any major tectonic humanity shifts’.
Where the latter of course ties in with the revitalised [well, there’s a question mark attached, like: what do you mean by that; is it ‘life’, and would we even want that?] version in ‘singularity’, the extreme nirvana of uploaded, eternal minds. As if the concept of ‘mind’ or ‘intelligence’ would make any sense in that scenario. And also, since this (pronounced ‘Glick’, one guru told me lately; the importance of continued education), where the distinction between ‘eternal’ and ‘forever in time’ clearly plays up, as in this (same), against …
In circles, or Minkowski’s / Penrose’s doodles [wormholing their value], the issue comes back to flatlander (i.e., screen) reality, if there is such a thing…
Oh well; leaving you with:
[Warping perspectives, in ‘meaning’, too; Salzburg]
Yet again, some seem to not understand what they’re talking about when it comes to transparency in AI…
Like, here. Worse, this person seems to be a rapporteur to the European Economic and Social Committee advising the European Commission. If that sounds vague – yes, it does, even for Europeans.
For the ‘worse’ part: the umpteenth Error, to consider that the secrecy of algorithms is the only thing that would need to change to get transparency about the functioning of a complete system.
1. The algorithm is just a part of the system, and the behaviour of the system is not determined in anything close to a majority part by the algorithm – the data fed to it, and the intransparent patterns learned by it, are. The transparency needs to be about the algorithm, but much more about the eventual parameters as learned throughout the training time and the training/tuning after that. [Update before press release: There seems to be an erroneous assumption by some way too deep into EC affairs that the parameters are part of the ‘algorithm’, which is Newspeak at its worst, and counterproductive certainly here, and hence dangerous.]
2. The algorithm can just be printed out … if anyone would need that. One can just as easily run an AI code analyser (how good would that be? They exist already, exponentially increasing in quality and savviness) over the source code or decompiled code.
3. The eventual parameters … not so much; they’re just there in a live system; unsure how well they are written out into any file or so (they should be, for backup purposes – when, not if, AI systems get legal personhood eventually (excepting the latter-day hoaxes re that), will a power switch-off be the same as attempted murder, and/or what would the status of a backup AI ‘person’ be ..?).
4. Bias, etc. will be in the parameters. The algorithms, mostly-almost-exclusively, will be blank slates. No-one yet knows how to tackle that sufficiently robustly, since even if the system is programmed (algorithm..!) to cough up its parameters, the cleverer systems will know (?) how to produce innocent-looking parameters instead of the possibly culpable actual ones. Leads into this kind of trickery have been demonstrated to develop (unintentionally) in AI systems in actual tests.
5. How to trick AI pattern-recognition systems … the newest of class breaks have just been demonstrated in practice – their theoretical viability had been proven long before – e.g., in this here piece as pointed out only yesterday [qua release; scheduled last week ;-]. Class break = systemically unrepairable … [ ? | ! ].
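To make points 1 and 5 above concrete, a toy sketch (all numbers and names here are invented for illustration, not any real system): the ‘algorithm’ below is a few lines of logistic regression and could be printed out in full, yet it tells you next to nothing about behaviour – two different learned weight vectors produce opposite classifications from identical code, and a small gradient-sign nudge to the input (the FGSM-style trick behind adversarial examples) flips a prediction without touching algorithm or parameters at all.

```python
import numpy as np

def predict(w, x):
    """The 'algorithm': plain logistic regression. Publishing this function
    reveals nothing about how a trained system will actually behave."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x = np.array([1.0, -0.5, 2.0])          # one input to classify

w_a = np.array([ 2.0, 1.0,  0.5])       # one set of learned parameters
w_b = np.array([-2.0, 1.0, -0.5])       # another set; same algorithm

print(predict(w_a, x))                  # ~0.92 -> class 1
print(predict(w_b, x))                  # ~0.03 -> class 0: behaviour lives
                                        # in the parameters, not the code

# Point 5: for logistic regression, the gradient of the score w.r.t. the
# input is just w, so stepping the input against sign(w) lowers the score
# (an FGSM-style adversarial perturbation).
eps = 1.0
x_adv = x - eps * np.sign(w_a)
print(predict(w_a, x_adv))              # ~0.27 -> prediction flipped
```

So even full ‘algorithm transparency’ leaves both the learned behaviour and its adversarial fragility untouched – which is the whole point of items 1 through 5.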
Let’s hope the EC gets irrelevant just that little bit less quickly, by being provided with sound advice. Not the bumbling little boys’ and girls’ type of happy-preppy too-dim-wits. [Pejorative, yes, but not degrading: they should, maybe not could, have known better ..!]
[Already at a slight distance, it gets hazy what goes on there; from the Cathédrale]
Recently, I was reminded (huh) that our memories are … maybe still better than we think, compared to the systems of record that we keep outside of our heads. Maybe not in the ‘integrity’ of them, but in ‘availability’ terms. Though there, too, there is some unclarity whether availability regards the quick recall, the notice-shortness of ‘at short notice’, or the long-run thing, where the recall is ‘eventually’ – under multivariate optimisation of ‘integrity’ again. How ‘accurate’ is your memory? Have ‘we’ in information management / infosec done enough definition-savvy work to drop the inaccurately (huh) and multiply-interpreted ‘integrity’ in favour of ‘accuracy’, which is a thing we actually can achieve with technical means, whereas the other intention oft given – of data being exactly what was intended at the outset (compare ‘correct and complete’) – is not, or do I need to finish this line that has run on for far too long now …?
Or have I used waaay too many ””s ..?
Anyway, Part II of the above is the realisation that integrity is a personal thing, towards one’s web of allegiances as per this, and that in infosec we really need to switch to accuracy; and Part I is this XKCD:
All Are Ardently Awaiting – stop, semantics go over syntactic alliteration – the release of Asibot, as here.
Because we all need such a system. The inverse of Dragon NaturallySpeaking (now under Nuance, too little heard of as well!) combined with a ghost writer, as it were / is / will be. When prepped with one’s own set of texts, it should be able to generate useful groundwork for all those books you have been wanting to write for a long time but couldn’t get started on.
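Asibot’s internals aren’t public, so purely as a hedged sketch of the ‘prepped with one’s own texts’ idea: a word-level Markov chain fitted to a stand-in corpus. Real systems use neural language models, but the principle – drafting text that echoes your own style – is the same; the corpus and seed below are made up for the demonstration.

```python
import random
from collections import defaultdict

def fit(corpus_words, order=2):
    """Map each run of `order` words to the words seen following it."""
    model = defaultdict(list)
    for i in range(len(corpus_words) - order):
        key = tuple(corpus_words[i:i + order])
        model[key].append(corpus_words[i + order])
    return model

def generate(model, seed, n_words=20, rng=None):
    """Continue from `seed`, sampling each next word from the model."""
    rng = rng or random.Random(0)       # fixed seed: repeatable toy output
    out = list(seed)
    for _ in range(n_words):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:               # dead end: continuation never seen
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Stand-in for 'one's own set of texts':
corpus = ("the system learns the style of the author and "
          "the system drafts text in the style of the author").split()
model = fit(corpus, order=2)
print(generate(model, seed=("the", "system")))
```

Every generated word is, by construction, drawn from your own corpus – which is also why such a system might surface recurring themes you weren’t aware of putting in.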
Now, would such a system be able to extract hidden layers, stego-type themes, that are in your texts without you even being aware of them ..? What kind of services would be most interested? Oh, that one’s answered before I finished typing; the three-letter-abbreviation (uh?) kind, of course.
Still, would very much want to meddle with the system… Plus:
[If applicable to music, sunny Spring days in Salzburg, too for …]
You get that. Since Verify → ¬Trust. When you verify, you engender the loss of trust. And since Trust is a two-way street (either both sides trust each other, or one will lose initial trust and both will end up in distrust), verification leads to distrust all around – linked to individualism and experience [we’re on the slope to less-than-formal-logic semantics here], this will result in fear all around. And then there are Michael Porter’s two books, not to mention Ulrich Beck in his important one. So, if you still come across any type that hoots ‘Trust, but verify’, you know whom you’ve met.
Since the above is so dense I’ll lighten up a bit with:
Part of the $10 million I spent on gambling, part on booze and part on women. The rest I spent foolishly. (George Raft)
Which is exactly the sort of response we need against the totalitarian bureaucracy (i.e., the complete dehumanisation of totalitarian control – which approaches a pleonasm) that the world is sliding into. Where previously humanity had escapes, like emigrating to lands far, far away, that option is no more. Hopefully, ASI will come in time not to be co-opted by the One Superpower; but then, two avenues remain: a. ASI itself is worse, and undoes humanity for its stubborn stupidity and malevolence (the latter two being a fact); b. ASI will bring the eternal Elysium.
Why would it b. ..?
And if it doesn’t arrive in time, a. will prevail, since the inner circle will shrink asymptotically, which is biologically unsustainable.
Anyway, on this bleak note I’ll leave you with:
[Escape from the bureaucrats; you-know-where]