A sad day for all you aficionados of this blog. After some five years and about 1163 posts (you’ll see…; my own, mostly with my own pics), this is the last of the (work)daily updates. Yes, I’ve managed. But I will turn to more serious, somewhat longer-form content, so will stop the drivel. I will not post daily, but when I do … And I’ll intersperse with some margin-notes posts. Per 1/1 these will have no picture, the long-form ones will – just check the link-post’lets and you’ll see. In line with the season: Enjoy less frequent but more professional, beautiful fireworks.
Or be safe with your own fireworks. Else, stand candidate for the Darwin Awards, which is also OK with me, especially if you’ve not appreciated my blog; excepting the few I care for ;-|
Now then …
[Some room available. Live and die to be worth it, or take a hike; Arlington]
After you read this, you’ll get the following:
- [After two empty lines] ‘seed AI’ may not be necessary. Think of how the Classics built their arches: once the arch stands, the supporting scaffold may be removed. Same here; some ‘upbringing’ by humans may suffice, even opening the possibility of ethics education / steering;
- Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment / A good description of a human from the perspective of a chimpanzee. – correct. As such, slightly ad hominem and we know what that is about (here);
- If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do. – an interesting argument, as I had the idea of drafting a post about a new kind of ‘intelligence’, apart from the human/animal one.
An interesting and profound read… Plus of course:
[“Intelligence”… Winter Wonderland London]
As we turn the leaf towards a new year, let’s not forget what values – in operation, operationalised – protect our Human Rights, in the form of de-mock-racy, and how quickly they are being repealed by, e.g., AI and fake news but in particular by the deployment of bots, as here.
Yes I know, that’s three layers of tools, but still: the focus is on the first two, while the last plays almost the foulest role.
Yes I know, the ‘operationalised’ part may need elucidation on the side of ‘transparency’, ‘access and inclusion’ etc., but when you read on past the link, you’ll understand that the issue is society-wide, not just FCC / net-neutrality.
Well, that was a quickie… hence:
[München, for zero (as in: 0.0) reason]
Remember Castronova’s Synthetic Worlds? You should. If only because this, is, still, in The Atlantic [edited to add: with an intelligent comment/reply here] – because that is still around, it allows for comparison of the origins, and the utopian futures as described in the book, with the current state of the world. Where we have both the ‘hardly any change noticeable in current-state affairs when compared with the moonshot promises of yesterday’ and the ‘look what already has changed; the whole Mobile world change wasn’t even in those rosy pictures’ and ‘hey, don’t critique yet, we may only be halfway through any major tectonic humanity shifts’.
Where the latter of course ties in with the revitalised [well, there‘s a question mark attached, like: what do you mean by that; is it ‘life’ and would we even want that?] version of the ‘singularity’, that extreme nirvana of uploaded, eternal minds. As if the concept of ‘mind’ or ‘intelligence’ would make any sense in that scenario. And also, since this (pronounced ‘Glick’, one guru told me lately; the importance of continued education), where the distinction between ‘eternal’ and ‘forever in time’ clearly plays up, as in this (same), against …
In circles, or Minkowski‘s / Penrose‘s doodles [wormholing their value], the issue comes back to flatlander (i.e., screen) reality, if there is such a thing…
Oh well; leaving you with:
[Warping perspectives, in ‘meaning’, too; Salzburg]
May it be that research has stumbled onto a find of fundamental significance…?
I’m not overstating this; on the surface it looks like a regular weaving error in the fabric of AI development: one or a few pixels changed can completely throw off some learned pattern-recognition net. Yes, this means that (most probably) neural nets are ‘brittle’ in this respect.
But does this perhaps moreover mean it may be a #first qua Gödel numbers…? I mean not the regular Hofstadter type that, even when syntactically perfectly normal, will, when fed to a machine able to parse syntactically correct forms, nevertheless wreck the machine. The halting problem, but more severe qua damage.
Just guessing away here. But still…
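To make the ‘few pixels’ brittleness concrete, here’s a minimal, hypothetical sketch – a toy linear classifier standing in for a real net, with all names and numbers my own rather than from the linked research – of the fast-gradient-sign idea behind such adversarial perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "pattern-recognition net": score = w.x + b, label = sign(score).
w = rng.normal(size=784)           # one weight per "pixel" of a 28x28 image
b = 0.0

def classify(x):
    return int(np.sign(w @ x + b))

# Start from a random "image", then nudge it so it scores exactly +1.
x = rng.normal(size=784)
x = x - ((w @ x + b - 1.0) / (w @ w)) * w

# The gradient of the score w.r.t. x is just w, so step every pixel a tiny
# amount against sign(w) -- the fast-gradient-sign attack, in miniature.
eps = 2.0 / np.sum(np.abs(w))      # tiny per pixel, decisive in total
x_adv = x - eps * np.sign(w)

print(classify(x))      # 1
print(classify(x_adv))  # -1: the tiny per-pixel change flips the label
```

The per-pixel change here is on the order of 0.003 – invisible in an image – yet summed over all pixels it is enough to flip the label; one proposed explanation for why real nets show the same effect is that they behave near-linearly around any given input.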
Oh, and (of course another pic; no worries not dangerous):
[Yup, dingy car (license plate!) being overtaken by actual productivity; Canada]