Just as we’re entering dystopia at the speed of light (rounding down the speed of scope creep, for creepy applications of the third AI kind), when e.g. Customs are scrambling to implement AI-supported profiling systems, acknowledging that human profiling is already less adequate, while having built in zero safeguards against abuse by the system or by its overlords behind the scenes, on a completely different but associated line an idea occurred to me:
Can’t we deploy suitably trained AI to rate, detect and respond to online discussions on socmed (@Twitter, @LinkedIn et al. in particular) to prevent Godwins ..?
Because I encountered some situations where the first kaizen steps in that direction were detectable by a human [disclaimer: me]. By, e.g., repeating arguments in different wording, not responding to others’ arguments, slowly inserting derogatory pointers to the others’ character (without knowing a single thing about them), etc.
When all the talk is about oh how great AI already is at textual analysis, this should be a simple thing to pull off, right ..?
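Indeed, even without any grand AI, the first two warning signals above (repeating the same argument in new wording, and sliding into ad-hominem vocabulary) can be roughed out with trivial text heuristics. A toy sketch, with a made-up similarity threshold and a made-up word list purely for illustration, nothing tuned or production-worthy:

```python
from difflib import SequenceMatcher

# Hypothetical threshold and lexicon -- illustration only, not a tuned model.
REPEAT_THRESHOLD = 0.6          # similarity ratio above which a post counts as a rehash
DEROGATORY = {"idiot", "fool", "clueless", "troll"}

def is_repeat(new_post: str, earlier_posts: list[str]) -> bool:
    """Flag a post that largely restates one of the author's earlier posts."""
    return any(
        SequenceMatcher(None, new_post.lower(), old.lower()).ratio() > REPEAT_THRESHOLD
        for old in earlier_posts
    )

def has_derogatory(post: str) -> bool:
    """Flag ad-hominem vocabulary aimed at the other party."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & DEROGATORY)

def escalation_score(author_posts: list[str]) -> int:
    """Count warning signals across one author's contributions to a thread."""
    score = 0
    for i, post in enumerate(author_posts):
        if i and is_repeat(post, author_posts[:i]):
            score += 1                      # same argument, different wording
        if has_derogatory(post):
            score += 1                      # character attack creeping in
    return score
```

An actual moderation system would of course need semantic (not character-level) similarity, a real toxicity model, and context awareness; the point is only that the raw signals are crude enough to be machine-visible.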
Just hoping. And:
[The eternal cycle of (mostly) demise, or a ferris wheel. This city has seen it all..? Cut short at the Pinnacle at least it is. This pic.]