Already, we were aware that
- With ML systems, the lines between software/fixed algorithms, parametrisation, and the semantic meaning of the outcomes are blurred. There is no single ‘place’ where the ‘logic’ sits or is stored/used; it’s all getting mushy, and that’s not a good thing;
- The law wants neat yesteryears’ algorithms (protocols, parameters, provable actions upon intent, etc.);
- Adversarial AI exists, whether we call it AI or the mere ML that it is.
All three of these in concert don’t give hope. As explained here and more profoundly here, ‘hacking’ may not be appropriately defined, if it currently is at all, once one uses adversarial AI to mess with ML-driven (literally) systems. The latter is more like solicitation or so…?
To expound, I copy a little from the article:
“Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice include (i) inadequate coverage of crime, (ii) missing or skewed security incentives, and the (iii) prospect of chilling critical security research. This last one is particularly dangerous in light of the important role researchers can play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence appears to present.”
“… why this lack of clarity represents a concern. First, courts and other authorities will be hard-pressed to draw defensible lines between intuitively wrong and intuitively legitimate conduct. How do we reach acts that endanger safety—such as tricking a driverless car into mischaracterizing its environment—while tolerating reasonable anti-surveillance measures—such as makeup that foils facial recognition—which leverage similar technical principles, but dissimilar secondary consequences?
Second, and relatedly, researchers interested in testing whether systems being developed are safe and secure do not always know whether their hacking efforts may implicate federal law … Third, designers and distributors of AI-enabled products will not understand the full scope of their obligations with respect to security.”
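For concreteness: the kind of ‘tricking’ the quoted passage has in mind is usually done with adversarial examples, i.e. small, deliberate input perturbations that flip a model’s decision. Below is a minimal sketch of the idea for a toy linear classifier; every number and name in it is made up for illustration and none of it comes from the article.

```python
import numpy as np

# Toy linear classifier (all numbers are illustrative, not from the article):
# predicts class 1 when w @ x + b > 0, else class 0.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([1.0, 0.5, 0.2])      # "clean" input; score = 1.7 -> class 1
assert predict(x) == 1

# Fast-gradient-sign-style perturbation: for a linear model, the gradient
# of the score with respect to the input is simply w, so nudging every
# feature by eps against the sign of w lowers the score as fast as
# possible per unit of change.
eps = 0.5
x_adv = x - eps * np.sign(w)       # score drops to 1.7 - 0.5 * 3.5 = -0.05
assert predict(x_adv) == 0         # the decision flips
```

Real attacks work the same way on deep networks, only with the gradient computed by backpropagation rather than read off directly; the point here is merely that the perturbation is small, systematic, and nothing like classic ‘unauthorised access’.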
Yes, there’s a call to action.
Since “We are living in world that is not only mediated and connected, but increasingly intelligent. And that intelligence has limits. Today’s malicious actors penetrate computers to steal, spy, or disrupt. Tomorrow’s malicious actors may also trick computers into making critical mistakes or divulging the private information upon which they were trained.”
Haven’t heard too much reflection on this, yet.
Would definitely want to hear yours. Please.
[Edited to add: Do also read between the lines of this, qua probably mostly surreptitious data capture contra the GDPR… And what if I want my data to be removed from the ML parameters ..?? See upcoming Monday’s post]
Oh, and:
[On Mare Nostrum I mean Mare Liberum, the legal ship may have sailed. On a vast expanse of not much. Outside Porto]