Apologies for the play on the pronunciation of Auditing AI for Camouflage. Auditing AI for Stealth would’ve been taken to be abbreviated with two s’es, wink wink, right?
And I don’t mean this sort of training. Or maybe I do:
How are adversarial examples in AI not what was previously called camouflage..? Hence:
Why aren’t camouflage techniques, or rather anti-~, used in training AI systems, and in auditing them for quality of operational survivability ..?
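To make the camouflage analogy concrete, here’s a toy sketch (mine, not from any particular auditing standard) of the fast gradient sign method: nudge an input just enough, in the direction the classifier is most sensitive to, that its decision flips. The logistic-regression weights and the epsilon are made-up illustration values.

```python
import numpy as np

# Toy logistic-regression "detector": w and b stand in for pre-trained weights.
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    """Probability that x belongs to class 1 (the 'target' class, say)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: nudge x by eps in the direction that
    increases the loss for true label y -- digital camouflage, in effect."""
    p = predict(x)
    grad = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5])        # a point the model classifies as class 1
x_adv = fgsm(x, y=1, eps=0.8)   # 'camouflaged' version of the same point
print(predict(x) > 0.5, predict(x_adv) > 0.5)   # -> True False
```

Auditing for operational survivability would then include checking how small an eps suffices to flip decisions on inputs the system must get right.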
And consider that this area of camou-vs-anti is old hat.
Weren’t the ‘original’ F-117 stealth aircraft not so stealthy after all, through flipping..? [As here; I just can’t find a link to the rumour that already at operational release, Russian 1950s radars kept in reserve behind the Urals could pick up F-117s with ease.] And Mk 1 eyeballs already helped a bit, too, and still do. When the technology concerned was developed, wasn’t it overly focused on one particular strand of the arms race, unaware of the huge context, hence going to the (math-wise) limit of a race to nowhere while forgetting other ‘basics’?
And what were IEDs other than stealthed explosives?
Apparently, the lessons haven’t been learned. Lessons will be repeated until they are learned. In this case, at the cost of how many lives, and how much sorrow for relatives? And how much cost to society (through not spending the ginormous investments on bettering the fate of the underprivileged)..?
Plus, the AI field is (going to be) deployed as a new battle area, apparently.
Now, the focus is on protecting pedestrians against autonomous cars, even when protective security is still a problem, possibly not categorically solvable.
Where training is focused on picking out pedestrians, to avoid (or..? Apparently, there was a no-braking system override, since avoiding fender damage from vehicles following too closely behind was more important). And adversarial examples are all around. Still, to poke fun (really…!?) and to learn.
But class-wide solutions may not be. Case in point: it’s all still about the above camouflage/stealth issues, the same as ever before. When humans were enlisted to see through it, things didn’t necessarily work out (just an example). Though simple stuff did work.
So, you better learn how crypsis and mimicry work.
Because, question: Why did the pedestrian cross the road? To catch insurance claims? Where else do you deploy AI systems, and may encounter such ‘pedestrians’ in analysed traffic/environments?
Maybe complex hybrid AI systems are needed, required. Like, some that do pay attention to secondary (Kohonen) classifications.
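A minimal sketch of what a secondary Kohonen classification could look like (my illustration; the grid size, learning schedule, and data are made up): a self-organising map learns prototype vectors on a grid, and which unit an input lands on gives a second, unsupervised opinion alongside the primary classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny Kohonen self-organising map: a 4x4 grid of 2-D prototype vectors.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
weights = rng.random((16, 2))

def train(data, epochs=20, lr0=0.5, sigma0=2.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood kernel
            weights[:] += lr * h[:, None] * (x - weights)       # pull prototypes toward x

# Two well-separated clusters: the 'obvious' classes a primary net might see.
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])
train(data)

# Secondary classification: which map unit does a new sample land on?
bmu_a = np.argmin(((weights - [0.2, 0.2]) ** 2).sum(axis=1))
bmu_b = np.argmin(((weights - [0.8, 0.8]) ** 2).sum(axis=1))
print(bmu_a != bmu_b)   # distinct units: the map separates the clusters
```

In a hybrid system, disagreement between the primary classifier and where the SOM places an input could flag the ‘camouflaged pedestrians’ for a closer look.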
Whereas, as the above camouflage pics showed, humans have only been able to recognise the obvious, enough to survive the savannah (hey, predator camouflage was also just good enough to work a little bit, not more, given evolutionary development costs), systems learning from scratch may need to be bigger than our brains. Against insurance claims, as that seems to be the more economical angle (evo, too; evo favours the economical over the ethical…), but also in other fields, against deceit, on purpose or not.
Now, I haven’t given you the answer on how to audit your AI system for such (high) quality. Duh – I’m only approaching a publication (date) and it could be my daily bread. Hire me and I can tell ;-/
[Already here, it works a bit (when seen through your brows); Twente AFB again, analog pic again]
Oh and to close out, a recent find: