Could it be that research has stumbled onto a find of fundamental significance…?
I’m not overstating this; on the surface it looks like a regular weaving error in the fabric of AI development: change one or a few pixels, and you can completely throw off a learned pattern-recognition net. Yes, this means that (most probably) neural nets are ‘brittle’ to these perturbations.
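To make the “one or a few pixels changed” brittleness concrete, here is a toy sketch in the spirit of a fast-gradient-sign (FGSM) perturbation. Everything in it is illustrative: the “classifier” is just a random linear model standing in for a trained net, and the step size is chosen analytically rather than found by an attack search.

```python
# Toy sketch of pixel-level brittleness: an FGSM-style perturbation
# against a tiny linear "classifier". The model is random and purely
# illustrative -- it stands in for a learned pattern-recognition net.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16
w = rng.normal(size=n_pixels)   # fixed "learned" weights (illustrative)
b = 0.0
x = rng.normal(size=n_pixels)   # a toy "image"

logit = w @ x + b               # >0 means class 1, otherwise class 0

# For a logistic loss, perturbing each pixel along -sign(logit)*sign(w)
# pushes the score toward the opposite class. Pick the smallest uniform
# per-pixel step guaranteed to cross the decision boundary.
eps = (abs(logit) + 1.0) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(logit) * np.sign(w)

logit_adv = w @ x_adv + b
print(f"per-pixel change: {eps:.3f}")
print(f"logit before: {logit:+.3f}, after: {logit_adv:+.3f}")  # sign flips
```

Each pixel moves by the same small amount `eps`, yet the predicted class flips; real attacks do the same thing with the gradient of a deep net instead of a hand-set linear model.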
But does this perhaps, moreover, mean it may be a first of its kind qua Gödel numbers…? I don’t mean the regular Hofstadter type: an input that, even when syntactically perfectly normal, will, when fed to a machine able to parse syntactically correct forms, nevertheless wreck the machine. The halting problem, but more severe qua damage.
Just guessing away here. But still…