So, once we had Turing machines. If you don’t know what those were / are, please recuse yourself from the discussion…
Now that we have only a handful left, globally, the following [my following’s also a handful, ya’know]:
After that, we had hardware (with what we now call firmware; still programmable). Then Operating Systems et al. (remember enterprise buses, message queuing ..?), and other Software that was based on algorithms just like firmware was/is, but now recognisably and readably so. Next came parametrisation: your SAP software just sits there until you direct the traffic flow through the system based on the 30,000+ parameters in SAP that you set, or not, and possibly on the content of the data as well (yes, this goes for regular algorithms too; think if-then). Also parametrisation on, e.g., your ‘software defined’ networks… Semi-static algorithms, that is, with changes controlled not only by data but by almost-(relatively-)static params.
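To make the ‘semi-static’ point concrete, here is a minimal sketch of such a parametrised algorithm. The parameter names and values are hypothetical, standing in for an SAP-style configuration table; the point is that the code is fixed, while routing is steered by almost-static parameters plus the content of the data itself:

```python
# A toy 'semi-static' parametrised algorithm: the logic is fixed in code,
# but the path taken is decided by rarely-changed parameters AND the data.

# Hypothetical parameters, as if read from a config table (not real SAP names):
PARAMS = {
    "approval_threshold": 1000.0,        # amounts above this need review
    "auto_approve_vendors": {"ACME", "GLOBEX"},
}

def route_invoice(vendor: str, amount: float, params: dict = PARAMS) -> str:
    """Plain if-then logic; parameters and data together decide the route."""
    if vendor in params["auto_approve_vendors"] and amount <= params["approval_threshold"]:
        return "auto-approve"
    if amount > params["approval_threshold"]:
        return "manual-review"
    return "standard-queue"
```

Change a parameter and the system behaves differently tomorrow, with not a line of code touched; that is the whole trick, and it is decades old.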
Suddenly, when we have the latter but with the parameters badly pin-downable, i.e., buried in neural network weights, we get #algorithmsagainsthumanity; that might seem a step further abstracted, but isn’t. Advantage: there’s nothing new here, also in e.g., auditing; just some translation and extension to do.
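The same structural point, one step on, as a sketch (the weight values here are invented, as if produced by training rather than set by a human): the control flow is still an ordinary algorithm, a fixed sequence of multiply-adds, only now the parameters no longer map onto readable business rules.

```python
import math

# Hypothetical learned weights: set by an optimiser, not by a person,
# so no single weight corresponds to an inspectable 'rule'.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = -0.1

def tiny_neuron(features: list[float]) -> float:
    """One neuron: weighted sum plus sigmoid. Auditable in form, opaque in meaning."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))   # squash to a score in (0, 1)
```

Structurally this is the parametrised algorithm again, so the existing audit toolkit applies after ‘some translation and extension’; what changed is only that the parameters went from a config table you can read to weights you can’t.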
Major drawback: the discussion about the dangers of ‘algorithms’ is mostly carried on among ‘governance’/legally affected debatebabblers, of the hideous kind, who have zero clue what an ‘algorithm’ is. But hey, they babble so nicely, so why don’t you let them? Well, for one: they pretend to understand what they’re babbling about and pretend they have a say in these matters. Which, given their demonstrated sheer incompetence, they do not. And their conclusions have zero weight anyway, because their scope and depth are a joke. The only thing they seem capable of is debating into eternity without ever reaching a conclusion, yet they’ll remain happy they had such a fruitful debate; the only thing they’ll have nailed is the waste of time. Let alone that they themselves for certain consider themselves ‘moral’, though their mileage may vary widely…
Or is it about ML disregarding all that happened in systems development before, with the ‘ethicists’ now trying to catch up from the wrong end ..? Like here, where they come up with what has been around for over half a century. I suggest that all who think they’re seasoned debaters on the ‘dangers’ of ‘algorithms’ wait another half a century before being allowed / wheeled into the discussion again.
And yes, the previous two paragraphs emphatically exclude the few who do understand their business, you know, like the late Stephen Hawking (salve, eminence), prof. Bengio, et al.; they form the opposite end of the wisdom scale. But they seldom ‘debate’; they discuss. If you don’t know the difference, see the first line above.
There’s just no angle that helps, other than offboarding the debatebabblers.
And let them concentrate on:
[If you don’t agree with all the above but recognised Ferranza: you’re triply wrong. It’s Noto (Palazzo Nicolaci di Villadorata) and Ferranza doesn’t even exist.]