A somewhat cryptic title again.
This, about the latest strand of news (‘thread’, or worse..?) wherein ML is accused of gobbling up so much compute capacity that … well, at least that the world will come to an end through global warming owed so much to ML’s power consumption. Oh no, wait; until recently ‘blockchain’ was the culprit. Where of course the solution already was to dumb down the ‘chain’-algorithms to a level at which no-one could be sure the self-righting ship [hey, look, that has its own wiki: here, with nice boat pics to boot] mechanisms were still in operation. If they ever would have worked, and then in fully public chains only, … if.
Now then, the ML part. As here. Given a. it’s historical data, not logic; b. the growth is nicely exponential; c. we all know that exponential is only ever a local characteristic ..! [yes, sic]; d. meltdown hasn’t been upon us yet … we should conclude e. that apparently, what ML finds in data doesn’t necessarily have any bearing on reality, and f. that we should do something; e.g., adapt our methods [which was over-, very over-due anyway ..!]:
Don’t rely on induction.
At best, rely on deduction [e.g., from business process analysis downwards ..!!] and then verify / falsify with data analytics; a sketch of that deduce-then-verify order follows below, after this list. Analogous to [but at a sharp angle with] this.
And/or look for hidden patterns in the data, but do not assume that there are any worthwhile ones ..!! [a null-baseline sketch of that scepticism also follows below]
And if you do go pattern-hunting, push the induction – which until here wasn’t anything of the kind – all the way through: analyse the findings into theory/logic, which gives you back the explicit rules you’d need to turn the whole thing into either expert systems [the fruitful path] or episodic learning [the haphazard amateurs’ path]. A rule-extraction sketch follows below.
And, at the very least, do not – I repeat, do not – run any analysis before you have clearly established that you have the right data for your problem [a minimal data-fitness gate is sketched below]. If you don’t know what your problem is, woe unto you; turn to the ‘At best’ point above.
And, realise that future production runs will see – relatively …! – moderate flows of transactions through your system, not the ginormous bulk of bwarp that you train the system on [a back-of-envelope sketch of that gap, below].
And remember this.
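As promised: a minimal sketch of the deduce-then-verify order, in Python. The business rule, the column names and the zero tolerance are all hypothetical stand-ins; the point is purely the direction of travel – the rule comes from process analysis, the data only gets to falsify it.

```python
import pandas as pd

# Deduced from business process analysis [hypothetical rule]:
# an invoice may never be approved before its matching goods receipt.
def rule_holds(row) -> bool:
    return row["approved_at"] >= row["goods_received_at"]

def falsify(invoices: pd.DataFrame, tolerance: float = 0.0) -> None:
    """Test the deduced rule against the data; report, don't 're-learn'."""
    violations = invoices[~invoices.apply(rule_holds, axis=1)]
    rate = len(violations) / len(invoices)
    if rate > tolerance:
        # Rule falsified (or the process is broken); investigate,
        # do not quietly re-fit the rule to the data.
        print(f"Rule falsified: {rate:.1%} of rows violate it")
    else:
        print("Rule survives this dataset [which proves nothing, per Popper]")

if __name__ == "__main__":
    df = pd.DataFrame({
        "approved_at":       pd.to_datetime(["2024-01-03", "2024-01-01"]),
        "goods_received_at": pd.to_datetime(["2024-01-02", "2024-01-05"]),
    })
    falsify(df)
```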
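Next, the pattern-scepticism sketch: cluster the data, then run the exact same procedure on a column-shuffled copy that destroys any joint structure. Only a clear margin over that null baseline hints the ‘pattern’ is more than noise. Cluster count, margin and data are arbitrary illustrations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pattern_worthwhile(X: np.ndarray, k: int = 3, margin: float = 0.1) -> bool:
    """Cluster X, then cluster a structure-destroyed copy; demand a
    clear margin over that null baseline before believing anything."""
    rng = np.random.default_rng(0)
    real = silhouette_score(X, KMeans(k, n_init=10, random_state=0).fit_predict(X))
    # Shuffle each column independently: marginals kept, joint structure gone.
    X_null = np.column_stack([rng.permutation(col) for col in X.T])
    null = silhouette_score(X_null, KMeans(k, n_init=10, random_state=0).fit_predict(X_null))
    print(f"silhouette real={real:.3f} vs null={null:.3f}")
    return real - null > margin

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.normal(size=(300, 4))   # pure noise: no structure at all
    print("worthwhile pattern?", pattern_worthwhile(noise))
```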
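Then, the rule-extraction sketch: squeeze explicit, inspectable rules out of an induced model – here a deliberately shallow decision tree – so the result can feed an expert system instead of remaining an opaque score. The toy iris dataset is purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Induce, then read the induction back out as explicit logic;
# a shallow tree is chosen on purpose so the rules stay human-checkable.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text turns the fitted tree into if/then rules a domain expert
# can vet, amend or reject: the raw material for an expert system.
print(export_text(tree, feature_names=list(iris.feature_names)))
```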
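And the data-fitness gate: a few cheap checks that refuse any analysis until the dataset demonstrably fits the stated problem. Required columns, key and coverage threshold are hypothetical placeholders; the discipline, not the numbers, is the point.

```python
import pandas as pd

def fit_for_problem(df: pd.DataFrame, required: list[str],
                    key: str, min_coverage: float = 0.95) -> bool:
    """Cheap pre-analysis checks: do we even have the right data?"""
    missing = [c for c in required if c not in df.columns]
    if missing:
        print(f"Wrong data: missing columns {missing}")
        return False
    coverage = df[required].notna().all(axis=1).mean()
    if coverage < min_coverage:
        print(f"Wrong data: only {coverage:.1%} of rows complete")
        return False
    if df[key].duplicated().any():
        print(f"Wrong data: duplicate keys in '{key}'")
        return False
    return True

if __name__ == "__main__":
    df = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, 5.0]})
    if fit_for_problem(df, required=["order_id", "amount"], key="order_id"):
        print("OK to analyse")   # only now would any analysis start
    else:
        print("Stop: first get the right data for the problem")
```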
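Finally, the back-of-envelope on training versus production compute. Every number below is a made-up placeholder [substitute your own]; the ~6·N·D and ~2·N FLOP counts are common rules of thumb, not gospel. The shape of the ratio, not the figures, is what matters.

```python
# Back-of-envelope: one-off training compute vs. lifetime inference compute.
# Every number is a hypothetical placeholder; substitute your own.
params         = 10_000_000     # model parameters
train_examples = 50_000_000     # examples seen during training
epochs         = 10
daily_requests = 100_000        # the 'moderate flow' of production transactions
days_in_prod   = 365 * 3

train_flops = 6 * params * train_examples * epochs        # ~6·N·D rule of thumb
infer_flops = 2 * params * daily_requests * days_in_prod  # ~2·N per forward pass

print(f"training : {train_flops:.2e} FLOPs (one-off)")
print(f"inference: {infer_flops:.2e} FLOPs over {days_in_prod} days")
print(f"ratio    : training costs {train_flops / infer_flops:.0f}x the serving bill")
```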
Then, your capacity problems will not aggregate to a global problem.
Also, isn’t this quite the same as what happened, say, a decade ago in storage ..? Demand rose out of control, and couldn’t be supplied with sufficient supply [hmm, semantic ‘logic’] to even keep the gap (linearly) where it was.
Nowadays, is that problem still with us ..? What happened ..?
So yes, ML causes 99½ problems. Compute capacity is ½ one.
For now:
[Ask yourself: Do you feel 5D-computy today, punk ..!? Do You …!?; Toronto]