This idea, or lack thereof, crossed my mind:
When it comes to predictions, following the lead of Tetlock’s Superforecasters may very well work (though note that much of it starts with the sober, sort-of 50-50 mental stance of realizing that one may improve by admitting imprecision, and that those who claim precision or high scoring rates are wrong) … for issues and questions that converge on one, somewhat exactly determinable, outcome. All of this stays within the realm of said book, which is very much recommended, by the way.
Whereas some questions, like “What is the best strategy?”, may not have such a single outcome; the world changes, and (business-wise) having a vision is a grand prediction already. Let alone that the ‘mission’, one’s desired place in that vision of how the world will be in the future, (almost always, without a miss) skips the implicit question of choice: what one’s future place could be within that vaguely defined future state of affairs. Even if you shoot for the moon [and end up in an infinite and infinitely cold vacuum, among the stars but near-infinitely dwarfed by them] and miss, you may end up in a not-first but still pretty comfortable position; no hard feelings. … This, as an explication of what I’d call diverging predictions: wide-ranging future states that you might ‘predict’, but most probably in a vocabulary that will not be valid or understood in the future, so the traceability of your predictions is … quite close to zero, hence your advance predictions have no worth ..! This, of course, is also in the book, but still too often not realised.
Now, let’s combine this with Maister’s Trusted Advisor, let alone simple consultancy …
Oh well. Plus:
[Predicting the quality of the resulting still wines … for second fermentation, mariage, and onwards: priceless; Ployez-Jacquemart]