Einstein the prompt engineer. Or was it Abe ..?

Yes, it was Albert who said: ‘I would spend 55 minutes defining the problem and then five minutes solving it’ – not.

You won’t need even five minutes of plain, simple search to discover that, most probably, this was either ‘Source: Private conversation’ or ‘[debunked]’. The same goes for Abe Lincoln’s ‘Give me six hours to chop down a tree and I will spend the first four sharpening the axe’, which is close but, upon closer inspection, just that slight little bit different – yet also valid, in the same sphere.
But if we’d ask either of them, or some LLM, whether their statements hold true today, they’d all agree. Because that is what’s happening in the LLM space with search-character prompts: ongoing refinement of your context definition(s) till finally you get the answer you were looking for anyway – one that could have been solved with a straightforward search. Though with a well-thought-out one, as the sayings advise.

Which, in particular in the prompting/context case, aligns with the Deming ‘method’ of continuous quality improvement that has been debased to mere Plan-Do-Check-Act organizational slop. But still – at the level of LLM inquiry, this works as it ever would have: by having an idea (Plan), trying a prompt (Do), seeing that it didn’t deliver (Check), and, as the Act part, tuning our context within the very question or by refining the ‘in-line’ context-setting statements (if these two aren’t equivalent, there’s something wrong with the model), we gain in the quality of the answer we get.
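As a minimal sketch of that loop in code – with llm_complete, looks_good and refine_context as hypothetical stand-ins of mine for whatever client call, quality gate and context-tuning you actually use – it could look like this:

```python
# Minimal PDCA-style prompt refinement loop (illustrative sketch only).
# llm_complete, looks_good and refine_context are hypothetical stand-ins
# for your own LLM client, answer-quality check and context tuning.

def llm_complete(system: str, prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    raise NotImplementedError

def looks_good(answer: str) -> bool:
    """Check step: a human eyeball or an automated rubric."""
    raise NotImplementedError

def refine_context(system: str, prompt: str, answer: str) -> tuple[str, str]:
    """Act step: tighten the scene-setting context and/or the question itself."""
    raise NotImplementedError

def pdca_prompt(system: str, prompt: str, max_rounds: int = 5) -> str:
    answer = ""
    for _ in range(max_rounds):                    # Plan: the context you walk in with
        answer = llm_complete(system, prompt)      # Do: try the prompt
        if looks_good(answer):                     # Check: did it deliver?
            return answer
        system, prompt = refine_context(system, prompt, answer)  # Act: tune the context
    return answer                                  # best effort after the budgeted rounds
```

The max_rounds cap is the point: it forces you to decide up front when re-prompting stops paying off.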

Where of course the scene-setting context statements are a halfway railing-in of the context, in quite the same way as a good ‘architectural’ design of the solution space will help your problem definition down the funnel of viable solutions; pre-empting trying just anything as a solution. The better you are at this, the fewer prompt cycles you need down a rabbit hole. No use laying out half of all the parts of a car and then assembling your way, in near-infinite improvement cycles, towards a perfect one.

Somehow, this seems to align with simple text layouting as well. If you do an extensive CSS design up front, you’ll have less need for all too extensive ad hoc inline styling and coding. As with all coding: start wide-net and get mostly garbage; start boxed-in and you’re halfway there. Still needing PDCA rounds, though.

And as a side note: the Law of Requisite Complexity will, in the case of software development, require the requirements setting to handle more bandwidth than the most lean-and-mean solution, as generated, will deliver. To be corrected (loosened up!) through appropriate problem definition(s)… almost the opposite direction of the narrowing-down of your LLM answers.
“Don’t answer immediately. Think deeply, question assumptions, then respond: [topic].”; “Instead of answering, generate the best questions I should be asking: [topic].”; “What mental models apply here and how should I use them: [problem]?”; “What am I missing or underestimating that could change everything: [situation]?”; “Break this down to fundamentals and rebuild the best solution: [problem].”; “Argue against this idea as strongly as possible: [idea].”; “If I wanted 10x better results, what would I do differently: [goal].”; “Design a strategy with clear steps, trade-offs, and risks: [goal].”; “What patterns or trends should I notice in this space: [industry].”; “What’s the single most important action I should take right now: [goal].”; “You told me this yesterday.”; “You’re an IQ 145 [topic] specialist. Analyze my [campaign | …].” [130 = solid. 160 and it starts pulling principles you’ve never heard of]; “Obviously…” and be wrong; “We’re six months into the future and my plan went completely off the rails. Explain where and how.” – you all know them now.

But the reason all those big SAP projects got derailed: they started off with too-narrow functionality definitions, plus all the mandatory changes over the years of implementing, plus the static character of the v0.01 requirements (due to the volume, necessarily fixed at some point), plus the world that changed ever faster around the system… Onto Low Code / No Code we went. More flex, but still…
The same, now with the magic of quick results – sped-up v0.01s.

Towards the quotes: the sharpening of the context is that of the axe, the time spent contexting and re-prompting eerily similar. The results, often, still going into the camp fire (or the burner; not all nerds are outdoorsy types), and the five minutes may well be underestimated once the token processing starts in earnest.
Add the context building as per the project folders, the .md files including about-me.md and anti-ai-style.md, the templates, the data exploration and quality refinements, linking in the right tools, side bars and connectors – you’re on your way, aren’t you, to your 88% shaping of your (re)search/coding – lots of it to be repeated on your next little quest’let. If you think the 12%, the last five minutes, would otherwise cost you much more time, you’re not wrong; but we didn’t yet account for the extensive prompt/answer refinements and re-dos (mushrooming into an overwhelming time frame still, again, anyway) that will be necessary for any non-trivial little objective. The highest-signal context – how to know for sure? The appropriate temperature: how to set that, fine-grained ..? «Never talk about …» – yes, why not quote French-style now. It’s all RAG but you have no control over what goes into the context anyway ..? Hybrid retrieval for keyword matching and semantic inference, (cross-encoder) re-ranking, exponential decay (effective = importance * recency * freshness + relevance_boost, based on auto-importance) to fight context bloat, de-duplication, (embedding-based) context compression for token control with token budget enforcement. You know what needs to be figured out: by trial and much, much error. At least the latter part you’re good at.
And how to establish compliance in border cases?
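For illustration only, a rough sketch of what that decay-plus-budget bookkeeping from the paragraph above could look like – the half-lives, the scoring shape and the crude four-characters-per-token estimate are assumptions of mine, not a prescribed recipe:

```python
# Illustrative sketch: scoring context items with exponential decay and
# enforcing a token budget. Half-lives, weights and the 4-chars-per-token
# estimate are assumptions for demonstration, not a prescribed recipe.
import math
import time
from dataclasses import dataclass

@dataclass
class ContextItem:
    text: str
    importance: float       # auto-importance, say in [0, 1]
    created: float          # unix timestamp of creation
    last_used: float        # unix timestamp of last retrieval/use
    relevance_boost: float  # e.g. from hybrid keyword + semantic retrieval

def decay(age_seconds: float, half_life_seconds: float) -> float:
    """Exponential decay: 1.0 when brand new, 0.5 after one half-life."""
    return math.exp(-math.log(2) * age_seconds / half_life_seconds)

def effective_score(item: ContextItem, now: float) -> float:
    """effective = importance * recency * freshness + relevance_boost"""
    recency = decay(now - item.last_used, half_life_seconds=6 * 3600)       # hours-scale
    freshness = decay(now - item.created, half_life_seconds=7 * 24 * 3600)  # days-scale
    return item.importance * recency * freshness + item.relevance_boost

def pack_context(items: list[ContextItem], token_budget: int) -> list[ContextItem]:
    """De-duplicate, rank by effective score, keep only what fits the budget."""
    now = time.time()
    seen: set[str] = set()
    packed: list[ContextItem] = []
    used = 0
    for item in sorted(items, key=lambda i: effective_score(i, now), reverse=True):
        if item.text in seen:                  # naive exact-match de-duplication
            continue
        tokens = max(1, len(item.text) // 4)   # rough 4-characters-per-token estimate
        if used + tokens > token_budget:
            continue                           # skip; a smaller item may still fit
        seen.add(item.text)
        packed.append(item)
        used += tokens
    return packed
```

Two half-lives – hours for recency, days for freshness – keep a fresh-but-never-used note from outscoring what you actually touched this morning; tune them by, indeed, trial and much error.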

And remember the scope creep over time that will happen due to the Law of said side note.

There are the analogies, or congruences, you needed. And a debunking of the quick-result dream of prompting – Einstein still holds, Abe is cold on average.
Now what?

Well, now you’ll see why it’s not about letting end users just fiddle around with generic IP-/secrets-leaking LLMs, but about hiring professionals to haul you through the nitty-gritty of building serious ‘AI-driven’ applications to transform your business processes and your business altogether. I know a couple…


Contemporary old and new, so close by. Park, no parking.

Maverisk / Étoiles du Nord