
I Started Asking Better Questions.

Stop forcing prompts; start asking questions

Tom Wilson

For a long time, I thought getting great AI output was mostly a writing problem.

Write a tighter prompt. Add more detail. Clarify the format. Add examples. Repeat.

I did that for years. Thousands of prompts. Endless refinements. Sometimes it worked, sometimes it didn't, and when it failed I usually blamed myself for not being precise enough.

Then I noticed a pattern I couldn't ignore.

With older models, I had to front-load almost everything. If I missed a key detail, the output drifted. If I added too much, the model got noisy. Prompting felt like trying to preempt every possible misunderstanding before the work even started.

With newer models—GPT-5, Claude 4.x—my best results came from a different move entirely: I stopped trying to guess the perfect prompt and asked the model what it needed first.

That shift changed everything.

The Moment It Clicked

One clear example: a strategy deck.

I started the way most people do: "Build me a compelling presentation on grocery prices." Then I followed with the usual details—audience, tone, length, structure.

The draft came back usable, but generic. The kind of output that looks polished until you try to use it in a real meeting.

So instead of rewriting the prompt again, I asked a different question:

"What would you need from me to make this genuinely persuasive, not just complete?"

The answer wasn't generic. It asked for specific things I hadn't included: recent regional volatility, pricing behavior by store format, margin pressure by category, and—crucially—what decision I wanted the deck to influence.

That last one was the kicker. I had asked for a presentation but hadn't told it what the presentation was supposed to do.

Once I supplied that context, the next draft had a point of view.

Why This Works Better Now

The key difference isn't that models became magical. It's that the newest models are much better at reasoning about context gaps.

Older systems were overconfident when information was missing. You could ask "What are you missing?" and get a shallow checklist. They didn't reliably identify where uncertainty would hurt output quality.

GPT-5 and Claude 4.x are noticeably stronger here. They can look at a task, infer where ambiguity sits, and tell you which missing inputs matter most.

In practice, that means reverse prompting finally works the way people hoped it would:

  1. You ask the model what it needs
  2. It identifies the real gaps
  3. You fill only the important ones
  4. Output quality jumps with fewer iterations (sketched in code below)
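
To make that concrete, here's a minimal sketch of the loop using the OpenAI Python SDK. The model name, the goal, and the example answers are placeholders, not a prescription; the same two-turn pattern works with any chat API.

```python
# A minimal sketch of the reverse-prompting loop, using the OpenAI Python SDK.
# The model name and the example strings are placeholders; swap in your own.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-5"  # placeholder model name

goal = "Build a persuasive presentation on grocery prices."

# Steps 1-2: ask what the model needs; let it identify the real gaps.
gap_check = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": f"My goal: {goal}\n"
                   "Before drafting anything, list the top three missing "
                   "inputs that would most improve the result.",
    }],
)
gaps = gap_check.choices[0].message.content
print(gaps)  # read the gaps, then answer only the ones that matter

# Steps 3-4: fill the important gaps, then ask for the draft.
answers = "Audience: regional ops leads. Decision: where to hold prices."  # hypothetical
draft = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": f"My goal: {goal}"},
        {"role": "assistant", "content": gaps},
        {"role": "user", "content": f"Here is that context: {answers}\nNow draft it."},
    ],
)
print(draft.choices[0].message.content)
```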

This lines up with what people call context engineering: as context windows grow, quality doesn't come from stuffing in more tokens. It comes from supplying the right context at the right time.

Reverse prompting is a practical way to do that.

How I Use It

These days my workflow starts with capability and context checks, not a giant instruction block.

I ask things like:

  • "Can you access current data, or should I paste source material?"
  • "What assumptions are you making right now?"
  • "What are the top three missing inputs that would most improve this result?"

Then I provide only what closes those gaps.
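
When I catch myself retyping those questions, I fold them into a small reusable preflight. Here's one way to do that in Python; the exact wording is my own phrasing, not a canonical template, and the question text matters far more than the plumbing.

```python
# A reusable "preflight" block to prepend to fuzzy tasks.
# The wording below is one possible phrasing, not a canonical template.
PREFLIGHT = """Before you start:
1. Can you access current data, or should I paste source material?
2. What assumptions are you making right now?
3. What are the top three missing inputs that would most improve this result?
Answer these first, and wait for my replies before drafting."""

def with_preflight(task: str) -> str:
    """Attach the preflight questions to any task description."""
    return f"{task}\n\n{PREFLIGHT}"

print(with_preflight("Build a persuasive presentation on grocery prices."))
```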

What surprised me is how often this reveals my own blind spots. I might think a task is about feature comparison, and the model asks for pricing philosophy. I might think a summary needs breadth, and it asks for a clear decision criterion instead.

Reverse prompting doesn't just improve the model's output. It improves my framing.

Forward vs. Reverse: When to Use Each

I still use direct prompting all the time.

If the task is deterministic—"Return valid JSON with these keys" or "Rewrite this paragraph to 120 words"—forward prompting is faster and cleaner.

But when the task is exploratory, high-stakes, or fuzzy, reverse prompting wins. That's where assumptions hide.

The decision is simple:

  • If I know exactly what good looks like → prompt forward
  • If I'm still discovering what good looks like → prompt in reverse

That distinction has saved me more time than any prompt template I've ever used.
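
If it helps to see the contrast, here are the two modes side by side as plain prompt strings. Both are hypothetical examples of the pattern, not templates to copy verbatim.

```python
# Forward: the spec is fully known, so state it outright.
forward_prompt = (
    "Rewrite this paragraph to 120 words. "
    "Return valid JSON with the keys 'rewrite' and 'word_count'."
)

# Reverse: the spec is still being discovered, so surface the gaps first.
reverse_opener = (
    "I need a competitive analysis, but I'm still deciding what it should "
    "argue. Before drafting anything, ask me the questions whose answers "
    "would most change your output."
)
```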

The Bigger Shift

The old mental model: "How do I get the model to do what I want?"

The better mental model: "What does this model need from me to do this well?"

That sounds subtle, but it changes the relationship from command execution to collaborative diagnosis.

If you've been frustrated by prompt iteration loops, try this once:

Start with the goal, then ask the model to identify the missing context that would most improve the result.

You'll usually discover two things quickly: you've been over-specifying the easy parts and under-specifying the decisive ones.

That's been my experience. Repeatedly.

Not because I became a better prompt writer overnight.

Because newer models got better at showing me what I was missing.
