Reading Time: 5 minutes
Hey Prompt Lover,
Last newsletter we covered what happens when you give AI one large complicated task versus breaking it into a focused sequence of smaller ones. A lot of you tested the Least-to-Most structure on real work and replied with results.
The pattern in those replies was consistent: the sequenced output felt like a different tool had produced it. It hadn't. The tool was the same. The approach was different. |
Today we're still inside Module 4 and we're covering the technique that took that same principle and pushed it further than I expected when I first read about it in The Prompt Report. |
The technique is called Tree-of-Thought. |
And here's the difference between this and everything we've covered so far: every technique we've discussed moves in a straight line.
You give AI a task, it reasons through it, it gives you an answer. |
Forward. Linear. One path. |
Tree-of-Thought doesn't move in a straight line. It branches. |
Instead of following one reasoning path to one conclusion, it generates multiple possible next steps at each decision point, evaluates which branch looks most promising, and continues down the best one. |
It searches for the answer instead of walking toward it. |
For the right type of task, this is not a small improvement. It's a completely different category of output. |
|
Here's Why This Matters |
Most prompts ask AI to commit to a direction before it has evaluated the alternatives. |
You ask a question, the AI picks the most likely answer, reasons toward it, and delivers it with confidence. |
If that first direction was slightly wrong, the entire output is built on a shaky foundation. And because the AI sounds equally confident regardless, you have no way of knowing whether the answer you got was the first path it considered or the best one available.
Tree-of-Thought fixes this by forcing evaluation before commitment. |
Generate multiple paths first. Assess each one against the actual constraints of the problem. Then commit to the best one and go deep. That sequence — branch, evaluate, commit, execute — produces outputs that are more specifically right for your situation rather than generically correct for the average version of your problem. |
The research documents this consistently on tasks involving planning, creative decisions, and problems with multiple viable solutions. On tasks requiring search and multi-step reasoning, Tree-of-Thought significantly outperforms linear Chain-of-Thought. Not because the AI knows more. Because it looked at more options before picking one. |
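That branch-evaluate-commit loop can be sketched in a few lines of code. This is a toy illustration, not an implementation from The Prompt Report: `propose` and `score` are stand-in functions that, in a real pipeline, would each be a call to the model ("give me three next steps", "rate this path against my constraints").

```python
# Minimal sketch of the branch-evaluate-commit loop behind Tree-of-Thought.
# `propose` and `score` are toy stand-ins for model calls.

def tree_of_thought(task, propose, score, depth=3, branching=3):
    """Greedy tree search: at each step, branch, evaluate, keep the best."""
    path = [task]
    for _ in range(depth):
        candidates = propose(path, branching)                  # generate branches
        if not candidates:
            break
        scored = [(score(path + [c]), c) for c in candidates]  # evaluate each
        best = max(scored)[1]                                  # commit to strongest
        path.append(best)                                      # go deep on it
    return path

# Toy demo: "reason" toward a target number, step by step.
target = 40
propose = lambda path, k: [path[-1] + d for d in (-10, 5, 10)][:k]
score = lambda path: -abs(target - path[-1])  # closer to target = better
print(tree_of_thought(0, propose, score, depth=4))  # → [0, 10, 20, 30, 40]
```

The structure is the point: generation is finished before evaluation starts, and evaluation is finished before anything is committed. A linear prompt collapses all three into one pass.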
|
What You'll Learn In This Newsletter |
By the end of this issue, you'll have: |
• A clear explanation of how Tree-of-Thought differs from linear prompting |
• A working template for any planning or decision task |
• The specific task types where this technique produces the biggest improvement |
• Two related techniques from the research worth keeping in your toolkit |
Let's get started. |
|
How Marketers Are Scaling With AI in 2026 |
|
61% of marketers say this is the biggest marketing shift in decades. |
Get the data and trends shaping growth in 2026 with this groundbreaking state of marketing report. |
Inside you’ll discover: |
• Results from over 1,500 marketers centered around results, goals, and priorities in the age of AI
• Stand-out content and growth trends in a world full of noise
• How to scale with AI without losing humanity
• Where to invest for the best return in 2026
|
Download your 2026 state of marketing report today. |
Get Your Report |
|
|
What Most People Do Wrong |
Most people ask AI for the answer. |
Not an answer. The answer. As if there's one correct path and the AI's job is to find it. |
For lookup tasks, that's fine. There is one correct capital of France. There is one correct syntax for a Python function. The answer exists. AI finds it. |
For planning tasks, strategic decisions, creative work, and anything with genuine complexity — there isn't one answer. |
There are multiple viable paths, each with different tradeoffs, each better suited to different constraints. |
When you ask AI for the answer to this type of problem, it gives you the most statistically common one. Which is usually the most generic one. |
Here's what that looks like in practice: |
"What's the best way to grow my consulting business?" |
The AI has seen thousands of answers to this question. It gives you the most common one. Build an audience. Create content. Get referrals. Productize a service. All reasonable. None specific to you, your market, your constraints, or your actual situation. |
Tree-of-Thought doesn't ask for the best way. It asks for multiple viable ways, evaluated against your specific situation, with a reasoned recommendation at the end. |
The difference between those two questions is the difference between generic advice and specific guidance. |
|
Quick Reality Check |
I once ran the same strategic question through a linear prompt and a Tree-of-Thought prompt side by side. The linear prompt took 40 seconds and gave me three bullet points I'd read before on a marketing blog. The Tree-of-Thought prompt took four minutes and gave me a recommendation I hadn't considered that turned out to be the right one. Four minutes versus 40 seconds. On a decision that affected six months of work. That is not a close call. |
|
The Prompt That Works |
▼ COPY THIS PROMPT: |
Task: [Your specific planning or decision problem]
Context: [Your specific situation, constraints, resources, timeline, and goals — be detailed here, the more specific the better]
Instructions: I want you to approach this as a search problem, not a lookup.
Step 1 — Generate branches: Identify three distinct approaches to solving this problem. Not variations of the same approach. Genuinely different paths that could each work. Label them Option A, Option B, and Option C.
Step 2 — Evaluate each branch: For each option, assess: what makes this viable, what are the real weaknesses given my specific context, what does success look like at 90 days, and what's the biggest risk.
Step 3 — Compare and recommend: Based on your evaluation, which option is strongest given my specific constraints? Explain why the other options are weaker for my situation specifically, not in general.
Step 4 — Go deep: Take your recommended option and break down the first three concrete steps to move forward.
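If you run this template on many decisions, it helps to fill it programmatically. Here's a minimal sketch; the `build_tot_prompt` helper is hypothetical, not part of the newsletter's template:

```python
# Hypothetical helper that fills a condensed version of the four-step
# Tree-of-Thought template with a specific task and context.

TEMPLATE = """Task: {task}
Context: {context}
Instructions: I want you to approach this as a search problem, not a lookup.
Step 1 — Generate branches: Identify three distinct approaches. Label them Option A, Option B, and Option C.
Step 2 — Evaluate each branch: For each option, assess viability, real weaknesses given my context, what success looks like at 90 days, and the biggest risk.
Step 3 — Compare and recommend: Pick the strongest option for my constraints and explain why the others are weaker for my situation specifically.
Step 4 — Go deep: Break the recommended option into the first three concrete steps."""

def build_tot_prompt(task: str, context: str) -> str:
    return TEMPLATE.format(task=task.strip(), context=context.strip())

prompt = build_tot_prompt(
    "Decide how to grow my consulting business",
    "Solo consultant, 20 hrs/week available, strong referral network, no audience yet",
)
print(prompt.splitlines()[0])  # → Task: Decide how to grow my consulting business
```

The context argument is where the work is, exactly as the usage steps below stress: the evaluation is only as good as the constraints you pass in.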
|
|
How To Use This Prompt |
Step 1: Copy the template exactly. The four-step structure is doing real work here. Don't collapse it into one instruction. |
Step 2: Spend time on the Context section. This is more important in Tree-of-Thought than in almost any other technique we've covered. The evaluation in Step 2 of the prompt is only as good as the constraints you give it. Vague context produces generic evaluation. Specific context produces specific recommendations. |
Step 3: Let the AI complete all four steps before you respond to anything. Don't interrupt after Step 1 to say which option you prefer. The value is in the evaluation happening before commitment. If you push toward an option before the assessment, you've recreated the sycophancy problem we'll cover in Module 11. |
Step 4: Read the evaluation section carefully before reading the recommendation. Sometimes the recommended option is obvious once you see the evaluation. Sometimes the evaluation surfaces a constraint you hadn't consciously articulated. Both are useful. |
Step 5: If the recommendation doesn't feel right, go back to the context section. Nine times out of ten, a recommendation that feels off means a constraint that wasn't in the prompt. Add it and run again. |
|
Why This Prompt Works |
Tree-of-Thought works because it separates generation from evaluation. |
Most prompts collapse these two things. The AI generates and evaluates simultaneously, which means the first viable option tends to win by default. It's fast but it's not thorough. |
When you force generation first — produce three distinct options before assessing any of them — you prevent premature commitment. The AI can't talk itself into the first reasonable answer because it's required to find two more before it's allowed to evaluate any of them. |
The evaluation step then does something linear prompting can't: it tests each option against your actual constraints rather than against the generic version of the problem. That's where the specificity comes from. Not from the AI knowing more about your situation but from the structure requiring it to compare options against your situation explicitly. |
The research found this structure is especially effective on tasks with more than one viable solution path, tasks where the best answer depends heavily on specific constraints, and tasks where a wrong early commitment leads to significant wasted effort downstream. Which describes most real planning work. |
|
Two Related Techniques Worth Knowing |
Skeleton-of-Thought: |
When speed matters more than depth, this variation generates an outline of the answer first, then fills in each section in parallel rather than sequentially. Faster than full Tree-of-Thought. Better than linear prompting. Use it when you need structure and breadth rather than deep evaluation of competing paths. |
Add this to any prompt: "First generate a skeleton outline of your complete answer with section headers only. Then go back and fill in each section fully." |
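Skeleton-of-Thought's outline-then-fill flow can be sketched as two passes, with the fills running in parallel. Again a toy illustration: `outline` and `fill` are stand-ins for model calls, not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

def skeleton_of_thought(question, outline, fill):
    """Two passes: get section headers first, then fill each one in parallel."""
    headers = outline(question)                # pass 1: skeleton only
    with ThreadPoolExecutor() as pool:         # pass 2: fill sections in parallel
        bodies = list(pool.map(lambda h: fill(question, h), headers))
    return "\n\n".join(f"## {h}\n{b}" for h, b in zip(headers, bodies))

# Toy demo with canned functions standing in for model calls.
outline = lambda q: ["Problem", "Options", "Recommendation"]
fill = lambda q, h: f"({h} section for: {q})"
print(skeleton_of_thought("How should I grow?", outline, fill))
```

The speed gain comes from that second pass: the sections don't depend on each other, so they can be generated at the same time instead of one after another.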
Metacognitive Prompting: |
Before tackling any complex problem, add a five-step warm-up at the top of your prompt. Ask the AI to clarify what the question is actually asking, make a preliminary judgment, evaluate that judgment, confirm the direction, and assess its own confidence before answering. |
It sounds like overhead. On genuinely complex questions it catches misunderstandings in the framing before they become wrong answers at the end. |
Add this line before any complex task: "Before answering, clarify what this question is actually asking, make a preliminary judgment about the answer, evaluate that judgment critically, then give me your final response with a confidence assessment." |
|
Quick Reality Check |
Tree-of-Thought was originally tested in the research on tasks like crossword puzzles and creative writing challenges that required deliberate planning and search. It outperformed standard prompting by margins large enough that the researchers described the gap as striking. Then it got tested on real-world planning problems. Same result. The technique that helped AI solve puzzles turned out to help it make better strategic recommendations. Which makes sense. Planning is search. Whether you're filling a crossword or building a business, the skill is the same: evaluate your options before you commit to one. |
|
The Bigger Lesson Here |
One path to an answer is guessing. Multiple paths evaluated against real constraints is searching. |
Most prompts guess. They take the first reasonable direction and follow it. Sometimes that works. When the stakes are low and the task is straightforward, guessing is efficient. |
When the stakes are real and the task is genuinely complex, searching is worth the extra minutes. Tree-of-Thought is a searching structure. It doesn't find the most common answer. It finds the most appropriate answer for your specific situation. |
That distinction — common versus appropriate — is the difference between AI that produces generic output and AI that produces useful output. |
|
What Changes After Using This |
The first time you use Tree-of-Thought on a real decision, you'll notice that the option the AI recommends is not always the one you expected going in. |
That's the point. If you already knew the answer, you wouldn't need the prompt. |
After a few weeks of using this structure on planning and strategic tasks, you'll develop a habit of asking for options before asking for recommendations. Not just with AI. The structure itself — generate alternatives, evaluate against constraints, then commit — is a better decision-making process than most people use with or without AI. |
The technique teaches a habit. The habit is worth more than the technique. |
|
Try This Right Now |
Find one decision you've been sitting on this week. Something with more than one possible path forward. |
Run the Tree-of-Thought prompt on it. Fill in the context section properly — your specific constraints, timeline, and goals. |
Read the evaluation before you read the recommendation. See if the assessment surfaces anything you hadn't consciously weighed. |
Then ask yourself: is this recommendation different from what you would have gone with before running the prompt? If it is, that difference is the technique working. |
|
What's Coming Next |
Module 4 is done. |
Next we move into Module 5: AI Checking Itself. |
The first newsletter in this module covers Self-Refine — the technique where AI generates an output, critiques that output against specific criteria, and revises it before you ever see it. |
One follow-up prompt that catches errors, fills gaps, and removes generic language without you having to do any of that manually. |
The difference between asking AI to "make this better" and asking AI to find specific problems and fix them is larger than most people realize. We'll cover exactly why and exactly how to structure the critique prompt to get revision that actually improves the work. |
See you then. |
|
Reply With Your Results |
Run Tree-of-Thought on a real decision this week and reply with what the evaluation surfaced. |
Tell me what the three options were. Tell me if the recommended option surprised you. Tell me if anything in the evaluation changed how you were thinking about the problem. |
I read every reply. |
— Prompt Guy |