🔮 You’ve been prompting wrong this whole time
The leaked rulebook that shows how AI labs really prompt

Many of us feel fluent in prompting AI by now, yet we're still frustrated when the results fall short. The issue usually isn't the model; it's how we talk to it. Too often, we fall into what developer Mitchell Hashimoto calls "blind prompting": treating the AI like a helpful colleague instead of instructing it with purpose and structure. In one 2023 study, participants who got a decent result early on assumed their prompt was "done" and never refined it further.

If you hired a new team member and gave them only vague tasks with no feedback, you wouldn't be surprised if they struggled. The same goes for AI. Models need context. They need iteration. And they benefit from review, correction and calibration, just like people do. As AI becomes one of the most important inputs in modern work, it's time to start managing it as such.

There are now dozens of system prompts from labs and startups floating around online. AI labs have invested millions of dollars in developing them. They are all expertly crafted and sophisticated, but Claude 4's is one of the most extensive to date. Claude's system prompt runs to 24,000 tokens, or roughly 9,700 words across 453 sentences. For comparison, the Gemini-2.5-Pro and o3/o4-mini leaks stand at 1,636 and 2,238 words, respectively. For anyone who wants to improve their skills, studying these leaks might be one of the best routes available. Our team has studied Anthropic's internal guide to Claude 4 to identify seven key rules, along with examples (included in the leak and our own in italics), that will enhance your prompting game, guaranteed.

Rule 1: You are an instructor, act like it

You might not realize it, but being specific, with clear, formatted instructions, can dramatically improve your AI results. Even subtle tweaks in wording can improve accuracy by as much as 76%. The Claude system prompt clearly defines that standard responses should be in paragraph format, unless the user specifies otherwise:
There are also numerous stylistic notes instructing Claude precisely how to craft its responses in different contexts, such as:
Conversely, Claude remains thorough with more complex and open-ended questions. And for those seeking more affective conversations, the system prompt reads:
How can we make use of this? When prompting your large language model (LLM), clearly define the AI's role, your exact task and the desired output. This can include specifying the output's style, length and formatting, so you get your desired result more precisely.
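To make this concrete, here is a minimal sketch of Rule 1 using the Anthropic Python SDK. The model name, role and prompt wording are our own illustrative choices, not taken from the leak:

```python
# Rule 1 sketch: spell out the role, the exact task and the desired output format.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment;
# the model name and prompt wording are illustrative, not taken from the leak.
import anthropic

client = anthropic.Anthropic()

system_prompt = (
    "You are a senior financial analyst writing for a non-technical executive team. "       # role
    "Your task is to summarise the quarterly report the user provides. "                    # exact task
    "Respond in three short paragraphs of plain prose, no bullet points, under 200 words."  # output format
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=500,
    system=system_prompt,
    messages=[{"role": "user", "content": "Here is our Q2 report: ..."}],
)
print(response.content[0].text)
```

The same pattern works with any chat API: the point is that role, task and output format are stated explicitly rather than left implied.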
Rule 2: DON'T do that; do use negative examples

It is increasingly well known that providing clear examples of what you want can produce outcomes more aligned with your goal. But describing what not to do can help, too. Negative examples crop up multiple times in the Claude system prompt, such as in the following example:
Alongside this are some examples to guide Claude on where not to use the analysis tool:
In fact, the system prompt uses the word "never" more often than "always": 39 instances of the former versus 31 of the latter. "Always" doesn't even make the top 20 words in the prompt, while "never" is the 13th most frequent. Although these examples of what not to do may seem arbitrary, they help steer the LLM's tool behaviour, leading to faster outputs and fewer tokens spent processing the task. How can we make use of this? Context windows (the amount of text an LLM can attend to at once) keep growing. For complex problems, give the LLM examples of both what you do and do not want, to clarify your expectations and improve accuracy.
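Here is a minimal, self-contained sketch of Rule 2 in Python: a helper that builds a system prompt pairing a positive example with explicit "never" instructions. The task and examples are our own illustrations, not text from the leaked prompt:

```python
# Rule 2 sketch: pair a positive example with explicit "never do this" instructions.
# The task and examples are our own illustrations, not text from the leaked prompt.
def build_system_prompt() -> str:
    do_examples = [
        'Good: "Revenue grew 12% year over year, driven mainly by subscriptions."',
    ]
    dont_examples = [
        'Never open with filler such as "Great question!" or "Certainly!".',
        "Never invent figures that are not present in the source document.",
        "Never use bullet points unless the user explicitly asks for them.",
    ]
    return (
        "You summarise financial documents in plain prose.\n\n"
        "Follow the style of this example:\n" + "\n".join(do_examples) + "\n\n"
        "Avoid these behaviours:\n" + "\n".join(dont_examples)
    )

print(build_system_prompt())
```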
Rule 3: Provide an escape hatch

By now, we all know that LLMs tend to hallucinate if they don't know the right answer. Aware of this, Anthropic has opted to include a line to ensure that Claude can send the user to the best place for the latest information.
Providing an "escape hatch" can help reduce hallucinations by allowing the LLM to admit where its knowledge is lacking instead of attempting an answer (often a very convincing one, even when it is completely wrong). How can we make use of this? Explicitly tell the LLM to say "I don't know" if it lacks the knowledge to comment. You can also provide another "hatch" by adding that it can and should ask for clarifying information when it needs more to give an accurate answer.
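A minimal sketch of Rule 3 in Python: a small helper that appends an explicit escape-hatch instruction to any task prompt. The wording is our own and can be adapted to your task:

```python
# Rule 3 sketch: append an explicit "escape hatch" to any prompt.
# The wording is our own; adapt it to your task.
ESCAPE_HATCH = (
    'If you are not confident in the answer, say "I don\'t know" rather than guessing. '
    "If the request is ambiguous or information is missing, ask a clarifying question "
    "before answering."
)

def with_escape_hatch(task_prompt: str) -> str:
    """Combine a task prompt with the escape-hatch instruction."""
    return f"{task_prompt}\n\n{ESCAPE_HATCH}"

print(with_escape_hatch("What were our top three expense categories in March 2024?"))
```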
Rule 4: Search strategically

Claude spends 6,471 tokens, nearly a third of its system prompt, just instructing itself how to search. ...