Prompting Patterns

Reusable techniques like role prompting, tree-of-thoughts, ReAct, prompt chaining, self-consistency, and negative prompting — with the kind of example you can paste and modify.

Index cards laid out on a workbench showing reusable prompt patterns

Prompting patterns are reusable shapes for how you talk to a model. Each one exists because somebody hit a specific failure mode and found a shape that fixed it. The catalog has grown enough that it is worth knowing what is in it before reinventing anything from scratch.

This page is the landing page for the cluster. Each named pattern has its own deeper article. Read this one first to decide which deeper one you actually need.

What counts as a pattern

A pattern is not a prompt; it is a way of organizing a prompt. "Always write a polite preamble" is not a pattern. "Show the model two worked examples before asking it to do the third" is — it has a shape, a known failure mode it fixes, and reasonable evidence it works.

The patterns covered in this cluster are:

  • Role prompting. Asking the model to adopt a specific role (lawyer, code reviewer, copy editor). Useful when the role implies a register or method. Useless when the role is decoration.
  • Few-shot prompting. Showing 1–5 input/output examples before the real input. Locks in shape, vocabulary, and edge-case handling.
  • Chain-of-thought prompting. Asking the model to write its reasoning before its answer. Was a free lunch on early models. On modern reasoning models it is mostly redundant.
  • Tree-of-thoughts. Branching reasoning paths and scoring them. Useful for hard search problems, expensive elsewhere.
  • ReAct. Interleaving reasoning ("Thought:") and actions ("Action: tool(...)"). The default shape for tool-using agents.
  • Self-consistency. Sampling the same prompt multiple times and majority-voting the answer. A real accuracy lift on benchmark tasks, at a cost.
  • Prompt chaining. Breaking a job into a sequence of prompts where each step's output feeds the next. Often beats one giant prompt.
  • Negative prompting. Explicitly saying what NOT to do. Sometimes essential, often a sign the positive instruction was vague.

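To make one of these concrete: self-consistency is just "sample, then vote." Below is a minimal sketch. The `complete` function is a stand-in for a real model call (it returns canned answers so the sketch runs offline); swap in your provider's client and sample with nonzero temperature.

```python
from collections import Counter

def complete(prompt: str, seed: int) -> str:
    # Stand-in for a real model call, deterministic so the sketch
    # runs without an API. In practice each call is an independent
    # sample from the model at temperature > 0.
    fake_samples = ["42", "42", "17"]
    return fake_samples[seed % len(fake_samples)]

def self_consistency(prompt: str, n: int = 3) -> str:
    # Ask the same question n times and majority-vote the answers.
    answers = [complete(prompt, seed=i) for i in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7? Answer with a number only."))
```

The cost is obvious: n samples for one answer. That is why the pattern earns its keep on benchmark-style tasks with short, checkable outputs, and not on long-form generation.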
Which one for which job

A rough decision tree:

  • The model is giving wrong answers on multi-step tasks → chain-of-thought (if not on a reasoning model) or self-consistency.
  • The output format keeps drifting → few-shot with a strict example, or move to structured output (JSON schema).
  • The model is acting outside its role → tighten the system prompt (see the Fundamentals cluster) or use role prompting with a clear scope.
  • The task needs to call tools → ReAct.
  • The job is too big to fit one prompt or one mental step → prompt chaining.
  • The model keeps doing one specific bad thing → negative prompting as a temporary patch, then redesign the positive instruction.
  • The task has a search-like structure (puzzles, planning) → tree-of-thoughts.

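The "job too big for one prompt" branch is worth a sketch. Prompt chaining means each step gets a small, checkable prompt and feeds only its output forward. Again `complete` is a hypothetical stand-in for a model call, with canned replies so the sketch runs offline; the step prompts are illustrative, not a recommended wording.

```python
def complete(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned answer per
    # step so the sketch runs offline.
    canned = {
        "extract": "deadline: Friday; owner: Dana",
        "draft": "Reminder: Dana owns the Friday deadline.",
    }
    for key, value in canned.items():
        if key in prompt:
            return value
    return ""

def chain(email: str) -> str:
    # Step 1: pull structured facts out of the raw text.
    facts = complete(f"extract the deadline and owner from: {email}")
    # Step 2: draft from the extracted facts only, not the original
    # email, so each step stays small and its output easy to inspect.
    return complete(f"draft a one-line reminder from: {facts}")

print(chain("Hi all, Dana will have the report ready by Friday."))
```

The point of the shape is the seam between steps: you can log, validate, or hand-correct `facts` before the draft step ever runs, which one giant prompt does not let you do.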
What patterns cannot fix

Most production failures are not prompting failures. They are:

  • Information failures. The model lacks the fact. No pattern fixes this; retrieval does.
  • Capability failures. The model genuinely cannot do the task. No pattern fixes this; a different model does.
  • Tooling failures. The model is bad at the API surface around it. No pattern fixes this; better tool definitions and structured output do.

The mistake to avoid is reaching for a pattern when the real problem is one of these three. Patterns are local accuracy lifts; they do not move the model's underlying skill.

How to use this cluster

Each linked article goes one level deeper: what the pattern is, what it actually fixes, when not to use it, and a worked example you can paste. Start with the failure mode you have right now and click into the pattern that matches.

Forthcoming

  • How to Prompt for Structured Output
  • JSON Mode and Schema Prompting
  • How to Make an LLM Cite Its Sources

Where to go next

A short editorial reading list. Pick whichever fits how you like to learn.

  • NerdSip: 5-minute AI micro-course on almost any topic, on iOS and Android