
Prompting By Use Case

Long-form, specific guides for the actual things people prompt for: code, writing, summarization, data extraction, classification, and analysis.

[Illustration: workbench split into zones for code, writing, and data tasks]

The four levers from the fundamentals cluster — instruction, context, examples, output spec — apply to every prompt. What changes per use case is which lever matters most.

This cluster is the per-task playbook. Each linked article goes deep on one use case; this page is the map.

Code

Code prompting leans on context and output spec.

  • Paste the relevant code, not the whole repo.
  • State the constraints up front (Python 3.12, no third-party deps, must not change the public API).
  • Ask for a unified diff or for a specific file you can drop in.
  • Add a one-line "ignore X" if there is an obvious red herring (deprecated function, unrelated comment).

Code prompting also benefits a lot from conversational refinement: instead of writing one giant prompt, expect 2–4 turns where you correct course. Modern code-aware models handle this well.
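
To make that concrete, here is a minimal sketch of such a prompt assembled in Python. The `call_model` helper, the sample code, and the wording are all illustrative stand-ins, not any particular library's API.

```python
# Sketch of a code prompt: relevant context, explicit constraints,
# an "ignore X" note, and a precise output spec.

relevant_code = '''
def parse_config(path):
    """Load settings from a TOML file at `path`."""
    ...
'''  # paste only the code the task touches, not the whole repo

prompt = f"""Refactor parse_config to also accept a file-like object.

Constraints:
- Python 3.12, no third-party dependencies.
- Do not change the public API of any other function.
- Ignore the deprecated load_legacy_config mentioned in comments; it is unrelated.

Code:
{relevant_code}
Return a unified diff only, no commentary."""

# first_draft = call_model(prompt)  # hypothetical client; expect 2-4 turns after this
print(prompt)
```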

Writing

Writing leans on examples and style anchors.

  • Show two or three short examples of the voice you want. Adjectives ("warm but precise") do less than samples.
  • Pin the audience ("software engineers who don't follow politics").
  • Say explicitly what you do not want: no listicles, no emojis, no "let's dive in."
  • If you want structure, give the structure as bullets the model must follow, not as adjectives ("snappy lead, three concrete examples, one-line takeaway").
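
Assembled into one prompt, that might look like the following sketch; the voice samples, topic, and word count are invented for illustration.

```python
# Sketch of a writing prompt: voice samples instead of adjectives,
# a pinned audience, structure as bullets, and explicit don'ts.

voice_samples = """\
"Most teams don't need a vector database. They need grep and a weekend."
"The fix took four lines. Finding the four lines took four days."
"""

prompt = f"""Write a 400-word post on why we merged twelve microservices back into one service.

Audience: software engineers who don't follow infrastructure trends.

Match the voice of these samples:
{voice_samples}
Structure (follow exactly):
- Snappy one-sentence lead.
- Three concrete examples.
- One-line takeaway.

Do not use listicles, emojis, or the phrase "let's dive in."
"""
print(prompt)
```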

Summarization

Summarization leans on instruction and output spec.

  • Be specific about purpose ("summarize for a CFO who hasn't seen this product before" beats "summarize this").
  • Pin the length (word count or bullet count). Otherwise the model averages toward "medium."
  • Tell the model what is NOT in the source if relevant ("do not include action items; this is a status doc, not meeting notes").
  • Ask for the summary in a known shape (TLDR + 3 bullets + 1 risk) that downstream readers can scan.
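
Put together, a summarization prompt along these lines might look like this sketch; the audience and document are placeholders.

```python
# Sketch of a summarization prompt: explicit purpose, pinned length,
# and a fixed shape that downstream readers can scan.

source_doc = "...paste the status doc here..."

prompt = f"""Summarize the document below for a CFO who has not seen this product before.

Format, exactly:
TLDR: one sentence.
Then 3 bullets, max 20 words each.
Risk: one line.

Do not include action items; this is a status doc, not meeting notes.

Document:
{source_doc}"""
print(prompt)
```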

Data extraction

Extraction leans on output spec, hard. You almost always want JSON.

  • Provide a JSON schema or a representative example.
  • Specify what to do when a field is missing (null vs. omit vs. "unknown").
  • Use few-shot for the tricky fields. One example of an ambiguous case beats three paragraphs of rules.
  • Use a JSON-mode-aware model. Frontier models support strict schemas; lean on that rather than on natural-language formatting.
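
Here is a sketch of those pieces together in Python: a representative example in place of a formal schema, an explicit missing-field rule, and one few-shot example for the tricky case. The field names and invoice text are invented, and `call_model` is a hypothetical stand-in for your client.

```python
import json

# Sketch of an extraction prompt with a representative example,
# a missing-field rule, and a few-shot example for the hard case.

example_record = {
    "invoice_number": "INV-1042",
    "total": 129.50,
    "due_date": "2025-03-01",  # ISO 8601, or null if absent
}

invoice_text = "Invoice INV-2001, total $88.00. Payment expected within 30 days."

prompt = f"""Extract one JSON object from the invoice text below.

Match this shape exactly:
{json.dumps(example_record, indent=2)}

If a field is missing from the text, set it to null. Do not omit keys.

Tricky case:
Text: "Payment terms to be confirmed."
Output: {{"invoice_number": null, "total": null, "due_date": null}}

Invoice text:
{invoice_text}

Return only the JSON object."""

# record = json.loads(call_model(prompt))  # hypothetical client call
print(prompt)
```

If your provider exposes a JSON mode or strict schema enforcement, pass the schema there as well; the in-prompt example then serves as documentation rather than the only guardrail.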

Classification

Classification leans on examples and label discipline.

  • Make the label set explicit and tight. "Bug, feature, other" beats "various categories."
  • Provide one canonical example per label, especially the edge cases ("this looks like a bug report but is really a refund request").
  • Ask for the label only — no explanation, no probability, no prose. Parsers thank you.
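
A sketch of that in Python; the labels, examples, and `call_model` helper are illustrative.

```python
# Sketch of a classification prompt: tight label set, one canonical
# example per label including the edge case, and label-only output.

LABELS = {"bug", "feature", "other"}

ticket_text = "Checkout button does nothing on Android 14."

prompt = f"""Classify the support ticket as exactly one of: bug, feature, other.

Examples:
"App crashes when I rotate the screen." -> bug
"Please add dark mode." -> feature
"The app double-charged me, I want a refund." -> other (reads like a bug report, but is a refund request)

Ticket: {ticket_text}

Respond with the label only. No explanation."""

# label = call_model(prompt).strip().lower()  # hypothetical client call
# assert label in LABELS, f"unparseable label: {label!r}"
print(prompt)
```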

Analysis

Analysis is the use case where prompting alone is often not enough. If you want the model to reason about a body of data and produce a non-obvious conclusion:

  • Decompose. Ask first for the data shape, then for hypotheses, then for the analysis.
  • Give the model the tools to verify (search, code execution, citation).
  • Treat the first output as a draft to argue with, not as an answer.
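
As a sketch, the decomposition might be scripted as a short sequence of turns; `call_model` and its `history` parameter are hypothetical stand-ins for whatever chat client you use.

```python
# Sketch of decomposed analysis: three separate turns instead of one
# giant prompt, each building on the previous answer.

steps = [
    "Describe the shape of this dataset: columns, types, ranges, obvious gaps.",
    "Given that shape, propose three hypotheses worth testing and what evidence each needs.",
    "Test the most promising hypothesis against the data and show your working.",
]

# history = []
# for step in steps:
#     reply = call_model(step, history=history)  # hypothetical signature
#     history.append((step, reply))
for i, step in enumerate(steps, 1):
    print(f"Turn {i}: {step}")
```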

Where to go next

Pick the use case that matches what you are doing today. Each deeper article has a paste-and-modify starter prompt for that task type, plus the failure modes specific to that use case.

Forthcoming

  • How to Prompt for Translation
  • How to Prompt for Research Synthesis
  • How to Prompt for Brainstorming Without Slop
