What to learn first
Prompting Fundamentals
The mechanics every prompt rests on: system vs. user messages, context windows, zero- and few-shot patterns, templates, and how to give a model what it needs to answer well.
Take me to the Fundamentals hub
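As a taste of the Fundamentals material, here is a minimal sketch of how system messages, few-shot examples, and the user query fit together. The role/content dict shape follows the common chat-API convention; the task, labels, and examples are illustrative, not from any particular provider's docs.

```python
# System message sets the task; few-shot pairs show the expected
# input/output shape; the real query goes last.
SYSTEM = ("You label customer feedback as POSITIVE, NEGATIVE, or NEUTRAL. "
          "Reply with the label only.")

FEW_SHOT = [
    ("The checkout flow was painless.", "POSITIVE"),
    ("App crashes every time I open settings.", "NEGATIVE"),
]

def build_messages(user_input: str) -> list[dict]:
    """Assemble system prompt + few-shot examples + the real query."""
    messages = [{"role": "system", "content": SYSTEM}]
    for text, label in FEW_SHOT:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("Delivery took three weeks.")
```

The examples double as a format contract: because every assistant turn is a bare label, the model is far more likely to answer with a bare label too.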
Prompting Patterns
Reusable techniques such as role prompting, tree-of-thoughts, ReAct, prompt chaining, self-consistency, and negative prompting, each with examples you can paste and modify.
Take me to the Patterns hub
- How to prompt for structured output
- JSON mode and schema prompting
- How to make an LLM cite its sources
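The structured-output articles above boil down to one habit: state the exact JSON shape in the prompt, then validate the reply before trusting it. A hedged sketch, with a hand-written `validate_reply` helper and illustrative field names (not from any SDK), using a hard-coded string where the model's reply would be:

```python
import json

# Describe the exact shape you want, and forbid surrounding prose.
SCHEMA_PROMPT = """Extract the event as JSON with exactly these keys:
{"name": string, "date": string (YYYY-MM-DD), "city": string}
Return only the JSON object, no prose."""

REQUIRED_KEYS = {"name", "date", "city"}

def validate_reply(raw: str) -> dict:
    """Parse the model's reply and fail loudly if the schema is off."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# Simulated model reply, for demonstration only:
reply = '{"name": "PyCon", "date": "2026-05-14", "city": "Pittsburgh"}'
event = validate_reply(reply)
```

The validation step is the point: JSON mode makes output parseable, but only your own check makes it conform to the schema you asked for.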
Prompting By Use Case
Long-form, specific guides for the actual things people prompt for: code, writing, summarization, data extraction, classification, and analysis.
Take me to the Use Cases hub
- How to prompt for translation
- How to prompt for research synthesis
- How to prompt for brainstorming without slop
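The common thread in the use-case guides is pinning down constraints (length, audience, format, fidelity) instead of asking generically. A minimal illustrative sketch for the summarization case; the wording and defaults are assumptions, not a recommended canonical prompt:

```python
def summarization_prompt(text: str, max_bullets: int = 3) -> str:
    """Build a summarization prompt with explicit length, audience,
    and fidelity constraints."""
    return (
        f"Summarize the text below in at most {max_bullets} bullet points "
        "for a non-technical reader. Preserve any numbers exactly.\n\n"
        f"TEXT:\n{text}"
    )

prompt = summarization_prompt("Revenue rose 12% year over year.")
```

Each constraint targets a failure mode: the bullet cap fights rambling, the audience line fights jargon, and the numbers clause fights silent rounding.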
Local Models
Running LLMs on your own hardware: how the stack works, which runtimes to pick, what quantization actually changes, and which open-weight models are genuinely usable right now.
Take me to the Local hub
- Best local LLMs snapshot (2026-05)
- GPU vs. CPU for local LLMs
- VRAM requirements for popular models
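What quantization changes can be seen with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bits per weight divided by eight. This sketch covers weights only; KV cache and runtime buffers add more on top, and the figures are estimates, not measured requirements.

```python
def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate VRAM for model weights alone, in GiB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# A 7B model: roughly 13 GiB at fp16, roughly 3.7 GiB at ~4.5 bits
# (4-bit formats carry some metadata overhead per block).
fp16 = weight_vram_gb(7, 16)
q4 = weight_vram_gb(7, 4.5)
```

This is why 4-bit quantization is the usual entry point for consumer GPUs: it cuts weight memory to under a third of fp16 before you touch anything else in the stack.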
Model Benchmarks
Honest head-to-heads between frontier and open-weight models. We disclose the prompts, the temperature, the seed, and the limits — every comparison is timestamped.
Take me to the Bench hub
- Best coding LLM snapshot (2026-05)
- Best writing LLM snapshot (2026-05)
- Best summarization LLM snapshot (2026-05)
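The disclosure behind each comparison can be thought of as a record that fixes everything needed to rerun it. A sketch of that idea; the field names and values here are illustrative, not a published spec:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)  # frozen: a disclosed run should never mutate
class EvalRun:
    model: str
    prompt_id: str      # which disclosed prompt was used
    temperature: float
    seed: int
    timestamp: str      # ISO 8601, since every comparison is timestamped

run = EvalRun(model="example-model", prompt_id="coding-v3",
              temperature=0.0, seed=1234, timestamp="2026-05-01T00:00:00Z")
record = asdict(run)
```

Fixing prompt, temperature, and seed is what turns a head-to-head into something a reader can reproduce rather than take on faith.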
Release Radar
A dated, sourced tracker of new and rumored model releases. Every claim is tagged Confirmed, Strong signal, or Speculation, with a link back to the primary source.
Take me to the Radar hub
- Model release confidence labels explained
- Primary sources for LLM rumors
- How leak quality changes over time