Most "AI model release" coverage on the wider web is a downstream cite of a downstream cite. Someone tweets a screenshot of an article about a leak; an aggregator quotes the tweet; another newsletter quotes the aggregator. By the third hop, dates are off, model names are slightly wrong, and the original post — which had the actual facts — never gets clicked.

The fix is to read primary sources directly. There are only a few per lab, and they rarely change. Bookmark them once and the rest of the noise stops mattering.

The hosted-API labs

Anthropic (Claude)

Anthropic posts model releases, capability announcements, and research papers on its news page. The API docs are the canonical source for which models are currently callable.

OpenAI

OpenAI publishes release posts in two places: the general news index and a separate index for partner and product-integration announcements. The docs page is what to check for "is GPT-5.x the same model I'm calling today?"

Google DeepMind (Gemini, Gemma)

Gemini and Gemma releases land on the DeepMind blog and the corresponding model pages. Gemma model cards live under the google organization on Hugging Face.

The open-weight labs

These labs almost always release through a combination of an official lab page and a Hugging Face organization. Subscribe to the HF org and you see every new weight as it lands; the sketch below shows one way to poll for that programmatically.
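
A minimal sketch of that polling, using the official huggingface_hub Python client. The org names are real but arbitrary examples; substitute the labs you actually follow.

    from huggingface_hub import list_models

    # Example orgs -- substitute the open-weight labs you track.
    ORGS = ["meta-llama", "mistralai", "Qwen"]

    for org in ORGS:
        # Newest-first listing: a fresh weight drop surfaces at the top.
        for model in list_models(author=org, sort="lastModified", direction=-1, limit=5):
            print(org, "->", model.id)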

If you build a single RSS feed from these, you have a release tracker more accurate than any third-party aggregator on the web.
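
A minimal sketch of that merge, using the feedparser library. The feed URLs below are placeholders, not the labs' real endpoints; drop in whichever RSS/Atom URLs each source actually publishes.

    import time
    import feedparser

    # Placeholder URLs -- substitute each lab's real RSS/Atom feed.
    FEEDS = [
        "https://example.com/lab-a/feed.xml",
        "https://example.com/lab-b/rss",
    ]

    entries = []
    for url in FEEDS:
        for e in feedparser.parse(url).entries:
            # Some feeds omit dates; undated entries sort to the bottom.
            published = getattr(e, "published_parsed", None)
            ts = time.mktime(published) if published else 0.0
            entries.append((ts, e.title, e.link))

    # One merged, newest-first release tracker.
    for ts, title, link in sorted(entries, reverse=True)[:20]:
        print(title, "->", link)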

Two pages every model engineer should bookmark

Outside the labs themselves, these two pages are unreasonably useful:

  • arxiv.org (and cs.CL specifically) — when a lab ships a technical report alongside a model, this is where it lands first. The model is sometimes already public on Hugging Face by the time the arXiv preprint clears moderation. A polling sketch follows this list.
  • Hugging Face's trending page — a real-time indicator of which model weights are getting attention. Not a substitute for a release feed, but a useful "what should I look at" filter.
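
For the arXiv side, the public Atom API can stand in for a feed. A minimal polling sketch for the newest cs.CL submissions; the endpoint and query parameters come from arXiv's documented API, and feedparser handles the Atom response like any other feed.

    import feedparser

    # arXiv's public Atom API: newest cs.CL submissions first.
    URL = (
        "http://export.arxiv.org/api/query"
        "?search_query=cat:cs.CL"
        "&sortBy=submittedDate&sortOrder=descending&max_results=10"
    )

    for entry in feedparser.parse(URL).entries:
        # entry.link points at the abstract page for the preprint.
        print(entry.published, "-", entry.title.replace("\n", " "))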

What to filter out

Avoid building your tracker around:

  • Aggregated "best LLM 2026" pages. Almost always SEO content with stale information, screenshots cited as sources, and no hands-on testing.
  • Anonymous "insider" posts. No primary artifact, no trail to follow.
  • Leaderboards that do not disclose their evals. Numbers without methodology are decoration.

Subscribe to the labs, ignore the rest, and the modern AI release calendar becomes tractable.