About ModelBeat

What we do

ModelBeat aggregates pricing data from hosted inference providers for open-weights large language models. We track per-token input and output prices, context windows, rate limits, and throughput figures across providers like Together AI, Fireworks AI, DeepInfra, OpenRouter, and Groq — and we update that data daily.

Our workload calculator lets you enter your actual token volumes and constraints, then ranks every provider by projected monthly cost. No sign-up is required, and the data is freely available.
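The ranking step is simple in principle: multiply your monthly token volumes by each provider's per-token prices and sort. A minimal sketch, with invented provider names and illustrative per-million-token prices (not ModelBeat's actual data or code):

```python
def monthly_cost(price_in, price_out, tokens_in, tokens_out):
    """Projected monthly cost in USD, given per-million-token prices."""
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# Illustrative per-million-token prices (USD); real figures change daily.
providers = {
    "provider-a": {"in": 0.90, "out": 0.90},
    "provider-b": {"in": 0.60, "out": 1.20},
}

# Example workload: 500M input tokens, 50M output tokens per month.
ranked = sorted(
    providers.items(),
    key=lambda kv: monthly_cost(kv[1]["in"], kv[1]["out"], 500e6, 50e6),
)
# ranked[0] is the cheapest provider for this workload.
```

Because input tokens usually dominate, a provider with a cheaper input rate can win even with a pricier output rate, as in this example.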

Data sources and methodology

Pricing data is scraped from each provider's public pricing page once per day. We do not access any private APIs or authenticated endpoints. All data on ModelBeat reflects publicly listed prices — the same prices any user would see before signing up.

When a provider publishes throughput or latency benchmarks on its pricing or documentation pages, we include those figures. We flag any data point whose confidence is below 1.0 — for example, when we estimate an effective per-token price for a per-second-billed provider from its published throughput numbers.
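One way to derive that kind of estimate: divide a per-second billing rate by the published tokens-per-second figure to get an effective per-token price. A hypothetical sketch — the function name, the 0.8 confidence value, and the dictionary shape are all illustrative, not ModelBeat's actual schema:

```python
def estimate_per_token_cost(price_per_second, tokens_per_second):
    """Estimate an effective per-token price from per-second billing
    plus a published throughput figure. Confidence is set below 1.0
    because real throughput varies with load and batch size."""
    per_token = price_per_second / tokens_per_second
    return {"usd_per_token": per_token, "confidence": 0.8}
```

Any data point produced this way would be flagged in calculator results, since its confidence is below 1.0.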

Scraping policy and bot identification

Our scraper identifies itself with the following User-Agent string:

ModelBeatBot/1.0 (+https://modelbeat.dev/about)

We respect robots.txt and limit ourselves to one request per second per provider. Cached responses are stored locally for 24 hours to avoid unnecessary load on provider servers.

If you operate a provider and would like to be added, removed, or have corrections made to your data, please open an issue on GitHub.

Accuracy and freshness

Prices change without notice. We flag data older than seven days with a warning on calculator results, and each data page shows a “last verified” timestamp. If you need guaranteed real-time pricing, always check the provider's pricing page directly — we link to it from every provider page.

If any data point has changed by more than 50% relative to the previous scrape without a corresponding announcement from the provider, we do not publish the snapshot; such large changes are held for manual review instead.
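The two checks described in this section — the seven-day staleness flag and the 50% unannounced-change hold — reduce to a pair of small predicates. A minimal sketch under assumed names (the actual pipeline's structure is not shown here):

```python
STALE_AFTER = 7 * 24 * 3600     # flag data older than seven days (seconds)
MAX_UNANNOUNCED_CHANGE = 0.50   # hold >50% swings for manual review

def is_stale(last_verified, now):
    """True if a data point's last-verified timestamp is over 7 days old."""
    return now - last_verified > STALE_AFTER

def needs_review(old_price, new_price, announced=False):
    """Hold the snapshot if a price moved more than 50% relative to the
    previous scrape and the provider made no corresponding announcement."""
    if announced or old_price == 0:
        return False
    return abs(new_price - old_price) / old_price > MAX_UNANNOUNCED_CHANGE
```

With these rules, a price jump from $1.00 to $1.60 per million tokens (a 60% change) would block publication pending review, while the same jump accompanied by a provider announcement would pass through.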

Open source

The scraper, normalization logic, and calculator are open source. View the repository on GitHub.