DeepSeek Coder V2 Instruct — pricing, providers, and benchmarks
DeepSeek Coder V2 Instruct (236B total / 21B active parameters) was the first open-weights code model to credibly compete with GPT-4o on HumanEval and LiveCodeBench when it launched in mid-2024. It supports 338 programming languages and offers a 128K context window suitable for whole-repo refactors. Hosted pricing typically ranges from $0.14/$0.28 per 1M input/output tokens on budget providers to $0.50/$1.00 on premium providers. It is best deployed for code-specific workloads where a general-purpose model like Llama 3.3 70B falls short: autocomplete, code review, bug-fix generation, and multi-file refactors. For mixed-purpose agents that do some coding plus chat, a general model like Llama 3.3 70B is usually easier to operate than splitting workloads across models.
Provider pricing
Sorted by total monthly cost for 100M input + 10M output tokens.
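The monthly total is simple arithmetic over per-1M-token rates. A minimal sketch, using the indicative budget and premium rates quoted above (actual provider pricing varies):

```python
# Hypothetical rate tiers drawn from the ranges quoted in the intro;
# real providers each publish their own ($/1M input, $/1M output) pair.
RATES = {
    "budget": (0.14, 0.28),
    "premium": (0.50, 1.00),
}

def monthly_cost(input_tokens: float, output_tokens: float,
                 in_rate: float, out_rate: float) -> float:
    """Total dollar cost for one month at the given per-1M-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# The table's reference workload: 100M input + 10M output tokens.
for tier, (in_rate, out_rate) in RATES.items():
    print(f"{tier}: ${monthly_cost(100e6, 10e6, in_rate, out_rate):.2f}")
```

At the reference workload this spans roughly $16.80 on a budget provider to $60.00 on a premium one, which is why per-provider sorting matters.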
No pricing data available for this model yet.
Frequently asked questions
How much does it cost to run DeepSeek Coder V2 Instruct for 100M tokens?
Pricing varies by provider. At the indicative rates quoted above ($0.14/$0.28 to $0.50/$1.00 per 1M input/output tokens), a workload of 100M input + 10M output tokens runs roughly $17 to $60 per month; check per-provider rates for an accurate estimate of your specific usage pattern.
What is the cheapest provider for DeepSeek Coder V2 Instruct?
No pricing data is available yet. Check back after the next scrape run.
What context window does DeepSeek Coder V2 Instruct support?
DeepSeek Coder V2 Instruct supports a context window of 131,072 tokens. Individual providers may cap this lower — see the pricing table for per-provider context limits.
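For whole-repo prompts it is worth a pre-flight check that the input plus expected output fits inside 131,072 tokens. A rough sketch, assuming the common 4-characters-per-token heuristic (not the model's actual tokenizer; use your provider's tokenizer for exact counts):

```python
# DeepSeek Coder V2 Instruct's advertised context window, in tokens.
# Individual providers may cap this lower.
MODEL_CONTEXT = 131_072

def fits_in_context(prompt: str, max_new_tokens: int = 4096,
                    chars_per_token: float = 4.0) -> bool:
    """Estimate whether prompt + generation headroom fits the window.

    chars_per_token = 4.0 is a rough heuristic, not DeepSeek's tokenizer.
    """
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_new_tokens <= MODEL_CONTEXT

# ~100K estimated prompt tokens + 4K generation headroom fits; ~150K does not.
print(fits_in_context("x" * 400_000))  # True
print(fits_in_context("x" * 600_000))  # False
```

Prompts that fail the check can be split per file or per directory before being sent for a multi-file refactor.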