The Evolution of DeepSeek: From DeepSeek-Coder to R1 and V3 — A History of China's Open-Source AI

DeepSeek has grown from a coding-focused lab in 2023 into one of the world’s most talked-about open-source AI companies, with DeepSeek-R1 and DeepSeek-V3 competing with top proprietary models. This article traces its development—from DeepSeek-Coder and DeepSeek-LLM through MoE, V2, R1, and V3—and how it reshaped the global conversation about open-source AI.

Release Timeline & Major Milestones

| Date | Milestone | Significance |
| --- | --- | --- |
| July 2023 | DeepSeek founded | Founded in Hangzhou; backed by High-Flyer |
| Nov 2023 | DeepSeek-Coder / DeepSeek-LLM | First public models: code generation and a general-purpose LLM |
| Jan 2024 | DeepSeek-MoE | Mixture-of-Experts architecture (Base and Chat variants) |
| 2024 | DeepSeek-Math, V2, Coder-V2, V2.5 | Math reasoning, efficiency, and coding upgrades |
| Dec 2024 | DeepSeek-V3 | 671B-parameter MoE with 37B active per token; open weights; low training cost |
| Jan 2025 | DeepSeek-R1 | Reasoning model; MIT license; iOS/Android app topped the U.S. App Store free chart |
| 2025 | V3.1, V3.2, R1-0528 | Agent capabilities, reasoning upgrades, claims of GPT-5-level performance |

2023–2024: Coder, LLM, and MoE

[Image: DeepSeek-Coder and DeepSeek-LLM laid the foundation for later models]

DeepSeek was founded in July 2023 in Hangzhou, backed by High-Flyer. In November 2023 it released DeepSeek-Coder for code generation and DeepSeek-LLM for general language tasks. In January 2024 it added DeepSeek-MoE (Mixture of Experts) in Base and Chat variants, followed over the year by DeepSeek-Math, DeepSeek-V2, DeepSeek-Coder-V2, and DeepSeek-V2.5, each improving efficiency, reasoning, or coding performance.

DeepSeek-V3 (December 2024)

DeepSeek-V3, released with open weights in December 2024, is a 671B-parameter MoE model that activates only 37B parameters per token. Architectural choices such as Multi-head Latent Attention (MLA) and fine-grained expert routing allowed it to be trained at a fraction of the cost of comparable proprietary models, showing that efficient scaling can compete with much larger closed systems.
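To make the "671B total, 37B active" distinction concrete, here is a minimal, hypothetical sketch of top-k expert routing in NumPy. The expert count, top-k value, and dimensions are toy numbers chosen for illustration, not DeepSeek-V3's actual configuration; the point is only that the router selects a small subset of experts per token, so most parameters sit idle on any given forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: E experts exist, but only K run per token.
# All sizes are illustrative, not DeepSeek-V3's real configuration.
E, K, D = 8, 2, 16                              # experts, active experts, hidden size
experts = [rng.standard_normal((D, D)) for _ in range(E)]
gate_w = rng.standard_normal((D, E))            # router weights

def moe_forward(x):
    """Route one token x (shape [D]) to its top-K experts and mix their outputs."""
    logits = x @ gate_w                          # router scores, shape [E]
    topk = np.argsort(logits)[-K:]               # indices of the K highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                     # softmax over the selected experts only
    # Only K of the E expert matrices are used, so roughly K/E of the
    # layer's parameters are "active" for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, topk))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape)  # each token still produces a full hidden vector
```

In a full model the same idea applies per layer: total parameter count grows with the number of experts, while per-token compute tracks only the active subset.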

DeepSeek-R1 and the 2025 Breakout

In January 2025, DeepSeek released DeepSeek-R1, a reasoning-focused model trained with reinforcement learning to produce extended chains of thought. It was open-sourced under the MIT license and shipped as a free chatbot app on iOS and Android. The app briefly topped the U.S. iOS App Store free chart, highlighting strong demand for capable, open AI. R1 was later updated (e.g., R1-0528) with improved reasoning and fewer hallucinations.

V3.1 and V3.2

DeepSeek-V3.1 improved agent and tool-use capabilities while spending fewer reasoning tokens. DeepSeek-V3.2 was positioned as competitive with top proprietary models on mixed reasoning and agent tasks. Together, the V3 line and R1 established DeepSeek as a leading open-source alternative for coding, reasoning, and assistants.

Summary

DeepSeek’s path from a 2023 coding and LLM lab to V3 and R1 in 2024–2025 shows how fast open-source AI has advanced. Understanding this history helps you choose the right DeepSeek model for chat, coding, or reasoning and see how it fits into the broader open-source landscape.

Key Takeaways

  • DeepSeek-Coder and DeepSeek-LLM (Nov 2023) were the first public models; MoE and V2 followed in 2024
  • DeepSeek-V3 (Dec 2024) delivered an open-weight 671B-parameter MoE trained at unusually low cost
  • DeepSeek-R1 (Jan 2025) added strong reasoning and a chart-topping app; V3.1/V3.2 extended agent capabilities and performance
  • Major model weights are openly released (R1 under the MIT license), enabling self-hosting and broad adoption

Try DeepSeek on FuseAITools for chat, coding, and analysis in one place.

Disclaimer: Release dates and product details are based on public information and may be updated by DeepSeek. This article is for educational and informational purposes.