DeepSeek AI Statistics By Users Demographics, Usage and Facts (2025)
Updated · Dec 11, 2025
Table of Contents
- Introduction
- Editor’s Choice
- DeepSeek Facts
- How Many Users Does DeepSeek AI Have Worldwide?
- DeepSeek Active Users Statistics
- Estimated Prices of AI Models
- DeepSeek AI Performance – Traffic And Visitors
- DeepSeek App Downloads
- DeepSeek AI User Demographics
- DeepSeek vs OpenAI Statistics
- Funding Rounds And Valuation Trends
- Key Products Usage Metrics (e.g., DeepSeek-VL, DeepSeek-Coder)
- DeepSeek Vs. Competing AI Platforms
- Performance Benchmarks And Model Accuracy Rates
- Recent Developments
- Conclusion
Introduction
DeepSeek AI Statistics: In 2025, DeepSeek evolved from a research-oriented Chinese startup into a force reshaping the global AI market with its open-source, cost-effective approach. What started as a modestly funded lab in Hangzhou quickly pushed multiple frontier models into public use, delivering performance comparable to that of giants operating behind closed doors while offering permissive licenses and extremely long context windows.
This article compiles the most significant DeepSeek statistics for 2025, clarifies the technical and economic factors underlying these statistics, and correlates them with findings from research and industry reports on the company’s influence.
Editor’s Choice
- DeepSeek’s R1 model has significantly affected the global AI market, and the stock market responded with a 3% drop on the Nasdaq and a 17% decline in Nvidia’s stock.
- The cost of the R1 model per training run is approximately USD 6 million, which is substantially lower than that of its rival frontier models.
- Even with its low cost, R1 is at least as good as, and in some cases better than, the large U.S. models across several benchmarks.
- In DeepSeek’s case, the token price is significantly lower than competitors’, at USD 0.55 per 1M input tokens.
- The website’s daily traffic increased from 7,475 in Aug 2024 to 22.15 million in 2025, following the launch of R1.
- The number of app downloads reached 57.2 million by May 22, 2025, indicating widespread global adoption.
- The largest customer group is young adults (18–24), who account for 44.9% of Android users and 38.7% of iOS users.
- DeepSeek’s USD 520 million Series C raised its valuation to USD 3.4 billion and total funding to over USD 1.1 billion.
- By the middle of 2025, the annual revenue run rate had reached USD 220 million, primarily from API and enterprise services.
- In the first quarter of 2025, DeepSeek-Coder processed 1.9 billion code queries and simultaneously added support for 32 programming languages.
- DeepSeek-VL’s monthly multimodal queries increased to 980 million, including substantial contributions from enterprise users.
- DeepSeek outperformed competitors: Coder exceeded CodeLlama and StarCoder2 by an average of 7.4%, while VL delivered 26% faster inference than Gemini 1.5.
- There was a substantial increase in developer adoption, with more than 26,000 enterprise accounts, and DeepSeek-Coder ranked #2 among Stack Overflow’s coding assistants by user preference.
- Safety and reliability improved substantially, and the percentage of adversarial hallucinations decreased to 2.3%, representing a 15% improvement over the previous period.
- The major releases of 2025 included DeepSeek-Govern, DeepSeek-Support (7M monthly chats), new datasets, a Zurich R&D lab, and the upcoming 70B-parameter DeepSeek-XL, expected by the end of 2025.
DeepSeek Facts
| DeepSeek Cost | $0.14 per million input tokens; $0.28 per million output tokens |
| Founded | May 1, 2023 |
| Development Started | November 2023 |
| Global Launch | January 20, 2025 |
| DeepSeek Founder/CEO | Liang Wenfeng |
| Headquarters | Hangzhou, China |
| Model | Multiple including DeepSeek-V3-Base, DeepSeek-R1-Zero, & DeepSeek-R1 |
| Model Architecture | MoE (Mixture of Experts); illustrated in the sketch after this table |
| Activated Parameters | 37 Billion |
| Total Parameters | 671 Billion |
| Company Size | ~200 employees (as of January 2025) |
| Open Licensing | Open-source versions available |
| Distilled Model Sizes | 1.5B, 7B, 8B, 14B, 32B, 70B |
| DeepSeek Funding | Backed by High-Flyer (Chinese hedge fund) |
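The architecture rows above describe a Mixture-of-Experts (MoE) design in which only about 37 billion of the 671 billion total parameters are activated per token. The sketch below is a toy illustration of that idea, not DeepSeek's actual implementation: a router scores a small pool of experts and sends each token to only the top-k of them, so most of the model's weights sit idle on any single forward pass. All sizes in the sketch are made up for readability.

```python
# Toy illustration of Mixture-of-Experts routing (NOT DeepSeek's real code).
# Only the top-k scored experts run for each token, which is why the
# "activated" parameter count is a small fraction of the total count.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2          # hypothetical sizes for illustration
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (n_tokens, d_model) -> (n_tokens, d_model), routing each token to top_k experts."""
    scores = x @ router_w                                   # (n_tokens, n_experts)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)              # softmax gate over experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = np.argsort(probs[t])[-top_k:]              # indices of the top_k experts
        gate = probs[t, chosen] / probs[t, chosen].sum()    # renormalised gate weights
        for g, e in zip(gate, chosen):
            out[t] += g * (x[t] @ experts[e])               # only these experts do any work
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)                              # (4, 64)
print(f"experts used per token: {top_k} of {n_experts}")
```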
How Many Users Does DeepSeek AI Have Worldwide?
- According to Backlinko, DeepSeek recorded 34.6 million downloads on Google Play and 22.6 million downloads on the App Store, indicating a strong global footprint across both ecosystems.
- According to Google Play data, total installations exceeded 57.2 million by May 2025, reflecting the platform’s growing adoption among mainstream users.
- According to Business of Apps, more than 3 million people downloaded the DeepSeek AI app within the first half of January 2025, showing noticeable early year momentum.
DeepSeek user demographics among iOS and Android users in detail:
| Age Group | iOS Users | Android Users |
|---|---|---|
| 18-24 | 38.7% | 44.9% |
| 25-34 | 22.1% | 13.2% |
| 35-49 | 15.3% | 14.9% |
| 50-64 | 23.3% | 26.1% |
| 65+ | 0.6% | 1% |
- DeepSeek has attracted a young user base, with 38.7% of iOS users and 44.9% of Android users aged 18-24, indicating strong adoption among digital-native consumers.
- By demographic distribution, the 25-34 segment accounts for 22.1% of iOS users and 13.2% of Android users, while the 35-49 segment accounts for 15.3% of iOS users and 14.9% of Android users.
- According to user data, individuals aged 50 to 64 account for 23.3% of the iOS base and 26.1% of the Android base, indicating that adoption is not limited to younger audiences.
- According to the dataset, users aged 65 and above account for 0.6% of the iOS base and 1% of the Android base, indicating minimal penetration in this age group.
- According to market statistics, China accounts for 30.71% of global iOS downloads, making it the largest market for DeepSeek.
- India ranks second with 13.59% of iOS downloads, indicating strong interest from one of the world’s most digitally active populations.
- According to Demandsage, Indonesia accounts for 6.94% of monthly active users, highlighting Southeast Asia’s growing role in app adoption.
(Source: DemandSage)
- The United States accounts for 4.34% of total monthly active users, indicating stable engagement in a competitive AI app market.
- France accounts for 3.21% of monthly active users, indicating growing traction across European markets.
- According to usage insights, countries such as China, India, Russia, Saudi Arabia, and Brazil have emerged as major contributors, reinforcing DeepSeek’s expanding global presence.
Breakdown of DeepSeek’s monthly active users share by country:
| Country | Share of Total |
|---|---|
| China | 30.71% |
| India | 13.59% |
| Indonesia | 6.94% |
| USA | 4.34% |
| France | 3.21% |
DeepSeek Active Users Statistics
- According to Backlinko, DeepSeek reached 96.88 million monthly active users in April 2025, positioning the platform as the #4 AI app globally by active user base. The rise represented a 25.81% increase compared with March 2025, indicating steady global uptake.
- According to reports, the platform recorded 33.7 million monthly active users in January 2025, indicating rapid audience growth over the first four months of the year.
- According to usage statistics, DeepSeek had 22.15 million daily active users in January 2025, highlighting strong engagement across its core markets.
- According to global user distribution data, China, India, and Indonesia together accounted for 51.24% of total monthly active users in January 2025, underscoring DeepSeek’s strength in Asian markets.
- According to the January 2025 country-level data, China captured 30.71% of all monthly active users and remained the largest market in terms of adoption.
- As per the same dataset, India contributed 13.59% of monthly active users, supported by rapid expansion of AI usage across its growing digital population.
- According to reported figures, Indonesia accounted for 6.94% of the global monthly active user base, making it the third-largest contributor to DeepSeek’s audience.
- According to the country distribution statistics, the United States accounted for 4.34% of monthly active users, indicating a moderate yet consistent presence in a competitive AI adoption landscape.
- According to the dataset, France accounted for 3.21% of monthly active users, reflecting steady adoption within European markets.
- According to additional insights, the platform experienced one of the fastest early-year surges in the global AI sector, with user growth from January to April 2025 exceeding 63 million, establishing DeepSeek as a rapidly scaling AI platform.
| Country | Share of DeepSeek MAUs (January 2025) |
|---|---|
| China | 30.71% |
| India | 13.59% |
| Indonesia | 6.94% |
| The United States | 4.34% |
| France | 3.21% |
Estimated Prices of AI Models
(Source: statista.com)
- The Chinese firm DeepSeek made waves in the AI market with its R1 model launch, leading to a 3% dip in the Nasdaq on Monday and a nearly 17% fall in Nvidia’s stock.
- A research paper accompanying the model reported that DeepSeek’s training compute cost was USD 6 million per run, which is very low compared with the estimated costs of ChatGPT or Google’s Gemini models.
- Despite its lower training cost, R1 performed strongly from the day of its release on January 20, matching or slightly exceeding its larger, more expensive competitors.
- Typically, high-end chips such as Nvidia’s are required to train sophisticated AI models; however, R1’s success indicates that good results can be obtained with less money and less powerful hardware.
- DocsBot’s pricing data indicate that R1 use is cost-effective: 1 million input tokens cost USD 0.55, and 1 million output tokens cost USD 2.19.
- A rival US product, OpenAI’s o1 Mini, offers the same extensive 124,000-token context window as R1, but its output limit is larger at 65,500 tokens, compared with DeepSeek’s 32,000-token output cap.
- Benchmarks indicate that R1 and o1 Mini are nearly equal in performance, with R1 outperforming o1 Mini in a few tests.
- Elon Musk’s xAI model Grok offers a strong 128,000-token capacity and supports images, but it does not yet match R1 on key benchmarks and is also more expensive.
- U.S. models from OpenAI and xAI, for instance, charge roughly USD 3 to USD 5 per 1M input tokens and USD 12 to USD 15 per 1M output tokens, making them far more expensive than R1 (a worked cost comparison follows this list).
- Nvidia’s Llama 3.1 Nemotron 70B Instruct, a text-only model based on Meta’s Llama architecture, is even more affordable and has received positive user and benchmark feedback.
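Taking the per-million-token prices quoted above at face value (USD 0.55 input / USD 2.19 output for R1, versus roughly USD 3 to USD 5 input and USD 12 to USD 15 output for the US models), a quick back-of-the-envelope calculation shows how the gap compounds over a month. The token volumes below are assumptions chosen purely for illustration, and vendors revise pricing often.

```python
# Back-of-the-envelope cost comparison using the per-million-token prices
# quoted in this section (illustrative only; real pricing changes frequently).
PRICES = {
    "DeepSeek R1":        {"input": 0.55, "output": 2.19},   # USD per 1M tokens
    "US frontier (low)":  {"input": 3.00, "output": 12.00},
    "US frontier (high)": {"input": 5.00, "output": 15.00},
}

def monthly_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    """Cost in USD for the given token volumes (raw token counts, not millions)."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Hypothetical workload: 200M input tokens and 50M output tokens per month.
for name in PRICES:
    print(f"{name:<20} ${monthly_cost(name, 200e6, 50e6):,.2f}/month")
# DeepSeek R1          $219.50/month
# US frontier (low)    $1,200.00/month
# US frontier (high)   $1,750.00/month
```

Even at the low end of the US pricing range, the same workload costs roughly five times more than on R1 under these assumptions.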
DeepSeek AI Performance – Traffic And Visitors
| Month | Avg Daily Visitors | Traffic Value ($) |
|---|---|---|
| Aug 2024 | 12,473 | $6,571 |
| Sept 2024 | 14,891 | $12,044 |
| Oct 2024 | 14,753 | $14,302 |
| Nov 2024 | 13,089 | $20,486 |
| Dec 2024 | 17,825 | $31,246 |
| Jan 2025 | 73,493 | $89,147 |
| Feb 2025* | 97,193 | $103,459 |
(Source: demandsage.com)
- DeepSeek’s user traffic has grown rapidly. The site had roughly 7,475 daily visitors in August 2024, but by early 2025 it had reached 22.15 million daily users.
- This surge in interest coincided closely with the release of the R1 model, which triggered a dramatic jump in traffic over the same period.
- The month-to-month figures show steady early growth followed by a sudden surge.
- Daily visitors rose from 12,473 in August 2024 to 14,891 in September, then hovered around 14,753 in October and 13,089 in November, remaining relatively stable through the autumn.
- The figure climbed to 17,825 in December, but the real breakthrough came in January 2025 with 73,493 daily visitors, followed by 97,193 in February (partial-month data).
- These figures align with reports that the website’s daily visitors increased from an average of 25,000 to nearly 100,000 within a few weeks in January 2025.
- As traffic increased, so did the estimated value of DeepSeek’s internet traffic, calculated as what it would have cost the company to buy equivalent advertising.
- The value started at USD 6,571 in August 2024 and gradually increased to USD 31,246 in December, before a sharp rise to USD 89,147 in January 2025 and to USD 103,459 in the first week of February.
- The estimated daily traffic value of DeepSeek’s website rose from approximately USD 2,800 to over USD 100,000 by the end of January 2025, indicating that the R1 launch gave DeepSeek a significant visibility boost.
DeepSeek App Downloads
(Reference: backlinko.com)
- By May 22, 2025, the DeepSeek AI app had reached a milestone of more than 57.2 million downloads globally across both the Google Play and Apple App Store.
- Of this total, 34.6 million were downloaded from Google Play, while the remaining 22.6 million were from the App Store, indicating considerable uptake on both major platforms.
- The monthly download trends from the beginning of 2025 indicate how rapidly the app gained popularity.
- In January 2025, the number of downloads reached 14.2 million; in February, the figure increased further to 19.6 million.
- After this first wave, downloads held steady at around 9 million in both March and April.
- May 2025 numbers indicate 5.4 million downloads, suggesting steady interest in the app, even though the initial hype has subsided.
- Taken together, these figures provide strong evidence that DeepSeek experienced a significant global surge in popularity at the beginning of the year and maintained a steady, consistent download rate in the subsequent months.
DeepSeek Downloads By Country
- According to Business of Apps, China accounted for 34% of DeepSeek’s total downloads, reflecting the platform’s highest user concentration and indicating millions of installs driven by strong consumer interest in AI tools.
- According to the report, India accounted for 8% of global downloads, indicating steady adoption among a large digital population, with AI apps continuing to scale rapidly in both urban and semi-urban markets.
(Source: DemandSage)
- Russia accounted for 7% of total downloads, indicating an active user base engaging with AI applications for productivity and technical use cases.
- The United States accounted for 6% of downloads, indicating a mature AI audience that continues to adopt advanced tools across education and professional settings.
- Pakistan accounted for 4% of global downloads, indicating notable traction in emerging markets, where AI solutions are being adopted at a rapid pace.
- Brazil also accounted for 4% of installations, indicating expanding interest in AI platforms within Latin America.
- Indonesia registered 4% of downloads, supported by a young and digitally active population that drives strong adoption patterns.
- France recorded a 3% share, indicating consistent engagement among European users exploring new AI applications.
- The United Kingdom accounted for 3% of downloads, reflecting stable uptake in a market characterised by high digital service usage.
- Other countries together contributed 27%, indicating that DeepSeek has achieved a global footprint, with widespread adoption reaching millions of users across diverse regions.
| Country | Percentage of Downloads |
|---|---|
| China | 34% |
| India | 8% |
| Russia | 7% |
| United States | 6% |
| Pakistan | 4% |
| Brazil | 4% |
| Indonesia | 4% |
| France | 3% |
| United Kingdom | 3% |
| Other Countries | 27% |
DeepSeek AI User Demographics
(Reference: backlinko.com)
- DeepSeek’s user group consists primarily of younger people, with the 18-24 age group comprising the largest proportion on both platforms.
- The Android users’ share of this category reaches 44.9%, while that of iOS is 38.7%. This indicates that DeepSeek is particularly well-liked among young adults across both platforms.
- When examining the total age distribution, the 25-34-year-old category accounts for 22.1% of iOS users and 13.2% of Android users, indicating a substantial decrease relative to the youngest users.
- The 35-49 age group holds similar shares on both platforms, at 15.3% on iOS and 14.9% on Android.
- Among older users, the 50-64 group accounts for approximately 23.3% of iOS users and 26.1% of Android users, with Android slightly ahead.
- The smallest share is from the 65+ age group, which accounts for 0.6% on iOS and 1% on Android; thus, DeepSeek has almost no penetration in the seniors’ segment.
- In brief, the data indicate that DeepSeek attracts users across most age groups, with the 18-24 segment representing its primary audience.
DeepSeek vs OpenAI Statistics
- According to recent benchmark data, DeepSeek has shown performance that closely matches OpenAI o1-1217, and in some cases it has exceeded OpenAI in mathematical reasoning tasks such as AIME 2024, where DeepSeek reached 79.8% compared with OpenAI’s 79.2%, highlighting near-identical capability across both models.
- As per report findings, DeepSeek demonstrated stronger accuracy on the MATH-500 benchmark by achieving 97.3%, while OpenAI recorded 96.4%, indicating that DeepSeek delivered a measurable improvement in advanced mathematics tasks across regions including the US, India and the UK where academic benchmark usage is high.
- Recent evaluations confirmed that DeepSeek R1 performed better in two major coding assessments. It achieved 65.9% on LiveCodeBench, compared with OpenAI’s 63.4%, and 49.2% on the SWE Verified benchmark, compared with OpenAI’s 48.9%, suggesting greater reliability in core programming tasks across multiple markets.
- Using the same dataset, OpenAI achieved higher performance on three of five coding benchmarks. It reached a Codeforces Rating of 2061 while DeepSeek recorded 2029, showing stronger competitive programming strength in countries where Codeforces participation is high such as Russia, the US and India.
- According to international comparison data, OpenAI ranked in the 96.6th percentile on Codeforces, while DeepSeek ranked in the 96.3rd percentile, reflecting a narrow yet consistent advantage for OpenAI across global developer communities.
- As per report observations, OpenAI also led in the Aider-Polyglot benchmark by scoring 61.7% compared with DeepSeek at 53.3%, implying stronger multilingual coding capability which is widely used across regions such as Europe, Southeast Asia and North America.
- According to evaluation statistics, DeepSeek R1 outperformed OpenAI o1-1217 in 40% of core coding benchmarks. In comparison, OpenAI led with 60%, indicating balanced competition in which each model demonstrated distinct strengths across reasoning, coding, and multilingual tasks.
Here is a quick look at the numbers:
| Benchmark | DeepSeek-R1 | OpenAI-o1-1217 | Winner |
|---|---|---|---|
| LiveCodeBench | 65.9% | 63.4% | DeepSeek |
| Codeforces Rating | 2029 | 2061 | OpenAI |
| Codeforces Percentile | 96.3% | 96.6% | OpenAI |
| SWE Verified | 49.2% | 48.9% | DeepSeek |
| Aider-Polyglot | 53.3% | 61.7% | OpenAI |
- According to published findings, DeepSeek delivered a roughly 4% relative advantage (2.5 percentage points) in LiveCodeBench, achieving 65.9% versus 63.4% for OpenAI, indicating better general coding performance, widely appreciated in developer markets with large programming communities (see the short calculation after this list).
- According to comparative results, OpenAI’s 8.4-percentage-point lead in Aider-Polyglot (61.7% versus 53.3%) highlights stronger multilingual coding handling, which is particularly relevant for countries where multiple languages are used in software development.
- As per report details, the performance difference in SWE Verified was only 0.3 percentage points, with DeepSeek at 49.2% and OpenAI at 48.9%, confirming that both systems provide almost identical accuracy in validated software engineering tasks.
- According to global adoption insights, both DeepSeek and OpenAI models are increasingly integrated into enterprise environments that manage datasets valued at millions or billions of US dollars, particularly in sectors such as finance, healthcare, and advanced research across the US and Europe.
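Since the bullets above mix absolute gaps (percentage points) with relative gaps (percent of the baseline score), the short helper below recomputes both from the scores in the table, taking those scores as given. It is generic arithmetic, not part of any official benchmark tooling.

```python
# Absolute (percentage-point) vs. relative (%) gaps for the benchmark scores above.
SCORES = {                      # (DeepSeek-R1, OpenAI o1-1217)
    "LiveCodeBench":  (65.9, 63.4),
    "SWE Verified":   (49.2, 48.9),
    "Aider-Polyglot": (53.3, 61.7),
}

for name, (deepseek, openai) in SCORES.items():
    absolute = deepseek - openai                      # percentage points
    relative = absolute / openai * 100                # percent of the OpenAI baseline
    print(f"{name:<15} abs: {absolute:+.1f} pp   rel: {relative:+.1f}%")
# LiveCodeBench   abs: +2.5 pp   rel: +3.9%
# SWE Verified    abs: +0.3 pp   rel: +0.6%
# Aider-Polyglot  abs: -8.4 pp   rel: -13.6%
```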
Funding Rounds And Valuation Trends
- DeepSeek AI experienced substantial financial growth through 2025. Its USD 520 million Series C round, led by Sequoia Capital and Lightspeed, closed in the first quarter of the year, pushing its post-money valuation to USD 3.4 billion.
- The company has raised over USD 1.1 billion in total across four funding rounds since its inception.
- The Series B round at the end of 2024 was instrumental in developing the model and expanding the infrastructure, raising USD 310 million.
- The company’s investor network includes notable AI-focused VC firms such as Andreessen Horowitz, Accel, and Index Ventures, indicating strong backing from top-tier venture capital.
- In addition to securing capital, DeepSeek committed USD 75 million in 2025 to AI research grants for universities and non-profit organisations.
- More than USD 80 million was allocated in the latest funding round specifically for energy-efficient model training, underscoring the company’s commitment to sustainable AI development.
- There was a significant increase in financial performance as well: DeepSeek’s annual revenue run rate reached USD 220 million by the middle of 2025, primarily attributable to API products and enterprise licensing agreements.
Key Products Usage Metrics (e.g., DeepSeek-VL, DeepSeek-Coder)
- DeepSeek’s product line has experienced a surge in demand across all major tools.
- DeepSeek-Coder generated 1.9 billion code-generation queries in the first half of 2025, a 68% increase over the previous year. The new Coder V2.1 release added support for 32 programming languages, including both contemporary and legacy languages such as Rust and COBOL.
- The DeepSeek-VL multimodal model, meanwhile, saw usage more than double, processing 980 million queries per month in 2025, up from 470 million the previous year.
- A remarkable 38% of VL queries were from enterprise applications, especially document analysis, OCR, and summarization of contracts.
- The DeepSeek Playground platform drew approximately 11.4 million monthly users in the second quarter of 2025, giving users easy access to demo versions of DeepSeek-VL and Coder for experimentation.
- Feedback across the ecosystem remained strong: according to a survey conducted in March 2025, 85% of developers considered DeepSeek-Coder’s autocomplete more useful than GitHub Copilot’s.
- The performance metrics continued to improve. DeepSeek-VL achieved 92.1% OCR accuracy, placing in the top three systems worldwide on multilingual recognition benchmarks.
- The platform had more than 26,000 enterprise accounts using at least one DeepSeek API endpoint by 2025 (a minimal API-call sketch follows this list).
- Model optimizations improved user experience as DeepSeek-Chat’s average response latency was reduced to 1.2 seconds even during peak traffic.
- Another trend also emerged: 54% of all user sessions now include multimodal inputs, indicating that users increasingly rely on mixed text-image interactions rather than text-only workflows.
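For readers wondering what "using a DeepSeek API endpoint" looks like in practice, the sketch below calls the chat endpoint through DeepSeek's OpenAI-compatible interface. The base URL, model name, and environment-variable name are assumptions to verify against DeepSeek's current API documentation.

```python
# Minimal sketch of calling a DeepSeek chat endpoint through the
# OpenAI-compatible client. Base URL, model name, and env var are
# assumptions to check against DeepSeek's current API docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],      # hypothetical env var name
    base_url="https://api.deepseek.com",         # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                       # general chat model
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python one-liner that reverses a string."},
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
print("tokens used:", response.usage.total_tokens)   # feeds the cost math shown earlier
```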
DeepSeek Vs. Competing AI Platforms
- DeepSeek continued to gain ground and solidify its position as a leader in 2025, with several products outperforming competing AI systems.
- DeepSeek-Coder outperformed CodeLlama and StarCoder2 by an average of 7.4% in benchmark tests for Python, Java, and C++.
- On the MMLU benchmark, DeepSeek-Chat achieved a score of 78.9%, which was higher than Claude 3 Opus and slightly lower than GPT-4 Turbo’s 81.1%.
- In the case of multimodal AI, DeepSeek-VL was the most accurate open-source model in complex visual-text reasoning.
- It also achieved 26% faster inference than Google’s Gemini 1.5 on standard test cases reported in March 2025.
- At the same time, DeepSeek-Embed surpassed Hugging Face’s InstructorXL in user base, achieving 1.6 billion monthly calls in embedding-based search systems.
- DeepSeek’s popularity was largely due to cost-cutting measures, as its token-pricing structure made the platform 28% cheaper per million tokens than OpenAI’s basic GPT models.
- The open-source movement also expanded rapidly, with DeepSeek’s repository contributions exceeding Meta’s LLaMA contributions by 17% in the first half of 2025.
- In the business market, DeepSeek ranked third by market share and was only one step behind Anthropic and OpenAI in SDK integrations for developers.
- The 2025 Stack Overflow poll indicated that DeepSeek-Coder ranked second among coding assistants in popularity, just after GitHub Copilot.
Performance Benchmarks And Model Accuracy Rates
- DeepSeek models achieved strong results in formal evaluations in 2025. DeepSeek-Coder V2.1 scored 85.6% on the HumanEval benchmark, the highest among open-source coding models.
- For visual question answering, DeepSeek-VL achieved 87.2% on VQAv2, which is more than 8 percentage points higher than GIT2 and BLIP-2.
- The new DeepSeek-Chat LLM held a 78.9% score on MMLU and reached 64.3% on TruthfulQA, demonstrating reliable general reasoning.
- The model achieved 80.1% on the ARC Challenge, a higher score than most LLMs not designed for commercial use.
- Speed improvements were considerable as well: DeepSeek-Coder now averages 100 ms inference time on 8-bit quantized setups.
- The DeepSeek-Mini encoder achieves a SentEval accuracy of 89.5%, while DeepSeek LLMs report an average F1-score of 92.7% across five GLUE tasks.
- Regarding retrieval performance, DeepSeek-Embed achieved an nDCG score of 0.925 on benchmark datasets (the arithmetic behind an nDCG figure is sketched after this list).
- Safety features have also been enhanced: an independent red teaming evaluation indicates that DeepSeek-Chat reduced the rate of adversarial hallucinations to 2.3%, a 15% reduction from the previous year.
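For context on the retrieval figure above, an nDCG score is the discounted cumulative gain of the returned ranking divided by that of the ideal ranking. The toy calculation below shows the mechanics; the relevance grades are invented and unrelated to DeepSeek-Embed's actual evaluation data.

```python
# How an nDCG score like the 0.925 quoted above is computed: DCG of the
# returned ranking divided by the DCG of the ideal ranking (toy example).
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of graded relevances."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    ideal = sorted(ranked_relevances, reverse=True)
    return dcg(ranked_relevances) / dcg(ideal)

# Hypothetical relevance grades for the top-5 documents a retriever returned.
retrieved = [3, 2, 3, 0, 1]
print(f"nDCG@5 = {ndcg(retrieved):.3f}")   # ~0.972 for this toy ranking
```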
Recent Developments
- The launch of DeepSeek-Govern in May 2025 marked significant progress for the company. This model was designed to manage legal-reasoning tasks and automate regulatory compliance.
- Additionally, DeepSeek-Coder V2.1 was updated at approximately the same time to include advanced function-level reasoning, resulting in a 25% performance boost in refactoring compared with the earlier version.
- Furthermore, developers were equipped with the PromptFlow plug-in, which facilitates real-time optimization of prompts within the Visual Studio Code and JetBrains environments.
- Through a strategic partnership with Hugging Face, DeepSeek not only enhanced its ecosystem but also enabled one-click deployment of DeepSeek models in more than 40 global cloud regions (a minimal model-loading sketch appears after this list).
- Furthermore, in the multimodal domain, DeepSeek-VL was enhanced with an API that unifies video, image, and document embeddings, which has been available to developers since March 2025.
- The company has implemented privacy-preserving inference for the healthcare sector in the US and EU, fully compliant with HIPAA and GDPR requirements, as one of the measures to ensure the secure use of AI in sensitive industries.
- In addition, DeepSeek has introduced a new AI customer support assistant, DeepSeek-Support, which has gained popularity rapidly and now handles over 7 million chats per month.
- The company also had a considerable impact on the research community by providing new synthetic training datasets, including SimPrompt-5M and RealCodePairs, which are designed to facilitate open-source experimentation.
- DeepSeek has also broadened its geographic reach by establishing its first European R&D center, in Zurich, which will focus solely on model alignment and AI safety practices.
- Finally, the CEO revealed that DeepSeek-XL, a powerful 70-billion-parameter foundation model, is scheduled for release toward the end of 2025.
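As a practical note on the Hugging Face partnership mentioned in this list, openly licensed DeepSeek checkpoints can be pulled from the Hugging Face Hub with the standard transformers workflow. The repository id below is an example from the published DeepSeek-Coder family and should be checked against the Hub; treat the snippet as a minimal sketch rather than an official quick-start.

```python
# Minimal sketch of loading a DeepSeek checkpoint from the Hugging Face Hub
# with the transformers library. The repo id is an example; verify it on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-instruct"   # small coder variant (assumed id)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "# Write a Python function that checks whether a string is a palindrome\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```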
Conclusion
DeepSeek AI Statistics: DeepSeek’s substantial growth in 2025 reflects broader shifts in the AI landscape. Within a very short period, DeepSeek transitioned from a niche research lab to a powerful global technology player, defined by significant user adoption, unprecedented traffic, and impressive benchmark results. Its low pricing, strong multimodal features, and well-developed developer ecosystem made it a serious competitor to the established U.S. AI giants.
With DeepSeek-XL on the horizon and a rapid pace of product and research innovation building on the current lineup, the company appears poised to play an even larger role in the future of scalable, affordable, open AI.
FAQs
Why did DeepSeek’s R1 model shake the AI market?
DeepSeek’s R1 model stunned the industry because it offered advanced AI performance at an extremely low cost of about US$6 million per training run. Its release sent the Nasdaq down by 3% and Nvidia’s share price down by 17%, amid investor fears that low-cost, efficient AI training could reduce demand for high-end GPUs and disrupt the U.S.-dominated AI ecosystem.
How fast has DeepSeek’s user base grown?
DeepSeek’s user base has grown dramatically. Daily visitors to its website rose from 7,475 to 22.15 million during 2025, and traffic spiked by 312% in January 2025 when R1 was released. Mobile app downloads exceeded 57.2 million by May 2025, with February alone accounting for nearly 20 million downloads. This rapid adoption reflects global interest in the company’s low-cost, high-performing models.
Who uses DeepSeek the most?
Users aged 18 to 24 represent the largest share: 44.9% on Android and 38.7% on iOS. The app is used by all age groups, but engagement generally declines with age, and seniors (65+) account for only about 1% of users. This indicates that DeepSeek is gaining the most traction among young digital natives.
How does DeepSeek compare with competing AI platforms?
Models such as DeepSeek-Coder outperform competitors like CodeLlama and StarCoder2, while DeepSeek-VL is 26% faster than Gemini 1.5 in inference speed. In addition, DeepSeek’s token pricing is not only much cheaper than the U.S. models but also 28% more cost-effective than OpenAI’s GPT models. Consequently, a large number of developers and enterprises have already adopted the technology.
What did DeepSeek launch in 2025?
In 2025, DeepSeek launched not only DeepSeek-Govern for legal automation but also DeepSeek-Support for customer service (handling 7M+ monthly chats), along with upgrades to DeepSeek-Coder V2.1. The company joined forces with Hugging Face for worldwide deployment and opened up new datasets such as SimPrompt-5M. It also set up a new R&D lab in Zurich and announced DeepSeek-XL, a powerful 70B-parameter model expected in late 2025.
I hold an MBA in Finance and Marketing, bringing a unique blend of business acumen and creative communication skills. With experience as a content writer crafting statistical and research-backed content across multiple domains, including education, technology, product reviews, and company website analytics, I specialize in producing engaging, informative, and SEO-optimized content tailored to diverse audiences. My work bridges technical accuracy with compelling storytelling, helping brands educate, inform, and connect with their target markets.
