Qwen AI Statistics By Features, Models, Users, Country, Website Traffic, Adoption, Trends And Facts (2026)

Written by Maitrayee Dey

Updated · Jan 23, 2026

Edited by Aruna Madrekar (Editor)

Introduction

Qwen AI Statistics: Qwen AI, developed by Alibaba Cloud, is a family of powerful language models. It helps with chatting, writing code, and working in many languages. Moreover, the company provides open-source versions, enabling researchers and developers to test, improve, and quickly build useful real-world apps. The Qwen2.5 range spans small (0.5 billion) to large (72 billion) models, and most versions support a context window of up to 128K tokens for long documents. Training has also scaled enormously, to tens of trillions of tokens, which helps the models score better on common benchmarks.

Some newer versions can handle up to 1 million tokens at once, making long files, big reports, and many documents easier to work with. Additionally, high download numbers indicate that developers are actively using and improving Qwen. All signs indicate a rapidly growing, reliable model family.

Editor’s Choice

  • According to Grabon.in, Qwen AI models have exceeded 20 million downloads across major platforms, including Hugging Face and GitHub.
  • Qwen AI supports 29+ languages, making it well-suited for users worldwide.
  • On 2025-09-24, Qwen3-Max was released and was reported to have more than 1 trillion parameters.
  • In 2025, Iraq leads Qwen AI user share with 27.52%, followed by Brazil at 19.08%.
  • As of December 2025, the Qwen app reached 18.34 million monthly active users in only 2 weeks.
  • According to Similarweb, qwenlm.ai ranks #172,593 globally, up 16,090 positions.
  • In November and December 2025, qwen.ai recorded 32.8 million total visits per month.
  • Meanwhile, qwenlm.ai’s top traffic sources are Russia (29.96%, +0.14%) and the United States (12.71%, +155.3%).
  • qwenlm.ai’s audience is slightly more female than male, with 51.33% women and 48.67% men.
  • Qwen-Max supports up to 32,768 tokens and costs USD 0.0016 per 1,000 input tokens and USD 0.0064 per 1,000 output tokens.
  • According to Grabon.in, Qwen’s AI services have reached more than 2.2 million corporate users via DingTalk, Alibaba’s workplace collaboration platform.

Brief Overview of Qwen AI

Metric Description
Built by Developed by Alibaba Cloud (part of Alibaba Group).
First beta Beta testing began in April 2023 under the name “Tongyi Qianwen.”
Key milestone In July 2024, media coverage highlighted Qwen’s benchmark standing.
Stable release Qwen2.5-Max launched on January 29, 2025 (later versions were released afterwards).
Headquarters Based in Hangzhou, Zhejiang, China.
Parent company Alibaba Group.
Founder Jack Ma founded Alibaba Group.
CEO Eddie Yongming Wu (Daniel Zhang served previously).
Type Large language model/chatbot service.
Country China
Industry Artificial Intelligence

Alibaba (BABA) Stock Analysis


(Source: investing.com)

  • Latest price: Alibaba (BABA) last traded at USD 165.40, with the most recent timestamp of Jan 17, 2026 (UTC).
  • 52-week range: Over the past 52 weeks, the stock has traded roughly between USD 84 (low) and USD 193 (high).
  • Analyst/market targets: Investing.com summarises the Street view as “Strong Buy”, with an average 12-month target of USD 194.99, a high target of USD 258, and a low target of USD 125.

Key Features of Qwen AI

  • Qwen AI works across multiple formats, including text (writing, summarising, translating, chat), images via Qwen-VL (image understanding, detailed captions, visual Q&A), and audio via Qwen-Audio (speech-to-text, including real-time voice chat in some variants).
  • Newer models support very long context up to 128k tokens, allowing long conversations, full-document reviews, and multi-step problem-solving.
  • Qwen-Coder can generate, debug, explain, and translate code, and assist with documentation.
  • Qwen-Math targets harder math tasks and benchmarks.
  • Qwen supports 119+ languages/dialects.
  • It scales from approximately 0.5 billion to 235 billion parameters, and MoE versions improve efficiency by activating only needed experts.
  • Many models are open source, enabling customisation, fine-tuning, and lower adoption barriers.

Qwen Models Released

Release Date Model/version Key numeric information
2025-01-26 Qwen2.5-VL (Vision-Language) Opened in 3 sizes: 3 billion, 7 billion, 72 billion; can understand videos over 1 hour.
2025-01-28 Qwen2.5-Max (MoE, API model) Pre-trained on 20+ trillion tokens.
2025-03-24 Qwen2.5-VL-32B-Instruct 32 billion parameters; optimised with reinforcement learning (RL).
2025-03-27 Qwen2.5-Omni-7B 7 billion parameters; supports multimodal input (text/image/audio/video) and real-time responses.
2025-04-29 Qwen3 (dense + MoE family) Dense: 0.6 billion, 1.7 billion, 4 billion, 8 billion, 14 billion, 32 billion; MoE: 30B-A3B (3 billion active) and 235B-A22B (22 billion active); context lengths 32K / 128K; trained on approximately 36 trillion tokens across 119 languages/dialects.
2025-07-22 Qwen3-Coder-480B-A35B-Instruct 480 billion total parameters, 35 billion active; 256K native context and 1 million with extrapolation; post-training system ran 20,000 environments in parallel.
2025-09-15 Qwen3-Next-80B-A3B (Instruct/Thinking) 80 billion total, 3 billion activated; pretraining 15 trillion tokens; context 262,144 native (up to 1,010,000); 512 experts (10 activated).
2025-09-22 Qwen3-Omni Supports 119 text languages, 19 speech input languages, 10 speech output languages; reports SOTA on 22/36 audio/video benchmarks.
2025-09-23 Qwen3-VL-235B-A22B (Instruct/Thinking) Released 235B-A22B variants; native 256K context (expandable to 1 million); OCR supports 32 languages (up from 10).
2025-09-24 Qwen3-Max Reported at >1 trillion parameters.
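To illustrate why the MoE variants in the table above are cheaper to run than their total parameter counts suggest, the share of parameters activated per token can be computed directly from the listed figures (the helper below is a hypothetical sketch, not Qwen tooling; figures are copied from the table):

```python
# Hypothetical sketch: fraction of parameters a Mixture-of-Experts (MoE)
# model activates per forward pass, using figures from the table above.
def active_fraction(total_b: float, active_b: float) -> float:
    """Return the share of total parameters active per token."""
    return active_b / total_b

# (total billions, active billions), as listed in the release table.
models = {
    "Qwen3-235B-A22B": (235, 22),
    "Qwen3-Coder-480B-A35B": (480, 35),
    "Qwen3-Next-80B-A3B": (80, 3),
}

for name, (total, active) in models.items():
    frac = active_fraction(total, active)
    print(f"{name}: {active}B of {total}B active ({frac:.1%})")
```

Qwen3-Next, for instance, activates only 3 of its 80 billion parameters per token, i.e. under 4%, which is why such models can match far denser rivals at a fraction of the inference cost.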

Qwen AI Users Statistics By Country


(Source: grabon.in)

  • The top 5 countries by share of Qwen AI users in 2025 are Iraq (27.52%), Brazil (19.08%), Turkey (12.10%), Russia (10.60%), and the United States (6.15%).

Qwen App Statistics


(Source: awisee.com)

  • As of December 2025, the Qwen app reached 18.34 million monthly active users in only 2 weeks.
  • It also posted a 149% month-over-month (MoM) increase in users.
  • Globally, the app ranked #24 among AI apps.
  • As of 2025, on the consumer side, the Qwen app reports 30 million monthly active users (reached within 2 weeks of launch).
  • On the enterprise side, Alibaba Cloud and DingTalk report 90,000+ enterprise deployments.
  • Qwen app users primarily come from mobile web (65.95%), with desktop accounting for 34.05%.
  • Over 2.2 million Alibaba Qwen AI users reach the service through DingTalk.

Qwen.ai Website Traffic Statistics

  • According to Similarweb, qwenlm.ai ranks #172,593 globally, up 16,090 positions.
  • In Russia, its country rank is #19,673, up by 1,402 positions.
  • The site’s bounce rate is 47.21%, an increase of 8.14% from the previous month.
  • Users view approximately 3.37 pages per visit.
  • Over the last three months, the global rank improved from 188,683 to 172,593.

By Total Visits

qwen.ai Web Traffic

(Reference: similarweb.com)

  • In November and December 2025, qwen.ai recorded 32.8 million total visits in each month.
  • Moreover, October’s visit count was the highest at 33.5 million.

By Average Visits Duration

Avg. visit duration in Qwen.ai's top domains

(Source: similarweb.com)

  • As of December 2025, the average visit duration was 4 minutes and 52 seconds.
  • In October and November, it was 4 minutes 47 seconds and 5 minutes 5 seconds, respectively.

By Country

qwen.ai Web Traffic By Country

(Reference: similarweb.com)

  • qwenlm.ai’s top traffic sources are Russia (29.96%, +0.14%), the United States (12.71%, +155.3%), Norway (10.96%), Argentina (5.91%, -45.58%), and Thailand (5.02%, +83.49%).
  • All other countries account for 35.45% of total traffic.

By Demographics

qwenlm.ai Website Traffic Demographics

(Reference: similarweb.com)

  • In December 2025, qwenlm.ai’s audience was slightly more female than male, with 51.33% women and 48.67% men.
  • The largest visitor group is ages 25-34, accounting for 35.92% of traffic.
  • Other age groups are 18-24 (17.75%), 35-44 (23.62%), 45-54 (13.95%), 55-64 (6.72%), and 65+ (2.05%).

Qwen AI Models Pricing Statistics


(Source: godofprompt.ai)

  • Qwen-Max supports up to 32,768 tokens and costs USD 0.0016 per 1,000 input tokens and USD 0.0064 per 1,000 output tokens.
  • Qwen-Plus handles up to 131,072 tokens, priced at USD 0.0004 per 1,000 input tokens and USD 0.0012 per 1,000 output tokens.
  • For maximum efficiency, Qwen-Turbo allows up to 1,000,000 tokens and is priced at USD 0.00005 per 1,000 input tokens and USD 0.0002 per 1,000 output tokens.
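These per-1,000-token rates make cost comparisons easy to script. The sketch below (hypothetical helper names; rates copied from the bullets above) estimates the cost of a single request on each tier:

```python
# Hypothetical cost estimator using the per-1,000-token rates listed above.
# Rates are USD per 1,000 tokens: (input rate, output rate).
RATES = {
    "qwen-max":   (0.0016, 0.0064),
    "qwen-plus":  (0.0004, 0.0012),
    "qwen-turbo": (0.00005, 0.0002),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for one request on the given pricing tier."""
    in_rate, out_rate = RATES[model]
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# Example: a 10,000-token prompt with a 2,000-token reply.
for model in RATES:
    print(f"{model}: ${estimate_cost(model, 10_000, 2_000):.4f}")
```

At these rates, the same request costs roughly 32 times more on Qwen-Max than on Qwen-Turbo, which is why the tiers are typically matched to task difficulty rather than defaulting to the largest model.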

Qwen AI 2.5 Adoption Statistics

  • According to Grabon.in, Qwen’s AI services have reached more than 2.2 million corporate users via DingTalk, Alibaba’s workplace collaboration platform.
  • By 2025, more than 90,000 enterprises use the Qwen model family.
  • Qwen AI 2.5 supports 29+ languages, and the open-source Qwen lineup spans 0.5 billion to 110 billion parameters.
  • Qwen2.5 Math (72 billion) achieves 84% accuracy on math problem-solving.
  • The newest Qwen 2.5 models are pre-trained on a refreshed dataset of up to 18 trillion tokens.
  • Qwen2.5-Coder is trained on 5.5 trillion tokens and covers 92+ programming languages.

By Models Specialisation

  • Qwen 2.5-VL: A vision-language model that takes images, text, and bounding boxes; it can read Chinese and English text in images, compare visuals, write stories, solve math, and answer questions.
  • Qwen 2.5-Audio: An audio-text model that accepts many audio types (speech, music, natural sounds) and produces text outputs.
  • Qwen 2.5-Max: A Mixture-of-Experts (MoE) model trained on more than 20 trillion tokens and refined with curated SFT; it reports better benchmark results than DeepSeek V3 and Llama 3.1 on Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond.
  • Qwen 2.5-Coder: Supports 128K context and 92 programming languages; strong in code generation, completion, multi-language coding, and repair.
  • Qwen 2.5-Math: Pre-trained and fine-tuned on synthetic data; supports English/Chinese and performs well in CoT, PoT, and tool-based reasoning, beating most 70 billion math models.

Score Comparison Between Qwen2.5-Max vs Qwen2.5-72B

Benchmark Qwen2.5-Max Qwen2.5-72B
MMLU 87.9 86.1
MMLU-Pro 69.0 58.1
BBH 89.3 86.3
C-Eval 92.2 90.7
CMMLU 91.9 89.9
HumanEval 73.2 64.6
MBPP 80.6 72.6
CRUX-I 70.1 60.9
CRUX-O 79.1 66.6
GSM8K 94.5 91.5
MATH 68.5 62.1
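As a quick sanity check on the comparison table above, the average point gain of Qwen2.5-Max over Qwen2.5-72B across these 11 benchmarks can be computed directly (scores copied from the table):

```python
# Scores from the comparison table above: (Qwen2.5-Max, Qwen2.5-72B).
scores = {
    "MMLU": (87.9, 86.1), "MMLU-Pro": (69.0, 58.1), "BBH": (89.3, 86.3),
    "C-Eval": (92.2, 90.7), "CMMLU": (91.9, 89.9), "HumanEval": (73.2, 64.6),
    "MBPP": (80.6, 72.6), "CRUX-I": (70.1, 60.9), "CRUX-O": (79.1, 66.6),
    "GSM8K": (94.5, 91.5), "MATH": (68.5, 62.1),
}

# Per-benchmark gain of Max over 72B, rounded to one decimal place.
deltas = {name: round(max_s - b72_s, 1) for name, (max_s, b72_s) in scores.items()}
avg_gain = sum(deltas.values()) / len(deltas)

print(f"Average gain: {avg_gain:.2f} points")
print(f"Largest gap: {max(deltas, key=deltas.get)} (+{max(deltas.values())})")
```

Qwen2.5-Max leads on every benchmark listed, by about 6 points on average, with the widest margins on the coding-reasoning tests (CRUX-O and MMLU-Pro).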

Qwen Privacy Analyses

  • Qwen Code (terminal agent): may log anonymised usage data to improve the product, like tool usage (tool name, success/fail, duration), API metadata (model, request time, success/fail), and session settings (enabled tools, approval mode). It says it doesn’t record PII, prompts/responses, or file contents.
  • Qwen Chat (Android): Google Play “Data safety” claims no third-party sharing; the app collects personal info and device/other IDs; data is encrypted in transit; deletion requests are supported.
  • qwen.org: stores browser, language, referrer, and request timestamps, and may publish aggregated visitor trends.
  • Alibaba / Alibaba Cloud sites: Alibaba logs IP plus browser/language and visit time for traffic metrics and security; Alibaba Cloud also cites retention for research/stat analysis alongside service and legal needs.

Advantages And Disadvantages Of Qwen AI

Advantages Disadvantages
Strong multimodality: text, vision (Qwen-VL), and audio (Qwen-Audio). Performance varies by model size/version; smaller models may lag top-tier rivals.
Long context (up to 128k tokens) helps long chats and document analysis. High-compute models can be expensive to run and tune.
Good coding (Qwen-Coder) and math (Qwen-Math) support. Deployment may require careful safety/guardrail setup for enterprise use.
Broad multilingual coverage (119+ languages/dialects). Global availability can be affected by regional policies, hosting requirements, or compliance needs.
Scales from approximately 0.5 billion to 235 billion params; MoE improves efficiency. Open-source use adds responsibility for updates, security, and model governance.

Conclusion

It is clear from the article that Qwen is rapidly transitioning from a research project to a tool used in real-world work. Its benchmark results and language coverage are improving over time. More downloads, more fine-tuning, and more community support show that its ecosystem is growing. Recently, many users have chosen it for coding, Chinese-English tasks, and tool-based workflows.

Many new versions have been released recently, and the key considerations are how well it works, how much it costs per token, and how safe it is when used widely.

FAQ

Who builds Qwen?

The Qwen team at Alibaba Cloud develops and releases models, along with related tools and research reports.

Is Qwen open-source? What license is it under?

Major parts of Qwen are open source; the official repository is licensed under the Apache 2.0 license. Some specific Qwen2.5 variants have different licensing noted by the Qwen team.

What is Qwen2.5-Math (math model)? How strong is it?

Qwen2.5-Math is the math-specialised series (1.5B/7B/72B). In the Qwen team’s blog, Qwen2.5-Math-Instruct scores 87.8 on the MATH benchmark (72B) using Tool-Integrated Reasoning (TIR).

How can I access Qwen models?

Access Qwen via Alibaba Cloud Model Studio APIs, or download open-source weights from Hugging Face/GitHub for local inference, fine-tuning, and deployment in your stack today.
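For illustration, a request to a hosted Qwen model is typically shaped like a standard chat-completion call, since Alibaba Cloud Model Studio is reported to offer an OpenAI-compatible endpoint. The sketch below only builds the JSON request body, without sending it; the model names and message format are assumptions to verify against Alibaba Cloud's current documentation:

```python
import json

# Sketch: JSON body for a single-turn chat request to an OpenAI-compatible
# endpoint, as Alibaba Cloud Model Studio is reported to provide. Verify the
# current endpoint URL and model names in the official docs before use.
def build_chat_request(model: str, user_message: str) -> str:
    """Return the JSON body for a single-turn chat completion request."""
    body = {
        "model": model,  # e.g. "qwen-plus", "qwen-max", "qwen-turbo" (assumed)
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(body)

payload = build_chat_request("qwen-plus", "Summarise Qwen in one sentence.")
print(payload)
```

The same message structure works unchanged for locally hosted open-source weights served behind an OpenAI-compatible server, which is one reason the open releases are easy to swap into existing stacks.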

Is there a Qwen consumer app?

Yes, Reuters reports Alibaba launched and upgraded a Qwen AI app, positioning it as a consumer assistant and adding task-style features (with frequent updates reported through 2025-2026).

Maitrayee Dey

Maitrayee Dey has a background in Electrical Engineering and has worked in various technical roles before transitioning to writing. Specializing in technology and Artificial Intelligence, she has served as an Academic Research Analyst and Freelance Writer, particularly focusing on education and healthcare in Australia. Maitrayee's lifelong passions for writing and painting led her to pursue a full-time writing career. She is also the creator of a cooking YouTube channel, where she shares her culinary adventures. At Smartphone Thoughts, Maitrayee brings her expertise in technology to provide in-depth smartphone reviews and app-related statistics, making complex topics easy to understand for all readers.
