Alibaba launches Qwen 3.5 to push agentic AI for developers and enterprises
Alibaba unveiled Qwen 3.5, a model family aimed at agentic, multimodal tasks, promising higher efficiency and broader tool use for developers and enterprises.

Alibaba announced on Feb. 16, 2026, the release of Qwen 3.5, a new generation of its Qwen model family that the company says is "built for the agentic AI era," designed to handle multi-step tasks and operate more independently across applications. "Built for the agentic AI era, Qwen3.5 is designed to help developers and enterprises move faster and do more with the same compute, setting a new benchmark for capability per unit of inference cost," the company said in its launch materials.
The company positioned Qwen 3.5 as a toolbox for developers and enterprises focused on agentic coding, browser interaction, tool use and multimodal workflows. Alibaba highlighted a developer-oriented command-line interface called Qwen Code that it is open-sourcing to let engineers delegate tasks to the model with natural language. The firm also described post-training enhancements using long-horizon reinforcement learning, which it said improves the model's performance in extended, multi-step interactions with external environments.
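Delegating a task in natural language, as described for Qwen Code, generally reduces to a chat-style API request in which the model is given the task and a set of callable tools. The sketch below builds such a payload in the OpenAI-compatible format that Alibaba Cloud's services support; the endpoint URL, model name and `run_shell` tool schema are illustrative assumptions, not taken from Alibaba's materials.

```python
import json

# Hypothetical identifiers for illustration only; check Alibaba Cloud
# Model Studio for the actual endpoint and model names.
API_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions"  # assumption
MODEL = "qwen3.5-plus"  # assumption

def build_chat_request(task: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completions payload that delegates a
    natural-language task to the model, exposing one example tool the
    model may call (the tool schema here is illustrative)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": task},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "run_shell",  # hypothetical tool
                    "description": "Run a shell command in the workspace.",
                    "parameters": {
                        "type": "object",
                        "properties": {"command": {"type": "string"}},
                        "required": ["command"],
                    },
                },
            }
        ],
    }

payload = build_chat_request("Add unit tests for utils.py")
print(json.dumps(payload, indent=2))
```

An agent loop built on this shape would inspect the model's reply for tool calls, execute them, and feed the results back as further messages.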
Technical specifications vary across the Qwen 3.5 variants. Alibaba's product pages list a Qwen3-Coder-480B-A35B-Instruct model with 480 billion total parameters, an activation pattern that engages roughly 35 billion parameters per token, and a default context window of 256,000 tokens extendable to 1 million. The company also open-sourced a Qwen3-235B model and a machine-translation variant named Qwen-MT. A listing for a Qwen-3.5 Plus variant, marked as available on Alibaba Cloud's Model Studio, described performance "on par with state-of-the-art leading models" and a 1 million token context window, while a separate Model Studio entry labeled Qwen-3.5-Open-Source carried a 397 billion parameter count and a 256,000-token window. Alibaba's product page adds that Qwen3-Coder "achieves competitive results against leading state-of-the-art (SOTA) models across key benchmarks" in agentic coding and tool usage.
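For scale, the coder model's figures imply that only a small slice of the network is engaged on any one token, a pattern typical of sparse mixture-of-experts designs. A quick back-of-envelope check, using only the numbers quoted above:

```python
# Figures from the spec sheet cited above (Qwen3-Coder-480B-A35B-Instruct).
TOTAL_PARAMS = 480e9      # total parameters
ACTIVE_PARAMS = 35e9      # parameters activated per token (the "A35B")
DEFAULT_CTX = 256_000     # default context window, tokens
MAX_CTX = 1_000_000       # extended context window, tokens

# Fraction of the network engaged per token
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"~{active_fraction:.1%} of parameters active per token")  # ~7.3%

# How much larger the extended window is than the default
print(f"extended context is {MAX_CTX / DEFAULT_CTX:.2f}x the default")  # 3.91x
```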
Alibaba published benchmarks claiming Qwen 3.5 outperforms its predecessor and several named U.S. models, including GPT-5.2, Claude Opus 4.5 and Gemini 3 Pro. The company also touted efficiency gains, saying the new models can be substantially cheaper to run and better at processing large workloads than earlier versions; one summary of the announcement cited Alibaba's claim that Qwen 3.5 is "60 percent cheaper" and "eight times more capable at processing large workloads" than the previous release.

The launch arrives amid a wave of Chinese model releases in mid-February. Competing domestic offerings include a new release from ByteDance, which industry reporting places at nearly 200 million users, and products from other fast-moving startups. Alibaba has sought to accelerate adoption through promotions earlier this month: a coupon campaign produced a sevenfold increase in active users for the Qwen chatbot, though some users reported technical glitches during the spike.
Alibaba framed the rollout as part of a broader open-source and developer strategy, saying the Qwen family has produced more than 140,000 derivative models and about 300 named variants worldwide. The company stressed that the different 3.5-series models have distinct parameter counts and context windows, and it presented benchmark comparisons tied to particular variants rather than a single universal specification.