As AI consultants based in Rosenheim, Germany, we recommend Qwen models for enterprises looking for an open-source alternative with Apache 2.0 license. With the Qwen 3.6 generation (April 2026), Alibaba focuses on agentic real-world workflows – from efficient MoE models to proprietary flagships. For GDPR-compliant usage in the DACH region (Germany, Austria, Switzerland), we recommend self-hosting or AWS Bedrock in Frankfurt.
Qwen 3.6: Agentic Real-World AI (April 2026)
In April 2026, Alibaba released the Qwen 3.6 generation – focused on real-world agent workflows:
Qwen3.6-35B-A3B – Our New Top Pick
The most efficient model in the family uses a MoE architecture (35B total parameters, only 3B active per token) and outperforms Google Gemma 4 in benchmarks. The Apache 2.0 license enables unrestricted self-hosting.
Qwen3.6-27B – Dense Alternative
With 27 billion parameters, this dense model delivers strong performance for applications that don’t support MoE architectures.
Qwen3.6-Plus and Max-Preview
The proprietary models are only available via API and reflect Alibaba’s strategic shift toward commercial offerings. Qwen3.6-Plus focuses on real-world agents – autonomous AI that executes real tasks like app control and document editing.
Note: With Qwen 3.6, Alibaba introduces proprietary models for the first time that are not available as open source. For self-hosting, we recommend Qwen3.6-35B-A3B (Apache 2.0).
Qwen 3.5: The Next Generation
Native Multimodality
Qwen 3.5 unifies text, image, and video in one architecture:
- Video Analysis: Understands up to 2 hours of video in a single prompt
- Timestamp-Precise: Identifies events at second-level resolution
- Long Context: Up to 1 million tokens (entire books, large codebases)
- Flexible Input: URLs, local files, frame sequences
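As a rough illustration of how such multimodal inputs are passed to the model, the sketch below builds an OpenAI-style content-list message mixing a video URL with a text question. The exact field names can differ between serving stacks (DashScope, vLLM, etc.), so treat the structure as an assumption rather than a fixed API:

```python
# Hypothetical multimodal chat message: one video plus one text question.
# Field names ("video_url", "text") follow the OpenAI-style content-list
# convention; verify against your serving stack's documentation.
message = {
    "role": "user",
    "content": [
        {"type": "video_url", "video_url": {"url": "https://example.com/meeting.mp4"}},
        {"type": "text", "text": "At which timestamp is the budget discussed?"},
    ],
}

print(len(message["content"]))  # 2
```

Timestamp-precise answers then come back as ordinary assistant text referencing positions in the video.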
Agentic AI & Automation
Qwen 3.5 can execute autonomous workflows:
- App interaction on smartphones
- Document editing and email management
- Travel booking and process automation
- Multi-step tasks with tool use
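Multi-step tool use typically runs over an OpenAI-compatible function-calling interface when Qwen is self-hosted (e.g. behind vLLM). The sketch below shows what such a request could look like; the tool name, its parameters, and the model identifier are illustrative assumptions, not part of any official API:

```python
# Hypothetical tool definition in the OpenAI-compatible function-calling
# format. The function "book_train_ticket" is purely illustrative.
book_travel_tool = {
    "type": "function",
    "function": {
        "name": "book_train_ticket",
        "description": "Book a train ticket between two cities.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string", "description": "ISO date, e.g. 2026-04-01"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}

# Request body for a /v1/chat/completions call against a self-hosted endpoint.
request_body = {
    "model": "Qwen/Qwen3.6-35B-A3B",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Book me a train from Munich to Rosenheim on 2026-04-01."}
    ],
    "tools": [book_travel_tool],
    "tool_choice": "auto",
}

print(request_body["tools"][0]["function"]["name"])  # book_train_ticket
```

The model responds with a structured tool call; your application executes it and feeds the result back, which is the loop behind "autonomous AI that executes real tasks".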
Efficiency through Mixture-of-Experts
The Qwen3.5-397B-A17B model uses:
- 397 billion parameters total
- Only 17 billion active per inference
- 60% lower costs than predecessors
- 8-19x higher throughput than Qwen3
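The efficiency gain follows directly from the numbers above: per-token inference compute scales roughly with the *active* parameter count, not the total. A quick back-of-envelope check:

```python
total_params = 397e9   # Qwen3.5-397B-A17B: total parameters
active_params = 17e9   # parameters activated per token

# Forward-pass FLOPs scale roughly with the active parameters,
# so the MoE model does only a small fraction of the work a
# dense 397B model would per token.
active_fraction = active_params / total_params
print(f"active fraction: {active_fraction:.1%}")  # active fraction: 4.3%
```

Only about 4% of the weights participate in each forward pass, which is where the cost and throughput advantages come from.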
New: Qwen 3.5 Small Model Series (March 2026)
In March 2026, Alibaba released a new series of compact models for edge and mobile applications:
| Model | Parameters | Use Case |
|---|---|---|
| Qwen3.5-9B | 9B | Edge servers, rivals 30B+ models |
| Qwen3.5-4B | 4B | UI navigation, document analysis |
| Qwen3.5-2B | 2B | Mobile devices |
| Qwen3.5-0.8B | 0.8B | IoT, smartphones |
All small models are natively multimodal and agent-capable. They are particularly suited for on-device AI where privacy is ensured through local processing.
Qwen3.5-Max-Preview
With Qwen3.5-Max-Preview, Alibaba leads the Chinese AI rankings on LM Arena and achieves 5th place globally in math reasoning.
Key Strengths
Open Source & Apache 2.0
- Full Control: Model runs in your infrastructure
- No License Costs: Commercial use permitted
- Customizable: Fine-tuning on your own data possible
- GDPR-Friendly: No data leaves your company
Expanded Multilingual Support
Qwen 3.5 supports 200+ languages and dialects:
- Chinese (outstanding)
- European languages (very good)
- Expanded coverage: South Asia, Africa, Oceania
- Competitive with Western models
Text-in-Image Generation
Qwen-Image is leading in:
- Complex text layouts
- Multilingual text rendering
- Paragraph-level semantics
- Fine detail work
Availability
AWS Bedrock (EU)
Qwen3 models are now available on AWS Bedrock in Frankfurt:
- Fully managed and serverless
- EU data residency (GDPR-compliant)
- Integration with AWS services
- Qwen3-32B, Qwen3-235B, Qwen3-Coder available
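A call via the Bedrock Converse API from the Frankfurt region could look like the sketch below. We only construct the request here; the model ID is an assumption, so check the Bedrock console for the exact identifier enabled in your account:

```python
import json

# Assumed Bedrock model identifier for Qwen3-32B -- verify in the
# Bedrock console (eu-central-1 / Frankfurt) before use.
MODEL_ID = "qwen.qwen3-32b-v1:0"

# Request in the shape expected by the Bedrock Converse API.
request = {
    "modelId": MODEL_ID,
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this contract clause for me."}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
}

# With boto3 and AWS credentials configured, this would be sent as:
#   client = boto3.client("bedrock-runtime", region_name="eu-central-1")
#   response = client.converse(**request)
print(json.dumps(request)[:40])
```

Because the endpoint runs in eu-central-1, request data stays within the EU region, which is the basis of the GDPR argument above.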
Self-Hosting
All open-weight Qwen models can be operated in your own infrastructure – this way, all data remains under your control.
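In practice, self-hosted Qwen is usually served behind an OpenAI-compatible HTTP endpoint (for example via vLLM's `vllm serve Qwen/Qwen3-32B`). The sketch below only constructs such a request with the standard library; sending it assumes a server is actually running on `localhost:8000`:

```python
import json
import urllib.request

# Chat-completions request against an assumed local OpenAI-compatible
# endpoint (e.g. vLLM). We build the request but do not send it here.
body = {
    "model": "Qwen/Qwen3-32B",
    "messages": [{"role": "user", "content": "Summarize this document."}],
    "max_tokens": 256,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending would be: urllib.request.urlopen(req) -- requires a running server.
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Nothing in this path leaves your network, which is the core of the data-control argument.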
Benchmarks & Performance
| Benchmark | Score | Model |
|---|---|---|
| MMLU | 90.6% | Qwen3-235B VL |
| HumanEval | 93% | Qwen3-32B |
| GSM8K | 79.3% | Qwen3-32B |
| C-Eval (CN) | 85.6% | Qwen3-32B |
Qwen 3.5 competes with GPT-4-class models and clearly outperforms other open-source alternatives.
Hardware Requirements (Self-Hosted)
| Model | VRAM | Recommended GPU |
|---|---|---|
| Qwen3.5-397B-A17B | 80+ GB | H100/MI300X |
| Qwen3-235B-A22B | 48+ GB | A100/H100 |
| Qwen-Image 20B | 48+ GB | A100/H100 |
| Qwen3 (smaller variants) | 16-24 GB | RTX 4090 |
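The VRAM figures above can be sanity-checked with a simple rule of thumb: weight memory is roughly parameter count times bytes per parameter, and KV cache plus runtime overhead add more on top. A minimal sketch of that estimate:

```python
def estimate_weight_vram_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Rough VRAM needed for the model weights alone.

    Excludes KV cache, activations, and framework overhead, which can
    add roughly 20-50% in practice.
    bytes_per_param: 2.0 for FP16/BF16, 1.0 for INT8, 0.5 for 4-bit.
    """
    return params_billion * bytes_per_param

# A dense 32B model in FP16 needs ~64 GB for weights alone,
# while a 4-bit quantized version fits in ~16 GB.
print(estimate_weight_vram_gb(32))       # 64.0
print(estimate_weight_vram_gb(32, 0.5))  # 16.0
```

This is also why the table pairs large FP16/BF16 models with H100-class cards, while quantized smaller variants fit on an RTX 4090.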
Integration with CompanyGPT
Qwen models can be integrated into CompanyGPT as a self-hosted option or via AWS Bedrock – with full GDPR compliance.
Our Recommendation
Qwen 3.6 sets new standards for agentic AI with real-world workflows. For DACH enterprises, we recommend:
- Qwen3.6-35B-A3B: New top pick – extremely efficient (3B active), Apache 2.0, outperforms Gemma 4
- AWS Bedrock Frankfurt: For managed solution with EU data residency
- Self-Hosting: For maximum data control and customizability
- Qwen3.5-397B-A17B: For highest multimodality requirements
