InternLM 2
Developed by Shanghai AI Laboratory
Key Capabilities
- 200K-token extended context window
- Strong bilingual performance in Chinese and English
- Excellent math and reasoning (InternLM2-Math)
- Tool use and agentic capabilities (InternLM2-Agent)
- Apache 2.0 license for full commercial freedom
VRAM Requirements by Quantization
Choose the right GPU for your performance and quality needs. The figures below cover model weights only; allow extra headroom for the KV cache and activations.
| Model / Quantization | VRAM Required |
|---|---|
| 7B FP16 | 14 GB |
| 20B FP16 | 40 GB |
| 20B Q4 | ~12 GB |
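The figures above follow from a simple rule of thumb: parameter count times bits per weight, divided by 8, gives the weight footprint in GB. A minimal sketch, assuming roughly 4.5 bits per weight for Q4-style quantization and a configurable overhead factor for KV cache and activations (both values are assumptions, not InternLM specifics):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes times an overhead factor.

    The default 1.2 overhead (~20% headroom for KV cache and
    activations) is an assumption; tune it for your context length.
    """
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 ~ GB
    return round(weight_gb * overhead, 1)

# Weights only (overhead=1.0), matching the table:
print(estimate_vram_gb(7, 16, overhead=1.0))    # 7B FP16  -> 14.0
print(estimate_vram_gb(20, 16, overhead=1.0))   # 20B FP16 -> 40.0
print(estimate_vram_gb(20, 4.5, overhead=1.0))  # 20B Q4   -> ~11.2
```

With the default 20% headroom, the 20B Q4 estimate lands around 13.5 GB, which is why a 16 GB card is a comfortable fit for quantized 20B inference at moderate context lengths.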
Use Cases
InternLM 2 (1.8B, 7B, 20B) can be deployed for enterprise AI applications including document processing, code generation, data analysis, and conversational AI. License: Apache 2.0.
Run InternLM 2 with Petronella
PTG deploys InternLM 2 for organizations that need Chinese-English AI with a 200K-token context window. Its Apache 2.0 license and strong agentic capabilities make it well suited to automated business workflows.
Recommended Hardware
| Model Size | Recommended GPU |
|---|---|
| 7B | RTX 5080 (16GB) |
| 20B | RTX 5090 (32GB) or RTX PRO 5000 (48GB) |
Deploy InternLM 2 On-Premises
Our team builds GPU-accelerated systems configured and optimized for InternLM 2. Private, secure, and fully under your control.