Open-Source AI Model

MiniMax M2.7

Developed by MiniMax

Local AI Deployment Experts · 24+ Years IT Infrastructure · GPU Hardware In Stock

Key Capabilities

  • SWE-bench Verified 78%, nearly matching Opus 4.6 at a fraction of the size
  • 100+ tokens/second — 3x faster than Opus
  • 97% skill adherence on 40+ complex tasks (2000+ tokens)
  • Native support for Claude Code, Cline, Cursor tool scaffolding
  • Self-hostable at only 10B activated parameters — the smallest Tier-1 model

VRAM Requirements by Quantization

Choose the right GPU based on your performance and quality needs.

Model / Quantization    VRAM Required
FP16                    20GB
Q4                      8GB
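The figures above follow a standard back-of-envelope rule: weight memory equals parameter count times bytes per weight. A minimal sketch using the 10B parameter count from this page (the note about runtime headroom is an assumption, not a published breakdown):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Memory for model weights alone: parameters x bytes per weight.

    1 billion parameters at 8 bits is 1 GB, so this reduces to
    params_billion * bits_per_weight / 8.
    """
    return params_billion * bits_per_weight / 8

# 10B parameters, per this page
print(weight_memory_gb(10, 16))  # FP16 weights -> 20.0 GB, matching the table
print(weight_memory_gb(10, 4))   # Q4 weights -> 5.0 GB; the table's 8GB figure
                                 # presumably adds KV-cache and runtime headroom
```

The gap between raw weight size and the table's Q4 number is why minimum-VRAM figures are always larger than the quantized checkpoint itself.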

Use Cases

MiniMax M2.7 (10B activated parameters — the smallest Tier-1 model) can be deployed for enterprise AI applications including document processing, code generation, data analysis, and conversational AI. License: MiniMax Open Model License (permissive; commercial use allowed).
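Self-hosted serving stacks commonly expose an OpenAI-compatible chat-completions endpoint, so applications can talk to a local deployment with a standard request body. A hedged sketch of such a request (the model identifier and system prompt are placeholders, not values from this page):

```python
import json

def build_chat_request(prompt: str, model: str = "minimax-m2.7") -> dict:
    """Build an OpenAI-style chat-completions payload for a local server.

    The model name is a placeholder: use whatever identifier your
    serving stack registers for the deployed checkpoint.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,   # low temperature suits code generation
        "max_tokens": 1024,
    }

payload = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

POST this JSON to your server's chat-completions route (for vLLM the default is `/v1/chat/completions` on port 8000); no code changes are needed to swap between hosted and self-hosted backends.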

Run MiniMax M2.7 with Petronella

PTG deploys MiniMax M2.7 for organizations needing Tier-1 AI coding and agentic capabilities at a fraction of the cost. At only 10B parameters, it self-hosts on a single GPU while matching models 50x its size on software engineering benchmarks — ideal for air-gapped development environments.

Recommended Hardware

Quantization    Recommended GPU
FP16            RTX PRO 4000 (24GB)
Q4              RTX 5080 (16GB) or any GPU with 8GB+ VRAM
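The hardware choice reduces to picking the largest quantization that fits the card's VRAM. A minimal sketch using the thresholds from the VRAM table above (the function name and cutoffs-as-code are illustrative):

```python
from typing import Optional

def pick_quantization(vram_gb: float) -> Optional[str]:
    """Pick the largest quantization that fits, per the VRAM table above."""
    if vram_gb >= 20:
        return "FP16"   # full-precision weights need ~20GB
    if vram_gb >= 8:
        return "Q4"     # 4-bit quantization fits in 8GB
    return None         # below the model's minimum footprint

print(pick_quantization(24))  # 24GB card -> FP16
print(pick_quantization(16))  # 16GB card -> Q4
```

Note that a 16GB card like the RTX 5080 lands in the Q4 tier: it clears the 8GB floor but falls short of the 20GB that FP16 requires.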

Deploy MiniMax M2.7 On-Premises

Our team builds GPU-accelerated systems configured and optimized for MiniMax M2.7. Private, secure, and fully under your control.