
VMware to Proxmox Migration: Complete Step-by-Step Guide (2026)

Posted: March 5, 2026 to Technology.

Migrating from VMware to Proxmox VE involves four core steps: export the virtual machines from VMware vSphere or ESXi as OVA/OVF files, convert the virtual disks from VMDK to QCOW2 or raw using qemu-img convert, import the converted disks into Proxmox, and reconfigure network and storage settings to match the new environment. When planned correctly, the entire process can be completed with zero data loss and minimal downtime. Organizations are making this switch primarily because Broadcom's acquisition of VMware in late 2023 led to the elimination of perpetual licenses, mandatory subscription bundling, and price increases of 200 to 1,200 percent for many customers.

I have personally overseen VMware-to-Proxmox migrations for dozens of businesses ranging from 5-server environments to 200-plus VM deployments. This guide walks through the complete process based on real-world experience, including the planning decisions, technical steps, compatibility pitfalls, and performance tuning that make the difference between a smooth migration and a painful one.

Why Organizations Are Leaving VMware in 2026

Broadcom completed its $61 billion acquisition of VMware in November 2023 and immediately restructured the product line in ways that impacted virtually every VMware customer:

Perpetual licenses eliminated. Broadcom discontinued all perpetual VMware licenses, forcing every customer onto subscription pricing. Organizations that had invested heavily in perpetual vSphere licenses found that their ongoing costs jumped dramatically at renewal time.

Product bundling. Individual VMware products like vSphere, vSAN, and NSX were consolidated into two bundles: VMware Cloud Foundation (VCF) and VMware vSphere Foundation (VVF). Customers who only needed vSphere were now required to purchase bundled products they did not want or need.

Price increases of 200 to 1,200 percent. The combination of subscription-only pricing and mandatory bundling resulted in cost increases that ranged from double to twelve times previous costs. Mid-market companies with 3 to 10 hosts were hit particularly hard because per-CPU licensing changed to per-core licensing.

Partner ecosystem disruption. Broadcom terminated thousands of VMware partner agreements and restructured the channel program. Many managed service providers and VARs that built their businesses around VMware lost access to competitive pricing and support resources.

Licensing uncertainty. Ongoing changes to licensing terms, audit practices, and renewal processes created an environment of uncertainty. Many IT leaders decided that the business risk of continued VMware dependency outweighed the cost and effort of migration.

These factors have driven what is arguably the largest virtualization platform migration in enterprise IT history. Proxmox VE has been the primary beneficiary, with downloads and subscription purchases increasing substantially throughout 2024 and 2025.

What Is Proxmox VE

Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization platform built on Debian Linux. It combines KVM (Kernel-based Virtual Machine) hypervisor for full virtualization with LXC (Linux Containers) for lightweight containerization, all managed through a web-based interface. Proxmox has been in active development since 2008 and is backed by Proxmox Server Solutions GmbH, an Austrian company.

Key capabilities of Proxmox VE include:

- KVM-based full virtualization supporting Windows, Linux, BSD, and other operating systems
- LXC containers for lightweight Linux workloads
- A web-based management interface accessible from any browser
- Built-in clustering with up to 32 nodes
- Live migration of running VMs between cluster nodes
- Built-in backup via Proxmox Backup Server integration
- Software-defined storage with Ceph integration
- ZFS support for advanced storage features
- Firewall management integrated into the platform
- A REST API for automation
- PCIe and GPU passthrough

Proxmox VE is free to download and use without any license restrictions on features. Paid subscription plans ranging from approximately $110 to $510 per socket per year provide access to the enterprise repository (stable, tested updates), technical support, and the Proxmox customer portal.

Proxmox vs. VMware Feature Comparison

Feature | VMware vSphere/vCenter | Proxmox VE
Hypervisor | ESXi (proprietary) | KVM (open source)
Containers | No native support | LXC containers included
Web management | vSphere Client | Built-in web UI
Clustering | vCenter required (additional cost) | Built-in, no additional cost
Live migration | vMotion (Enterprise Plus) | Included in base
High availability | vSphere HA (Enterprise) | Built-in HA
Software-defined storage | vSAN (separate license) | Ceph and ZFS included
Backup | Requires Veeam or similar | Proxmox Backup Server (free)
GPU passthrough | Supported | Supported
API | REST/SOAP API | REST API
Cost (10 hosts) | $50,000-$200,000+/year | $0-$5,100/year
Support | Included in subscription | Community or paid subscription
Maximum cluster size | 96 hosts per cluster | 32 nodes per cluster
Distributed switching | VDS (Enterprise Plus) | Open vSwitch or Linux bridge

Prerequisites and Planning

A successful migration starts with thorough planning. Complete these steps before touching any production systems:

1. Inventory your VMware environment. Document every VM including operating system, CPU and memory allocation, disk sizes and types (thin vs. thick provisioned), network configuration (VLANs, static IPs, DNS), dependencies between VMs, and any VMware-specific features in use (DRS affinity rules, resource pools, distributed switches).

2. Verify hardware compatibility. Proxmox VE runs on standard x86-64 hardware. Check that your servers have Intel VT-x or AMD-V hardware virtualization support enabled in BIOS, sufficient storage capacity for the migration (you will temporarily need space for both source and converted disks), network interface cards compatible with Debian Linux, and any hardware RAID controllers supported by the Linux kernel.

3. Plan your storage architecture. Proxmox supports multiple storage backends. Decide between local storage (ZFS, LVM, ext4, XFS), shared storage (NFS, iSCSI, Ceph, GlusterFS), or a combination. If you currently use vSAN, the closest Proxmox equivalent is Ceph, which provides distributed, replicated storage across cluster nodes.

4. Plan your network architecture. Map your current VLAN configuration to Proxmox networking. Proxmox uses Linux bridges by default but also supports Open vSwitch for more advanced networking. Plan IP addressing, bonding, and VLAN tagging before you begin.

5. Establish a migration sequence. Migrate in order of risk: development and test VMs first, then non-critical production workloads, then critical production systems last. This approach builds experience and confidence before touching your most important systems.

6. Back up everything. Take full backups of all VMs before starting any migration work. Verify that backups are restorable. This is your safety net if anything goes wrong during conversion.

Step-by-Step Migration Process

The core migration process follows these steps for each virtual machine:

Step 1: Install Proxmox VE on the target host. Download the Proxmox VE ISO from proxmox.com, write it to a USB drive, and install it on your target server. The installation takes approximately 10 minutes and includes the base Debian operating system, KVM hypervisor, and web management interface. After installation, access the web UI at https://your-server-ip:8006.

Step 2: Export the VM from VMware. In the vSphere Client, right-click the VM and select Export OVF Template. Choose the OVA format for a single-file export or OVF for a folder with separate files. For VMs with large disks, the export can take considerable time. Alternatively, you can directly access the VMDK files on the ESXi datastore via SSH or SCP.

Step 3: Transfer the exported files to Proxmox. Use SCP, rsync, or a shared NFS mount to transfer the exported VM files to the Proxmox host. Place them in a temporary directory with sufficient free space.

Step 4: Extract the OVA (if applicable). An OVA file is a tar archive containing the OVF descriptor and VMDK disk files. Extract it with: tar xvf vm-export.ova

Step 5: Convert the disk format. Convert the VMDK disk to a format Proxmox can use. For QCOW2 (recommended for most cases): qemu-img convert -f vmdk -O qcow2 source-disk.vmdk target-disk.qcow2. For raw format (better performance, no snapshots): qemu-img convert -f vmdk -O raw source-disk.vmdk target-disk.raw. Conversion time depends on disk size but typically runs at 100 to 300 MB per second on modern hardware.
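As a concrete sketch of steps 4 and 5, the commands below string extraction and conversion together. The filenames are placeholders from the examples above, and each command is prefixed with echo so the sketch prints what it would do rather than running it; remove the echoes to execute for real.

```shell
set -eu
OVA="vm-export.ova"           # exported OVA from vSphere (placeholder name)
DISK="source-disk.vmdk"       # VMDK found inside the extracted OVA
OUT_FMT="qcow2"               # or "raw" for best performance, no snapshots
TARGET="${DISK%.vmdk}.$OUT_FMT"

echo tar xvf "$OVA"
# -p prints conversion progress, useful on large disks.
echo qemu-img convert -p -f vmdk -O "$OUT_FMT" "$DISK" "$TARGET"
echo qemu-img info "$TARGET"   # sanity-check the result before importing
```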

Step 6: Create a new VM in Proxmox. In the Proxmox web UI, click Create VM. Configure the VM ID, name, OS type, CPU, and memory to match the original VMware VM. For the hard disk, select "Do not use any media" since you will import the converted disk. Set the SCSI controller to VirtIO SCSI (best performance) or LSI Logic (better initial compatibility).

Step 7: Import the converted disk. Import the disk to the VM's storage using: qm importdisk VMID /path/to/target-disk.qcow2 local-lvm. Replace VMID with your VM's numeric ID and local-lvm with your target storage name. After import, go to the VM's Hardware tab in the web UI, find the unused disk, double-click it, and add it to the VM.
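Step 7 on the Proxmox host might look like this; the VM ID, storage name, and disk path are assumptions for illustration, and the echoes keep it a dry run:

```shell
set -eu
VMID=100                      # numeric ID of the VM created in Step 6
STORAGE="local-lvm"           # your target storage
DISK="/tmp/target-disk.qcow2" # converted disk from Step 5

echo qm importdisk "$VMID" "$DISK" "$STORAGE"
# Attach the imported volume as scsi0 on a VirtIO SCSI controller.
echo qm set "$VMID" --scsihw virtio-scsi-single --scsi0 "$STORAGE:vm-$VMID-disk-0"
```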

Step 8: Configure boot order. In the VM's Options tab, set the Boot Order to include the imported disk. Enable the SCSI disk and disable any other boot devices.

Step 9: Install VirtIO drivers (Windows VMs). If migrating a Windows VM, you will need to install VirtIO drivers for optimal performance. See the Windows VM section below for detailed instructions.

Step 10: Start and verify the VM. Start the VM and verify that it boots correctly, network connectivity works, all services start properly, and storage is intact and accessible.

Migrating Linux VMs

Linux VMs are the easiest to migrate because the Linux kernel includes VirtIO drivers natively. After converting and importing the disk, most Linux VMs will boot on Proxmox without any driver changes.

Post-migration steps for Linux:

1. Update network interface names. VMware guests typically use interface names like ens192 or eth0, while the same guest on Proxmox with VirtIO NICs usually comes up as ens18 or similar. Update /etc/network/interfaces or your NetworkManager configuration accordingly, and remove any VMware-specific interface persistence rules in /etc/udev/rules.d/.

2. Remove VMware Tools. Uninstall open-vm-tools or VMware Tools: apt remove open-vm-tools (Debian/Ubuntu) or yum remove open-vm-tools (RHEL/CentOS). These are unnecessary on Proxmox and can occasionally cause issues.

3. Install QEMU Guest Agent. Install the QEMU guest agent for better integration with Proxmox: apt install qemu-guest-agent (Debian/Ubuntu) or yum install qemu-guest-agent (RHEL/CentOS). Enable and start the service. In the Proxmox VM options, enable the QEMU Agent checkbox.

4. Regenerate SSH host keys if the VM was cloned: rm /etc/ssh/ssh_host_* && dpkg-reconfigure openssh-server

5. Verify that all services are running correctly and that application-level functionality works as expected.
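For a Debian or Ubuntu guest, steps 1 through 4 might be scripted as follows; RHEL-family guests would substitute yum or dnf, and the echoes keep this a dry run (remove them to apply):

```shell
set -eu
AGENT="qemu-guest-agent"

# Swap VMware Tools for the QEMU guest agent.
echo apt-get remove -y open-vm-tools
echo apt-get install -y "$AGENT"
echo systemctl enable --now "$AGENT"
# Drop stale NIC persistence rules left over from the VMware adapter.
echo rm -f /etc/udev/rules.d/70-persistent-net.rules
# Only if the VM was cloned: regenerate SSH host keys.
echo "rm /etc/ssh/ssh_host_* && dpkg-reconfigure openssh-server"
```

Remember to also tick the QEMU Agent checkbox in the VM's options in the Proxmox UI.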

Migrating Windows VMs

Windows VMs require additional attention because Windows does not include VirtIO drivers by default. Without these drivers, Windows cannot see VirtIO storage controllers or network adapters.

Option A: Install VirtIO drivers before migration (recommended). While the VM is still running on VMware, download the VirtIO drivers ISO from the Fedora project (https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/). Mount the ISO inside the Windows VM and run the installer. This pre-installs the drivers so that Windows can boot immediately on Proxmox with VirtIO devices.

Option B: Use IDE/SATA initially, then switch to VirtIO. Create the Proxmox VM with an IDE or SATA controller instead of VirtIO SCSI. Windows will boot using the compatible IDE/SATA driver. Then mount the VirtIO ISO, install the drivers, shut down the VM, change the controller to VirtIO SCSI, and restart. This approach is slower but works when you cannot modify the VM before migration.

Option C: Add a temporary VirtIO disk. Create the VM with the imported disk on IDE/SATA and add a small secondary disk (1 GB) on VirtIO SCSI. Boot the VM. Windows will detect the VirtIO device and prompt for drivers. Install the drivers from the VirtIO ISO. After installation, shut down, move the main disk to the VirtIO controller, remove the temporary disk, and restart.
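A hypothetical qm sequence for Option C, run on the Proxmox host; the VM ID, storage, and ISO name are placeholders, and the echoes make it a dry run:

```shell
set -eu
VMID=101
STORAGE="local-lvm"

# Boot the imported disk on SATA so Windows starts with in-box drivers.
echo qm set "$VMID" --sata0 "$STORAGE:vm-$VMID-disk-0"
# Add a throwaway 1 GB VirtIO SCSI disk to trigger driver detection.
echo qm set "$VMID" --scsihw virtio-scsi-single --scsi1 "$STORAGE:1"
# Mount the VirtIO driver ISO (previously uploaded to the local ISO store).
echo qm set "$VMID" --ide2 "local:iso/virtio-win.iso,media=cdrom"
```

After the drivers install, shut down, reattach the main disk as scsi0, and delete the temporary scsi1 disk.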

Windows-specific post-migration steps: Uninstall VMware Tools from Programs and Features, install the QEMU Guest Agent from the VirtIO ISO, verify Windows activation status (hardware changes may trigger reactivation), check that all device drivers are correctly installed in Device Manager, and verify that Windows recognizes all allocated memory and CPU cores.

Storage Architecture on Proxmox

Proxmox offers more storage flexibility than VMware. Here are the most common configurations:

ZFS (recommended for local storage): ZFS provides built-in RAID, checksumming, compression, snapshots, and replication. Use ZFS mirrors (equivalent to RAID 1) for small deployments or RAIDZ1/RAIDZ2 for larger arrays. ZFS needs adequate RAM for its ARC read cache; plan for roughly 1 GB of RAM per 1 TB of storage as a starting point, and more if you want a larger cache for performance.

LVM-Thin (good default for simple setups): LVM with thin provisioning supports over-provisioning and snapshots. It is the default storage type in Proxmox installations and works well for environments that do not need ZFS's advanced features.

Ceph (distributed storage for clusters): Ceph replaces vSAN for organizations that need distributed, replicated storage across cluster nodes. Ceph requires at least three nodes and a dedicated network for storage traffic. It provides redundancy without shared storage hardware but requires careful planning and adequate resources (dedicated SSDs for OSD journals, 10GbE minimum for Ceph traffic).

NFS/iSCSI (existing shared storage): If you have existing NAS or SAN infrastructure, Proxmox connects to NFS and iSCSI targets natively. This is often the fastest migration path because you can present the same storage to Proxmox without moving data.
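For example, attaching an existing NFS export as Proxmox storage is a single pvesm command; the server address, export path, and storage name below are hypothetical (echoed as a dry run):

```shell
set -eu
SERVER="192.168.1.50"     # hypothetical NAS address
EXPORT="/export/vmstore"  # hypothetical NFS export path

# Register the export as storage "nas-vmstore" for disk images and backups.
echo pvesm add nfs nas-vmstore --server "$SERVER" --export "$EXPORT" \
  --content images,backup
```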

Network Configuration

Proxmox networking uses Linux bridges by default, which provide equivalent functionality to VMware standard switches. For more advanced features:

Linux bridges: Created automatically during installation. Support VLAN tagging by enabling VLAN awareness on the bridge. Simple to configure and reliable for most environments.

Open vSwitch (OVS): Available as an alternative for environments that need distributed switching, port mirroring, LACP bonding, or integration with SDN controllers. Install with apt install openvswitch-switch and configure through the Proxmox web UI or CLI.

VLAN configuration: If your VMware environment uses VLANs, configure VLAN-aware bridges in Proxmox. Assign VLAN tags to individual VM network interfaces rather than creating separate bridges per VLAN.

Network bonding: Proxmox supports Linux bonding modes including active-backup (mode 1), LACP 802.3ad (mode 4), and balance-alb (mode 6). Configure bonding at the host level, then create bridges on top of the bond interface.
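Putting bonding, VLAN awareness, and bridging together, an /etc/network/interfaces sketch might look like the following; interface names and addresses are placeholders for your environment.

```
# LACP bond of two NICs, with a VLAN-aware bridge on top.
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With a VLAN-aware bridge in place, you assign the VLAN tag on each VM's network device rather than creating one bridge per VLAN.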

High Availability and Clustering

Proxmox includes clustering and high availability at no additional cost, functionality that requires vCenter and vSphere Enterprise on VMware.

Creating a cluster: Initialize the cluster on the first node with pvecm create my-cluster. Join additional nodes with pvecm add first-node-ip. Once clustered, all nodes are managed from a single web interface and VMs can be migrated between nodes.

High availability: Proxmox HA uses a fencing mechanism to ensure that a failed node's VMs are restarted on surviving nodes. Configure HA by adding VMs to HA groups. A minimum of three nodes is required for reliable HA operation (for quorum). The HA manager automatically monitors node health and restarts VMs on healthy nodes if a node fails.

Live migration: Running VMs can be migrated between cluster nodes with zero downtime. Live migration requires shared storage or local storage with replicated volumes. Memory contents are transferred incrementally while the VM continues to run, with a brief pause of typically under 100 milliseconds for the final switchover.
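The cluster commands above, plus a live migration, can be sketched as follows; the cluster name, node IP, node name, and VM ID are hypothetical, and the echoes keep it a dry run:

```shell
set -eu
CLUSTER="prod-cluster"
FIRST_NODE_IP="192.168.1.10"   # address of the first cluster node

echo pvecm create "$CLUSTER"    # run once, on the first node
echo pvecm add "$FIRST_NODE_IP" # run on each joining node
echo pvecm status               # verify quorum after joining
# Live-migrate running VM 100 to node pve2:
echo qm migrate 100 pve2 --online
```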

Backup and Disaster Recovery

Proxmox Backup Server (PBS) is a dedicated backup solution that integrates tightly with Proxmox VE. It supports incremental backups with deduplication, reducing storage requirements by 60 to 90 percent compared to full backups. PBS supports backup schedules via the Proxmox web UI, incremental backups with client-side deduplication, backup verification for integrity checking, encryption for off-site backup storage, and backup to local storage, NFS, or object storage.

For organizations migrating from VMware with Veeam, the transition to PBS is straightforward. Set up scheduled backups in PBS before decommissioning your VMware backup infrastructure to ensure continuous protection.
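Registering a PBS datastore as storage and running a one-off backup might look like this; the server address and datastore name are assumptions, and the echoes make it a dry run:

```shell
set -eu
PBS_HOST="10.0.0.20"   # hypothetical PBS server address
DATASTORE="backups"    # datastore name configured on the PBS side

# Register PBS as Proxmox VE storage (add --fingerprint from the PBS
# dashboard to pin the server certificate).
echo pvesm add pbs pbs-store --server "$PBS_HOST" --datastore "$DATASTORE" \
  --username "backup@pbs"
# One-off snapshot-mode backup of VM 100:
echo vzdump 100 --storage pbs-store --mode snapshot
```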

Performance Tuning

Proxmox VMs can match or exceed VMware performance when properly configured:

Use VirtIO drivers. VirtIO storage and network drivers provide near-native performance. SCSI or IDE emulation adds overhead and should only be used temporarily during migration.

Enable CPU host passthrough. Set the CPU type to "host" in VM configuration to expose all host CPU features to the VM. This is particularly important for workloads that benefit from AVX-512, AES-NI, or other advanced CPU instructions.

Configure IO threads. Enable IO threads for VirtIO SCSI disks to offload disk operations from the main VM CPU thread. This improves storage performance, especially for I/O-intensive workloads.

Use NUMA topology. For VMs on multi-socket servers, enable NUMA in the VM configuration and align VM memory and CPU assignments with physical NUMA nodes to avoid cross-node memory access penalties.

Storage optimization. Use discard/TRIM with thin-provisioned storage to reclaim unused space. Enable writeback caching for workloads that can tolerate a small data loss window. Use aio=io_uring for the best asynchronous I/O performance on modern kernels.
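These tunings map to a handful of qm set options; the VM ID and storage volume below are placeholders, echoed as a dry run:

```shell
set -eu
VMID=100

echo qm set "$VMID" --cpu host   # expose host CPU features (AVX, AES-NI, ...)
echo qm set "$VMID" --numa 1     # align vCPUs/memory with host NUMA nodes
# IO thread requires the single VirtIO SCSI controller; enable TRIM and io_uring.
echo qm set "$VMID" --scsihw virtio-scsi-single
echo qm set "$VMID" --scsi0 "local-lvm:vm-$VMID-disk-0,iothread=1,discard=on,aio=io_uring"
```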

Common Migration Issues and Solutions

Windows VM blue screens on boot. Usually caused by missing VirtIO storage drivers. Boot with IDE/SATA controller initially, install VirtIO drivers, then switch to VirtIO SCSI. If the blue screen persists, try disabling Secure Boot in the VM configuration.

Network not working after migration. Check that the VM's network configuration matches the new interface name. Remove any VMware-specific network configuration and verify that the Proxmox bridge has the correct VLAN tagging. For Windows, ensure the VirtIO network driver is installed.

VM fails to boot from imported disk. Verify the boot order in VM options. Check that the disk is properly attached, not listed as unused. For UEFI VMs, ensure the Proxmox VM is configured with OVMF firmware, not SeaBIOS.

Poor storage performance. Switch from IDE/SATA to VirtIO SCSI with IO threads enabled. Verify that discard is enabled for thin-provisioned storage. Check that the underlying storage is not the bottleneck with fio benchmarks on the host.
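A minimal fio run for that host-side check might look like this; the test file path and size are placeholders, and the echo prevents anything from being written:

```shell
set -eu
TESTFILE="/var/lib/vz/fio.test"   # placeholder path on the storage under test

# 60-second 4k random-read test with direct I/O, bypassing the page cache.
echo fio --name=randread --filename="$TESTFILE" --size=4G --bs=4k \
  --rw=randread --iodepth=32 --ioengine=libaio --direct=1 \
  --runtime=60 --time_based --group_reporting
```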

License activation failures. Windows and other licensed software may detect the hardware change and require reactivation. Contact the software vendor with your license keys. Microsoft typically allows reactivation after hardware changes via phone or online activation.

Time synchronization issues. Install the QEMU guest agent and configure NTP inside the VM. Proxmox's default clock source works well for most VMs, but Windows VMs may need the "localtime" hardware clock setting.

Cost Analysis: VMware vs. Proxmox

The following comparison uses a typical mid-market deployment of 10 hosts with 16 cores each:

Cost Category | VMware vSphere Foundation | Proxmox VE (with subscription)
Hypervisor licensing (annual) | $80,000-$150,000 | $0 (open source)
Enterprise subscription | Included in license | $5,100 ($510/socket x 10)
Centralized management | Included (vCenter) | Included (built-in)
Distributed storage | $30,000-$60,000 (vSAN) | $0 (Ceph included)
Backup solution | $15,000-$40,000 (Veeam) | $0 (PBS included)
HA and live migration | Included in Enterprise+ | Included in base
Year 1 total | $125,000-$250,000 | $5,100
3-year total | $375,000-$750,000 | $15,300

Even adding professional migration services at $20,000 to $50,000 and internal labor costs, the three-year savings from migrating to Proxmox typically exceed $300,000 for a 10-host environment. For smaller environments of 3 to 5 hosts, savings range from $50,000 to $150,000 over three years.

When Not to Migrate Away from VMware

Despite the compelling cost argument, Proxmox is not the right choice for every organization:

Heavy vSphere API dependencies. If your automation, monitoring, and management tools depend heavily on VMware's vSphere API, migration requires replacing or reconfiguring those integrations. Proxmox has a comprehensive REST API but it is not vSphere-compatible.

NSX networking requirements. Organizations deeply invested in VMware NSX for microsegmentation and network virtualization will find no direct equivalent in Proxmox. OVS provides some similar capabilities but does not match NSX's feature set.

Vendor support requirements. Some enterprise software vendors only certify their products on VMware. Check with your software vendors before migrating to ensure they support Proxmox/KVM environments.

VMware Cloud Foundation investment. Organizations running the full VCF stack with integrated NSX, vSAN, Aria, and Tanzu may find that the migration complexity and feature gaps outweigh the licensing savings, at least in the short term.

Risk tolerance. Proxmox is production-ready and used by thousands of organizations worldwide, but it does not have VMware's decades-long enterprise track record. Organizations in highly regulated industries may need to document their due diligence more carefully.

Frequently Asked Questions

Can I migrate VMs from VMware to Proxmox without downtime?

Near-zero downtime is achievable but not true zero downtime. The approach involves setting up Proxmox on separate hardware, migrating and testing non-critical VMs first, then scheduling brief maintenance windows (typically 15 to 30 minutes per VM) for production systems to perform the final disk sync, conversion, and cutover. For the most critical workloads, use DNS-based failover or load balancer switching to minimize perceived downtime.

Is Proxmox production-ready for enterprise use?

Yes. Proxmox VE is used in production by thousands of organizations including financial institutions, healthcare systems, government agencies, and technology companies. The KVM hypervisor that Proxmox is built on is part of the Linux kernel and has been hardened by thousands of developers over nearly two decades. With a paid subscription, Proxmox provides enterprise support with guaranteed response times.

How does Proxmox handle Windows licensing?

Windows licensing on Proxmox follows the same rules as any KVM-based hypervisor. Windows Server Datacenter licenses provide unlimited VM rights on licensed hosts. Windows Server Standard licenses cover two VMs per license. Windows Desktop VMs require separate licensing through Volume Licensing (VDA or SA). Your existing Windows licenses transfer to Proxmox, though you may need to reactivate after the hardware change.

What about VMware vSAN? What is the Proxmox equivalent?

Ceph is the closest equivalent to vSAN in the Proxmox ecosystem. Like vSAN, Ceph creates a distributed storage pool from local disks across cluster nodes. Ceph provides replication, erasure coding, and automatic rebalancing. It requires a minimum of three nodes and a dedicated 10GbE network for storage traffic. Unlike vSAN, Ceph is open source and included with Proxmox at no additional cost.

Can I mix VMware and Proxmox during migration?

Yes, and this is the recommended approach. Run VMware and Proxmox side by side during the migration period. Migrate VMs in batches, validate each batch thoroughly, and only decommission VMware hosts after all their VMs have been successfully migrated and tested on Proxmox. This parallel approach reduces risk and allows you to fall back to VMware for any VM that has issues on Proxmox.

How long does a typical migration take?

Migration timeline depends on the number and size of VMs. A small environment of 10 to 20 VMs can typically be migrated in 1 to 2 weeks including testing. A mid-size environment of 50 to 100 VMs takes 4 to 8 weeks. Large environments of 200 or more VMs may take 3 to 6 months with proper planning, testing, and phased rollout. The disk conversion step itself runs at 100 to 300 MB per second, so a 500 GB disk converts in roughly 30 minutes.

Does Proxmox support GPU passthrough for AI workloads?

Yes. Proxmox supports PCIe passthrough for NVIDIA, AMD, and Intel GPUs. This enables GPU-accelerated AI workloads, CUDA development, and machine learning inference inside VMs. Configuration requires IOMMU support in the host BIOS and kernel parameters. Petronella Technology Group uses Proxmox with GPU passthrough extensively for private AI deployments.

Petronella Technology Group has completed dozens of VMware-to-Proxmox migrations for businesses of all sizes. We handle the entire process from planning and hardware assessment through conversion, testing, and production cutover, with guaranteed minimal downtime. Contact us for a free migration assessment to get a cost analysis and migration plan tailored to your environment.

About the Author: Craig Petronella is the CEO of Petronella Technology Group, a cybersecurity and IT infrastructure firm in Raleigh, NC. With over 30 years of hands-on experience managing virtualization environments, Craig helps organizations modernize their infrastructure while reducing costs and improving security.
