How to Migrate from VMware ESXi to Proxmox VE (Step-by-Step)
Posted: March 5, 2026 in Technology.
Migrating from VMware ESXi to Proxmox VE is one of the most common infrastructure transitions happening in 2026. Whether you are escaping Broadcom's pricing changes or simply choosing to invest in open-source infrastructure, this step-by-step guide walks you through the entire process from pre-migration planning to post-migration validation. This guide is based on real migrations we have performed at Petronella Technology Group across environments ranging from single-host deployments to multi-node clusters.
Pre-Migration Planning
Successful migrations begin with thorough planning. Rushing the technical migration without proper preparation is the most common cause of extended downtime and post-migration issues.
Inventory Your VMware Environment
Document every virtual machine in your VMware environment. For each VM, record the operating system, number of vCPUs, RAM allocation, disk sizes and types (thin or thick provisioned), network configuration (VLANs, IP addresses, DNS), any VMware-specific features in use (VMware Tools version, vGPU, RDMs, NPIV), and the VM's role and criticality to business operations.
Export this inventory from vCenter using the VM summary report or use PowerCLI to generate a comprehensive CSV. This document becomes your migration checklist and validation reference.
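If you prefer the command line to PowerCLI, a quick first-pass inventory can also be pulled over SSH with ESXi's built-in vim-cmd. The sketch below is a minimal example: the host name is a placeholder, and the column positions assume the default `getallvms` layout, so verify against your own output.

```shell
# to_csv reads `vim-cmd vmsvc/getallvms` output on stdin and emits
# "vmid,name,guest-os" lines. It skips the header row and assumes the
# default column layout, where the File column ("[datastore] path.vmx")
# splits into two whitespace-separated fields, putting the guest OS in
# field 5.
to_csv() {
  awk 'NR > 1 { print $1 "," $2 "," $5 }'
}

# Typical use against a (hypothetical) host called esxi-host:
#   ssh root@esxi-host 'vim-cmd vmsvc/getallvms' | to_csv > vm-inventory.csv
```

This gives you VM IDs, names, and guest OS types; vCPU, RAM, and disk details still need to come from the vCenter report or PowerCLI.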
Assess VMware-Specific Dependencies
Some VMware features do not have direct equivalents in Proxmox and require workarounds or alternative approaches:
- VMware vGPU (NVIDIA GRID): Proxmox supports GPU passthrough (dedicating an entire GPU to a VM) but does not support vGPU partitioning natively. If you use vGPU for VDI, you will need to evaluate whether full GPU passthrough or alternative VDI solutions meet your needs.
- vSAN: If your storage depends on vSAN, you will need to plan for alternative storage. Proxmox supports Ceph for distributed storage, ZFS for local redundant storage, and traditional shared storage via NFS and iSCSI.
- NSX micro-segmentation: Proxmox uses standard Linux firewall rules (iptables/nftables) and supports Open vSwitch. You will need to recreate NSX security policies using these tools.
- Distributed Resource Scheduler: Proxmox has no full DRS equivalent; its HA manager offers only basic resource-aware placement when recovering VMs after a node failure. In practice, manual live migration and the HA manager cover most of the same use cases.
Plan Your Storage Architecture
Before migrating VMs, your Proxmox storage needs to be configured and tested. Common storage configurations include local ZFS pools for single-host or small cluster deployments, Ceph for distributed hyper-converged storage across three or more nodes, NFS shares for shared storage from a NAS or dedicated file server, and iSCSI targets for SAN-based storage.
For most VMware migrations, we recommend ZFS for environments with local storage and Ceph for environments that previously used vSAN. Both provide data redundancy, snapshots, and the performance characteristics needed for production workloads.
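As a sketch of the ZFS path, creating a mirrored pool and registering it with Proxmox takes two commands. The device names, pool name, and storage ID below are examples, not recommendations:

```shell
# Create a ZFS pool on two mirrored disks. The device names are examples --
# confirm them with `lsblk` first, because this destroys their contents.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Register the pool with Proxmox as a storage backend for VM disks
# and container root filesystems
pvesm add zfspool tank-vmdata --pool tank --content images,rootdir
```

Once registered, the storage appears in the web interface and can be selected as a target when creating or importing VM disks.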
Phase 1: Install and Configure Proxmox VE
Download the latest Proxmox VE ISO from the official Proxmox website. Boot your target server from the ISO and follow the installation wizard. Proxmox installs on a dedicated disk and configures the network interface during installation.
After installation, access the Proxmox web interface at https://your-server-ip:8006. Complete the initial configuration including setting up your storage pools, configuring network bridges for your VLANs, configuring DNS and NTP, and applying any Proxmox enterprise repository settings if you have a support subscription.
If you are building a cluster, install Proxmox on all nodes first, then join them into a cluster from the web interface or command line using pvecm.
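A minimal cluster build from the command line looks like this; the cluster name and IP address are placeholders:

```shell
# On the first node: create the cluster
pvecm create prod-cluster

# On each additional node: join the cluster using the first node's IP
pvecm add 192.0.2.10

# From any node: confirm all members are present and the cluster has quorum
pvecm status
```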
Phase 2: Export VMs from VMware
There are several methods to export VMs from VMware, depending on your environment:
Method A: OVF/OVA Export from vCenter
In the vSphere Client, right-click the VM and select Template > Export OVF Template. This creates an OVF descriptor file and one or more VMDK disk files. This method works well for VMs that can be powered off during export.
Method B: Direct VMDK Copy from ESXi Datastore
For VMs that you want to migrate without using the vSphere Client, you can SSH into the ESXi host and copy the VMDK files directly. Locate the VM's datastore path (typically /vmfs/volumes/datastore-name/vm-name/) and use SCP to transfer the VMDK files to your Proxmox host or an intermediate storage location.
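Note that on a VMFS datastore the VM's actual data usually lives in the `-flat.vmdk` file, with a small `.vmdk` descriptor alongside it; copy both. A sketch, with host, datastore, VM, and destination names as placeholders:

```shell
# Power off the VM first so the files are consistent, then copy both the
# descriptor and the flat data file (all names here are examples)
scp root@esxi-host:/vmfs/volumes/datastore1/web01/web01.vmdk \
    root@esxi-host:/vmfs/volumes/datastore1/web01/web01-flat.vmdk \
    /var/lib/vz/import/
```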
Method C: VMware Converter (for running VMs)
VMware vCenter Converter can create a copy of a running VM as a set of VMDK files. This is useful for VMs that cannot be powered off during the export phase, though the resulting copy should be validated carefully.
Phase 3: Convert and Import Disk Images
Proxmox uses QCOW2 or raw disk formats natively. VMware VMDK files need to be converted before they can be used in Proxmox. The qemu-img tool, which is pre-installed on Proxmox, handles this conversion.
To convert a VMDK to QCOW2, SSH into your Proxmox host and run: qemu-img convert -f vmdk -O qcow2 source-disk.vmdk destination-disk.qcow2
For ZFS or LVM storage backends, convert to raw format instead: qemu-img convert -f vmdk -O raw source-disk.vmdk destination-disk.raw
The conversion process duration depends on disk size and I/O speed. For large disks, plan for several minutes to hours per disk.
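To convert a whole directory of exported disks in one pass, with a progress readout per disk, a loop like this works; paths and naming are examples:

```shell
# Convert every descriptor VMDK in the current directory to QCOW2,
# printing progress (-p). The -flat files are skipped because qemu-img
# reads them automatically through the descriptor.
for disk in *.vmdk; do
  case "$disk" in *-flat.vmdk) continue ;; esac
  qemu-img convert -p -f vmdk -O qcow2 "$disk" "${disk%.vmdk}.qcow2"
done
```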
Phase 4: Create VMs in Proxmox
For each migrated VM, create a new VM in Proxmox with matching specifications. In the Proxmox web interface, click Create VM and configure the VM ID and name, OS type (match the guest operating system), CPU model (set to host for best performance, or specify a compatible model), memory allocation matching the VMware configuration, and network interface on the appropriate bridge and VLAN.
For the disk, you have two options. You can either create the VM with a placeholder disk and then replace it with your converted disk file, or create the VM without a disk and import the converted disk using the qm importdisk command: qm importdisk VM_ID /path/to/converted-disk.qcow2 storage-name
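The import-and-attach sequence can also be done entirely from the shell. Using a hypothetical VM ID of 120 and a storage named local-zfs:

```shell
# Import the converted disk into the storage (creates vm-120-disk-0 as an
# unattached disk on the VM)
qm importdisk 120 /root/import/web01.qcow2 local-zfs

# Attach it as the first SCSI disk on the VirtIO SCSI controller
qm set 120 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-120-disk-0

# Make it the boot disk
qm set 120 --boot order=scsi0
```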
After importing, attach the disk to the VM through the Hardware tab and set it as the boot disk in the Options tab.
Phase 5: Post-Import Configuration
Replace VMware Tools with QEMU Guest Agent
VMware Tools should be uninstalled from each migrated VM and replaced with the QEMU Guest Agent. For Linux VMs, install qemu-guest-agent via your package manager (apt, yum, or dnf). For Windows VMs, download and install the VirtIO drivers and QEMU Guest Agent from the virtio-win driver ISO (published by the Fedora Project and linked from the Proxmox wiki).
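On a Debian or Ubuntu guest, for example, the swap looks like this (the Linux build of VMware Tools is typically the open-vm-tools package; the VM ID in the last step is an example):

```shell
# Inside the migrated Linux guest: remove VMware's tools, install the agent
apt purge open-vm-tools
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host: enable the agent channel for the VM
qm set 120 --agent enabled=1
```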
The QEMU Guest Agent provides filesystem freeze for consistent snapshots, proper shutdown signaling, IP address reporting to the Proxmox interface, and time synchronization.
Update Virtual Hardware Drivers
For Windows VMs, you may need to install VirtIO drivers for the disk controller (VirtIO SCSI), network adapter (VirtIO Net), and balloon memory driver. The virtio-win driver ISO contains all the necessary Windows drivers. Attach this ISO to the VM and install the drivers through Device Manager.
For Linux VMs, VirtIO drivers are typically included in the kernel and will load automatically. Verify that the correct drivers are active after the first boot.
Adjust Boot Configuration
Some VMs may not boot on the first attempt due to differences in virtual hardware. Common fixes include changing the disk controller from IDE to VirtIO SCSI (with appropriate driver installation), switching the network adapter type, adjusting the BIOS/UEFI boot settings, and regenerating the initramfs on Linux VMs if the root disk controller changed.
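On a Debian-family Linux guest that cannot find its root disk after the controller change, making sure the VirtIO modules are baked into the initramfs usually resolves it. A sketch; adjust for your distribution:

```shell
# Add the VirtIO storage drivers to the initramfs module list, then rebuild
cat >> /etc/initramfs-tools/modules <<'EOF'
virtio_pci
virtio_scsi
virtio_blk
EOF
update-initramfs -u

# RHEL-family equivalent:
#   dracut -f --regenerate-all
```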
Phase 6: Validation and Testing
After each VM is migrated and booting, perform thorough validation. Verify that the operating system boots cleanly without errors. Confirm network connectivity, DNS resolution, and access to required services. Test application functionality end-to-end. Verify backup integration with Proxmox Backup Server. Check that monitoring agents are reporting correctly. Validate that any scheduled tasks or cron jobs are executing properly.
Run the migrated VMs in parallel with the original VMware VMs (if possible) for a validation period before decommissioning the VMware environment.
Phase 7: Configure Proxmox Features
Once your VMs are validated, take advantage of Proxmox features that may not have been available in your VMware environment. Set up Proxmox Backup Server for automated, deduplicated backups. Configure HA for critical VMs so they automatically restart on surviving nodes during a host failure. Set up Ceph or ZFS replication for additional data protection. Configure the Proxmox firewall for network security policies.
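Two of those features take a single command each to try out; the VM ID and backup storage name below are examples:

```shell
# Put VM 120 under HA management so it restarts on a surviving node
ha-manager add vm:120 --state started

# Run an ad-hoc snapshot-mode backup to a Proxmox Backup Server storage
vzdump 120 --storage pbs-backup --mode snapshot
```

Scheduled backup jobs and HA groups are then configured through the Datacenter section of the web interface.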
Common Migration Challenges
Through our migration experience at Petronella Technology Group, we have identified the most common challenges:
Windows activation: Changing virtual hardware may trigger Windows re-activation. Ensure you have license keys accessible before migration.
Network configuration: Static IP configurations reference specific network adapter names that change between VMware and Proxmox. Update these configurations before or immediately after the first boot.
Application licensing: Some software licenses are tied to hardware identifiers that change during migration. Identify these applications in advance and coordinate with vendors.
Performance tuning: Default Proxmox VM settings may not match the optimized VMware configuration. Test I/O performance and adjust disk cache settings (writeback, writethrough, or none) based on your workload.
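The network-name problem above, for example, typically shows up on Linux as an interface renamed from something like ens192 (VMware vmxnet3) to ens18 (VirtIO). A quick fix from the VM console, assuming a netplan-based guest; the interface names and file path are hypothetical:

```shell
# Find the name the VirtIO NIC received
ip -br link

# Point the existing static configuration at the new name, then apply
sed -i 's/ens192/ens18/g' /etc/netplan/01-netcfg.yaml
netplan apply
```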
Getting Professional Migration Support
While the technical steps are straightforward, migrating production environments requires careful planning, testing, and execution to avoid extended downtime. At Petronella Technology Group, we offer comprehensive VMware to Proxmox migration services including environment assessment, migration planning, hands-on migration execution, validation testing, and post-migration support. We run Proxmox on our own production infrastructure, so every recommendation we make is backed by our own operational experience.