Migrate VMware ESXi to Proxmox VE: 2026 Step-by-Step
Posted to Technology.

Broadcom closed the VMware acquisition on November 22, 2023, and the price shock started landing in 2024 renewal quotes. By the time 2026 renewals came around, small and mid-sized businesses in Raleigh, Durham, Cary, and the wider Triangle were showing us quotes at two, five, sometimes ten times the price they paid for the same ESXi footprint the year before. A handful of our managed IT clients had already started the move to Proxmox VE, and the phone has not stopped ringing since.
This is the migration guide we wish existed when we started shepherding VMware refugees off ESXi. We have run this path for healthcare clients living under HIPAA, for CMMC-tracking defense contractors, and for plain old small business shops that were one renewal cycle away from closing the virtualization line item entirely. The playbook that follows is the exact sequence our team at Petronella Technology Group walks through, with the commands that actually work in Proxmox VE 8 and 9, the gotchas we keep hitting, and the cost math you can take back to your CFO.
If you are staring at a Broadcom renewal quote and wondering whether Proxmox can really replace vCenter, vSphere, vMotion, Veeam, and vSwitch in one weekend, you are asking the right question. The short answer is yes, but you need a plan, a parallel run, and realistic expectations. Here is how we do it.
Why this migration, and why now
VMware's perpetual license program is gone. Broadcom's subscription-only model bundled the product lineup into a handful of SKUs (VMware Cloud Foundation, VMware vSphere Foundation, VMware vSphere Enterprise Plus, and VMware vSphere Standard, with most smaller shops funneled toward VVF or VCF), then hiked per-core pricing and set a minimum core count per CPU. The list of casualties is long: Workstation and Fusion made free for personal use, the partner program reshuffled with many small resellers cut loose, End User Computing divested to KKR and rebranded Omnissa, Carbon Black folded into Broadcom's Enterprise Security Group alongside Symantec, and the free ESXi hypervisor pulled from public downloads in February 2024, then quietly returned in limited form in April 2025.
For a two-socket host that used to cost roughly $700 in vSphere Essentials Plus per year, we are now seeing 16-core minimum subscription pricing that lands north of $3,500 per socket annually, depending on the SKU and channel. For a three-node cluster with 64 cores of compute, it is easy to cross $25,000 per year in software alone, and that does not count vSAN, Aria, or NSX. The rule of thumb most of our clients report: the new quote is at least triple the old one.
Proxmox VE is the leading open-source replacement. It is built on Debian, uses KVM for the hypervisor and LXC for containers, includes a clustering stack based on Corosync, integrates Ceph for hyperconverged storage, and supports ZFS, LVM-thin, and directory-based storage out of the box. Support subscriptions from Proxmox Server Solutions GmbH start at about 115 euros per CPU per year for Community, 355 for Basic, 525 for Standard, and 1,060 for Premium (verify current pricing at proxmox.com). No core counting. No minimums. You can run it unlicensed for free with no feature limits; a subscription only enables the enterprise repository for production-stable updates.
That cost gap is why the migration is happening. The rest of this guide is how you actually pull it off.
Part 1: Pre-migration inventory and planning
You cannot migrate what you have not inventoried. Before anything else, open vCenter and export a full inventory. If vCenter is already gone because you are on free ESXi, walk each host in the web UI and capture the same fields. At minimum, you need:
- Every VM: name, OS, guest OS version, CPU count, RAM, disk count, disk size per volume, thin or thick provisioned, disk format (VMDK flat, sparse, or stream-optimized), networks attached, VLAN tag, MAC address if you have pinned DHCP reservations, boot type (BIOS or UEFI), and whether VMware Tools are installed.
- Every datastore: VMFS version, free space, block size, underlying LUN or local disk, replication relationships (SRM pairs, array-based replication), and snapshots.
- Every network: vSwitch name, port group name, VLAN ID, uplink assignment, security policies, NIC teaming mode, and whether it is a Standard switch or Distributed switch.
- Cluster features in use: HA, DRS, Fault Tolerance, vMotion, storage vMotion, DPM, Storage DRS, EVC mode, vSAN, NSX, vRealize or Aria integrations, and Site Recovery Manager.
- Backup stack: Veeam, Commvault, Nakivo, Altaro, VMware's own vDP if you still have it lurking, backup targets, and retention.
- Identity integration: vCenter SSO sources, AD joins, LDAP configuration, certificate sources, and any MFA wiring.
Dump this inventory into a spreadsheet. RVTools (now Dell RVTools) is the fastest way to pull a complete VMware environment snapshot into a workbook. Run it, save the XLSX, and keep it as your source of truth through the cutover.
While you are at it, answer the sizing questions for the new Proxmox environment. How many nodes? Do you need Ceph for hyperconverged storage, or are you landing on an external NFS, iSCSI, or SAN target that Proxmox can import as a shared LVM or directory? Are you planning ZFS local storage with replication between two or three nodes (the "poor man's vSAN" that actually works for SMB workloads)? Do you want a quorum device for a two-node cluster, or is a third lightweight node cheaper than the QDevice hassle?
Our rule of thumb: if the VMware cluster was three nodes running vSAN, the right Proxmox landing zone is three nodes running Ceph with 10 GbE or 25 GbE interconnect. If the cluster was two hosts with a shared SAN, match that with two Proxmox nodes plus a QDevice on a small Raspberry Pi or VM elsewhere, and import the SAN LUNs as shared LVM. If the shop was a single ESXi with local storage, a single Proxmox host with ZFS RAID10 is the direct replacement, and you gain ZFS snapshots and send/receive replication for free.
Last planning item: a rollback plan. Do not decommission ESXi until the Proxmox side has run clean for a week minimum and backups have been verified. We keep ESXi hosts powered on with the migrated VMs powered off for 14 to 30 days as a safety net. License bill gets one more month of pain in exchange for an easy rollback. Worth it.

Part 2: Build the Proxmox VE target
Download the Proxmox VE ISO from proxmox.com, write it to USB with balenaEtcher or dd, and install on the target hardware. Pick your filesystem during install: ZFS RAID for data integrity and built-in snapshots, LVM for maximum compatibility with older hardware, or ext4 for absolute simplicity on single-disk lab boxes. For production, ZFS is almost always the right call unless the underlying disks are hardware-RAIDed behind a controller that does not pass SMART through.
After install, the first three things we change on every host:
- Repos. The default is the enterprise repo, which requires a subscription. If you are not subscribed yet, switch to the no-subscription repo. Edit /etc/apt/sources.list.d/pve-enterprise.list and comment out the enterprise line, then add a file /etc/apt/sources.list.d/pve-no-subscription.list with deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription (adjust bookworm to trixie for PVE 9). Do the same for the Ceph repo if you are using Ceph. Then apt update && apt dist-upgrade -y && reboot.
- Subscription nag. Optional but nice. The "no valid subscription" popup in the web UI can be patched out by editing /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js and changing the if (res === null || res === undefined || !res || res.data.status.toLowerCase() !== 'active') check. We document this for clients, but we also recommend buying at least the Community subscription once the environment is in production, because it funds the project and unlocks the enterprise repo.
- Network. Configure your bridges. vmbr0 is the management bridge by default, bound to one physical NIC. If you came from VMware with a vSwitch carrying multiple VLANs, you want vmbr0 with bridge-vlan-aware yes set in /etc/network/interfaces, and then each VM's virtual NIC is tagged with the VLAN it belongs to. If your VMware shop used a Distributed vSwitch with complicated LACP teaming, replicate that with a bond0 interface (mode 802.3ad) on the Proxmox side, then bridge vmbr0 on top.
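As a sketch, a bonded, VLAN-aware setup might look like this in /etc/network/interfaces. The NIC names (eno1, eno2) and the 10.0.10.0/24 management addressing are placeholders for your environment:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Apply with ifreload -a, and verify you still have management access before walking away from the console.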
Once you have a working node, cluster the second and third nodes with pvecm create <clustername> on node 1 and pvecm add <ip-of-node-1> on nodes 2 and 3. Confirm with pvecm status that quorum is established and all nodes are online. If you are running Ceph, initialize it through the web UI under Datacenter > Ceph, create your monitors on each node, create OSDs on each data disk, and create a pool sized to your replica count (3/2 is the default and the right choice for three nodes).
Part 3: Pick a migration path per VM
There is no single "convert my whole vCenter to Proxmox" button. There are multiple paths, and the right choice depends on the VM. We use three in practice.
Path A: The ESXi import wizard (PVE 8.2 and newer)
Proxmox VE 8.2 shipped in April 2024 with a native ESXi import storage type. This is now the default path for most VMs. It is essentially a thin wrapper around the ESXi API that lets Proxmox reach into a running or stopped ESXi host, see the datastore contents, and pull a VM directly onto a Proxmox storage target while converting the disks on the fly.
To use it, go to Datacenter > Storage > Add > ESXi in the web UI. Supply the ESXi hostname or IP, a username, and a password. The connection is skip-certificate-check by default because most ESXi installs ship self-signed certs. Click Add and the new storage target appears.
Under that storage target you can now browse the VMs on the ESXi host. Right-click a VM and choose Import. The wizard asks for the target node, the target storage for the disks, the VM ID you want on the Proxmox side, which format to use for the resulting disks (qcow2 for thin and snapshotable, raw for performance, or the native ZFS zvol if you picked ZFS storage), and the network mapping. Hit Import and Proxmox does the rest.
Important notes on this path:
- Power state matters. The wizard wants the source VM powered off for a consistent copy, so plan the shutdown into the maintenance window. For VMs with small disks or low change rates the cold window is short anyway, and the wizard's live-import option can boot the VM on the Proxmox side while the remaining data streams in.
- Bandwidth matters. A 500 GB VM over 1 GbE takes about 90 minutes just to move the bits. Do the math before you schedule the maintenance window.
- The wizard pulls the VMware configuration and tries to translate it, but it will not translate everything. CPU type, hardware version, BIOS vs UEFI, and machine type (q35 vs i440fx) sometimes need manual tweaking after import. We always edit the imported VM's config before first boot.
- Windows guests will not boot correctly on the first try unless you pre-install VirtIO drivers or attach the disk on a controller Windows already knows. More on that below.
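The bandwidth point is worth a quick sanity check before you book the window. A minimal sketch (the 70% effective-throughput factor is our assumption for a busy migration link, not a measured constant; tune it to what iperf3 shows you):

```shell
# Back-of-the-envelope copy time: disk size in GB, link speed in Gbit/s.
# Assumes ~70% effective throughput (an assumption, not a guarantee).
estimate_minutes() {
  local size_gb=$1 link_gbps=$2
  awk -v s="$size_gb" -v l="$link_gbps" \
    'BEGIN { printf "%d\n", s * 1000 / (l * 125 * 0.70) / 60 }'
}
estimate_minutes 500 1    # 500 GB over 1 GbE: ~95 minutes
estimate_minutes 500 10   # the same disk over 10 GbE: ~9 minutes
```

Multiply by the VM count in each wave and you have a realistic schedule instead of an optimistic one.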
Path B: OVF export and qm importovf
For VMs that do not live on an accessible ESXi host, or for shops that have already moved VMs to OVF templates, qm importovf is the classic path. On the Proxmox host:
qm importovf 105 /mnt/pve/nfs-staging/my-windows-vm.ovf local-zfs --format raw
That command creates VM 105 from the OVF manifest, landing disks on the local-zfs storage with raw format. You still need to edit the resulting VM after import to adjust BIOS/UEFI, CPU type, and network bridges. The OVF export from vCenter or Workstation gives you a tar-like OVA or a directory of OVF plus VMDK files. Either works.
Path C: qemu-img convert and qm importdisk
When you have raw VMDK files and no running ESXi, this is the workhorse. Copy the VMDK files off the datastore to NFS, USB, or a staging directory on the Proxmox host. Then convert to qcow2 or raw:
qemu-img convert -f vmdk -O qcow2 /staging/WebServer-flat.vmdk /staging/WebServer.qcow2
Or for raw (fastest, but consumes full allocation):
qemu-img convert -p -f vmdk -O raw /staging/WebServer-flat.vmdk /staging/WebServer.raw
The -p flag shows a progress bar. Grab coffee for a 500 GB disk.
Once converted, create a shell VM in the Proxmox UI with the right CPU, RAM, and network, but no disks. Then import the converted disk to that VM:
qm importdisk 110 /staging/WebServer.qcow2 local-zfs --format qcow2
That imports the disk as an "unused disk 0" slot on VM 110. Go to the VM's hardware tab, double-click the unused disk, attach it as scsi0 or virtio0 (virtio is faster, scsi is safer for guests without VirtIO drivers pre-installed), then set the VM's boot order to boot from that disk.
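After the attach, the relevant lines of the VM config end up looking something like this. This is a sketch of /etc/pve/qemu-server/110.conf with hypothetical values; the MAC uses the Proxmox BC:24:11 prefix and the storage name matches the import command above:

```
boot: order=scsi0
cores: 4
memory: 8192
net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,tag=100
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-110-disk-0,discard=on
```

If the VM will not boot, this file is the first place to look: wrong boot order and a disk left in the "unused" state are the two most common misses.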
A quick note on which command to use. Older Proxmox documentation described qm importdisk with a different workflow, but the current behavior (PVE 7.4 and newer) works as described above: create the shell VM first, then import. In PVE 8.x and 9.x you can also do this entirely in the web UI via the import options added to the Hardware tab, which is cleaner for admins who live in the web console.
Part 4: Fix the Windows guest before first boot
The number one reason a migrated Windows VM will not boot on Proxmox is that the guest does not have VirtIO drivers installed and you set its primary disk to VirtIO or virtio-scsi. The fix is to either:
Option 1 (easiest): Import the disk as SATA or IDE for first boot. Windows has SATA and IDE drivers baked in. Attach the imported disk as SATA, boot Windows once, install VirtIO drivers from the ISO, shut down, change the disk type to SCSI with VirtIO SCSI controller, and reboot. This is the safe default.
Option 2 (slicker): Pre-install VirtIO drivers on the source VM before migration. Download the stable VirtIO ISO from fedorapeople.org (the canonical source maintained by Red Hat), mount it inside the Windows VM while it is still on ESXi, install the drivers with virtio-win-guest-tools.msi, then shut down and migrate. When the VM boots on Proxmox with a VirtIO SCSI disk, the driver is already there.
Either way, after the Windows guest is up and happy on Proxmox, uninstall VMware Tools. Control Panel, Programs and Features, VMware Tools, Uninstall. Reboot. Then install the Proxmox QEMU guest agent from the same VirtIO ISO (look for qemu-ga-x64.msi or qemu-ga-x86.msi). That gives you graceful shutdown, IP reporting in the Proxmox UI, and filesystem-consistent snapshots.
Linux guests are less fussy. Most modern distributions (RHEL/AlmaLinux/Rocky 8+, Ubuntu 20.04+, Debian 11+) have VirtIO drivers in the kernel already. Install qemu-guest-agent via apt or dnf, enable it, and the VM will Just Work. Older Linux guests occasionally need an initramfs rebuild to include virtio_scsi: dracut -f on RHEL-family or update-initramfs -u -k all on Debian-family does the trick.
Part 5: Networks, MAC addresses, and DHCP
When Proxmox boots a migrated VM, it assigns a new randomly-generated MAC to each virtual NIC. That breaks your DHCP reservations. If you have VMs pinned to IPs via MAC reservations on your DHCP server, you have two choices:
- Preserve the original MAC. Edit the VM config at /etc/pve/qemu-server/<vmid>.conf and set net0: virtio=00:50:56:AA:BB:CC,bridge=vmbr0,tag=100. The 00:50:56 OUI is the VMware prefix from the original, and it is fine to keep. Proxmox accepts any valid MAC.
- Update DHCP reservations to the new Proxmox-generated MACs. Cleaner long-term, but a lot of churn if you are moving dozens of VMs.
For VLAN tagging, remember that VMware port groups hid the VLAN ID inside the vSwitch. On Proxmox with a VLAN-aware bridge, each VM NIC has an explicit tag= in the config. Inventory your port groups before migration and map each one to a VLAN tag on the new bridge.
Part 6: Storage migration strategies
If you came from VMFS on a SAN and are moving to Ceph on local disks, you have two options. Option A: copy the disks over the network during the VM-by-VM migration (this is what Path A above does). Option B: stand up Proxmox first, present the old SAN LUNs to Proxmox as shared LVM (Proxmox speaks iSCSI, NFS, and FC just fine), import the VMs in place, then storage-migrate them to Ceph afterward inside Proxmox.
Option B is often faster and safer because you are only doing the network-heavy storage move once, after the VMs are already running on the new hypervisor. The qm move-disk command in the Proxmox CLI does this live, with no downtime for the guest.
A third path, for shops with the budget and the change window, is to buy a completely new set of disks for Proxmox, set up Ceph or ZFS fresh, and migrate VMs over the network from old-SAN-on-ESXi to new-disks-on-Proxmox. This is the cleanest cutover path and is what we recommend for CMMC environments where you want a bright-line separation between the old and new infrastructure anyway.
Part 7: High availability and clustering
Proxmox HA lives in the Datacenter view under HA. Add VMs to HA groups, set priorities, define which nodes are preferred, and the cluster will automatically restart VMs on surviving nodes if a host fails. The backend is a combination of Corosync (for cluster membership and quorum) and pve-ha-lrm / pve-ha-crm services on each node.
For two-node clusters, you need a QDevice to avoid split-brain. A Raspberry Pi 4 running the corosync-qnetd package is $70 of insurance against a catastrophic outage and we install one on every two-node client we run.
For storage-level replication (the Proxmox counterpart to vSphere Replication and SRM-style disaster recovery), the ZFS-based pvesr system replicates VM disks on ZFS between nodes on a schedule you set. Pair it with HA and you have an SMB-grade equivalent of vSphere HA plus VMware Replication at zero license cost. For synchronous replication across sites, Ceph in a stretched cluster or external block-level replication (DRBD, continuous ZFS send/receive, or SAN-level sync) covers that case.
Part 8: Backups with Proxmox Backup Server
Veeam support for Proxmox is still young: Veeam announced PVE support in Veeam Backup & Replication 12.2, shipped in late 2024, and it is maturing fast, so check current compatibility before relying on it. For now, the native answer is Proxmox Backup Server, a separate free download from proxmox.com that installs on bare metal or in a VM of its own.
PBS does deduplicated, incremental-forever backups of Proxmox VMs at the disk level. You point your Proxmox cluster at a PBS server, define a datastore on PBS, set a backup schedule per VM or per group, and PBS handles the rest. Retention, pruning, verification, and garbage collection are all built in. Restores are fast because PBS uses content-addressed chunked storage.
For offsite, PBS supports syncing a datastore to a remote PBS instance through scheduled sync jobs over its HTTPS API. So the pattern is: primary PBS in the same datacenter as the Proxmox cluster, secondary PBS at a remote site (a colo, a second office, or a cloud VM with block storage), sync nightly. That gives you the 3-2-1 backup profile without buying Veeam licenses.
If you are CMMC-tracking or HIPAA-regulated and need encrypted backup at rest plus offsite replication plus verified restore testing, PBS does all three. Turn on client-side encryption, document the encryption key escrow, and schedule verification jobs on the PBS datastore so every backup gets checked on a cadence you can show an auditor.
Part 9: Cutover weekend playbook
Here is the weekend playbook we have run for a dozen clients. Adjust node counts and VM counts to your shop.
Friday evening:
- Notify users of the maintenance window.
- Final Veeam or PBS backup of every VM. Verify the backup.
- Power-down non-critical VMs first (dev, test, lab).
- Run qm importovf or the ESXi import wizard for each powered-down VM, landing them on Proxmox.
- For each imported VM, edit the config: CPU type to host, BIOS to match the source (almost always OVMF/UEFI for Windows Server 2019+, SeaBIOS for older), machine type q35 for UEFI or i440fx for legacy BIOS, network to the correct bridge and VLAN tag.
- Attach the primary disk as SATA for first boot if Windows, VirtIO SCSI if Linux.
- Boot each VM. Verify guest OS health. Install or update guest agent. Change disk type to VirtIO SCSI if you started on SATA, reboot, verify.
- Test login, test application functionality, test network reachability from an external point.
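For a UEFI Windows guest at this stage, the per-VM config edits above translate to something like the following (VM ID, storage name, MAC, and disk size are all placeholders), with the disk on SATA until VirtIO drivers are in:

```
# Sketch of /etc/pve/qemu-server/120.conf before first boot
bios: ovmf
machine: q35
cpu: host
efidisk0: local-zfs:vm-120-disk-1,efitype=4m
sata0: local-zfs:vm-120-disk-0,size=120G
net0: virtio=BC:24:11:22:33:44,bridge=vmbr0,tag=20
boot: order=sata0
```

Once VirtIO drivers are installed in the guest, the sata0 line becomes scsi0 (with scsihw: virtio-scsi-single) and the boot order follows it.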
Saturday morning:
- Power-down production VMs in dependency order (edge first, core last, or the opposite depending on your stack).
- Repeat the import, config-edit, first-boot, and verification steps above for the production VMs.
- Stand up any infrastructure services on Proxmox (domain controllers, DNS, DHCP, file servers) first, validate, then move the app layer.
- Update any DHCP reservations or DNS records that reference VM MACs or IPs.
- Test HA by powering off one Proxmox node intentionally, watch HA restart a test VM on another node, power the dead node back on.
Saturday afternoon and evening:
- User acceptance testing. Have the client's key users log in and exercise each application.
- Keep ESXi powered on, VMs powered off, for rollback.
- Run the first PBS backup of the whole Proxmox cluster. Verify it.
Sunday:
- Let it soak. Monitor. Fix escalations.
- Do not decommission ESXi yet.
Week 2 through Week 4:
- Confirm application stability, backup success, HA failover behavior.
- Verify offsite backup replication.
- Decide whether to redeploy ESXi hardware as additional Proxmox nodes or return it for trade-in.
Part 10: What the math actually looks like
Here is a real sanitized example. A Raleigh-area client running three Dell R740 hosts with 64 cores total, 384 GB RAM each, and a shared PowerStore SAN received a Broadcom quote at 2024 renewal for approximately $42,000 per year for vSphere Foundation and vSAN. The same shop, post-migration, runs:
- Proxmox VE on the same three Dell R740 hosts, no license fee (or about $1,100/year for three Community subscriptions if they want the enterprise repo)
- Ceph on fresh local NVMe disks in each node, replacing the PowerStore
- Proxmox Backup Server in a VM on a fourth cheap node, replacing Veeam
- Our managed IT services wrapper for monitoring, patching, and on-call response
Total software and subscription spend post-migration: under $2,000 per year. That is $40,000 per year back in the client's P&L. They paid for the migration project in 30 days of savings.
The savings profile gets even more dramatic for smaller shops. A two-host VMware Essentials Plus environment that cost $1,500 per year in renewal becomes $0 in software costs on Proxmox, or roughly 230 euros per year if they add two Community subscriptions. For a client staring at a $5,000 Broadcom-era quote on that same footprint, the math is obvious.
Your numbers will vary. Get a real quote from Broadcom for your renewal, get a real quote from Proxmox Server Solutions for subscriptions if you want them, and compare to the cost of the migration engagement itself. In every case we have run, the payback period has been under a year, and usually under six months.
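The payback arithmetic is simple enough to script. The figures below are the sanitized example from above plus a hypothetical $20,000 migration project cost, not quotes you should reuse:

```shell
# Months until the migration project pays for itself out of license savings.
payback_months() {
  local vmware_annual=$1 proxmox_annual=$2 project_cost=$3
  awk -v v="$vmware_annual" -v p="$proxmox_annual" -v c="$project_cost" \
    'BEGIN { printf "%.1f\n", c / ((v - p) / 12) }'
}
payback_months 42000 2000 20000   # hypothetical $20k project: 6.0 months
```

Run it with your own Broadcom quote, your Proxmox subscription total, and the migration quote, and you have the one number your CFO actually wants.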
Pitfalls we have hit (so you do not have to)
Thin-provisioned VMDK growth. A VMware disk that reports 120 GB allocated but only 40 GB used on VMFS can balloon during conversion if you pick raw format on the Proxmox side. Use qcow2 or a ZFS zvol with sparse=1 to preserve the thin-provisioned behavior.
UEFI firmware variables. Windows VMs booting UEFI from VMware carry their boot entries and Secure Boot state in NVRAM. On Proxmox, you need to attach an EFI Disk (a small volume, usually 4 MB, on your VM storage) to hold those variables. The symptom of missing this step is a BSOD on first boot or a "Boot Device Not Found" screen. Fix: edit the VM, add an EFI Disk, boot. Windows will re-create the boot entry on its own most of the time.
License activation on Windows guests. The hypervisor change triggers hardware-change re-activation in Windows and in some Microsoft Office installs. Have your KMS server or MAK keys ready, or budget a support call to Microsoft.
Disk controller rename breaks Linux fstab. If your Linux VM had /dev/sdb mounted in fstab and the Proxmox import names it /dev/vdb under VirtIO, the mount fails at boot and you drop to emergency mode. Fix: use UUIDs in fstab (blkid to look them up), or boot from SATA, rewrite fstab with UUIDs, then switch to VirtIO.
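The before-and-after for that fstab fix looks like this (the UUID here is a made-up placeholder; look up the real one with blkid):

```
# Before: a device name that only existed under the VMware controller
/dev/sdb1  /data  ext4  defaults  0 2

# After: a stable UUID reference that survives any controller change
UUID=3f1b2c4d-aaaa-bbbb-cccc-0123456789ab  /data  ext4  defaults  0 2
```

Doing this on every Linux VM before migration, while it is still running happily on ESXi, is cheaper than doing it from emergency mode afterward.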
CPU feature mismatch in clusters. If the Proxmox cluster has mixed CPU generations (say, Skylake and Cascade Lake), setting VM CPU type to host breaks live migration between mismatched nodes. Use a common baseline like x86-64-v2-AES or set EVC-style baselines per-VM. Proxmox does not have a single-checkbox EVC equivalent yet.
Guest agent not enabling. If qm agent <vmid> ping returns an error after install, check that the guest agent service started on the guest, and check that the VM has agent: 1 in its config. Both have to be true.
A note on what Proxmox does not replace (yet)
Proxmox is outstanding at the hypervisor, cluster, and backup layer. It does not replace NSX for advanced software-defined networking. It does not replace Aria Operations or Aria Automation for large-scale lifecycle orchestration. It does not replace Horizon VDI (though there are open-source pairings with FreeIPA and Guacamole that cover simpler VDI needs). If your environment depends on any of those, you have a separate evaluation to run before committing to a full migration.
For the 90% of SMB and mid-market virtualization footprints that are running vSphere plus Veeam plus a SAN, Proxmox replaces the full stack comfortably.
How Petronella Technology Group runs these projects
We have handled VMware-to-Proxmox migrations for managed services clients across the Raleigh, Durham, Cary, Apex, and greater Triangle region over the past two years, spanning healthcare practices, small defense contractors, engineering firms, and general SMB shops. Our typical engagement covers the full lifecycle: inventory and sizing, Proxmox cluster deployment, VM-by-VM migration with a parallel run, Windows and Linux guest conversion, backup system swap to Proxmox Backup Server, network and firewall reconfiguration, cutover weekend, hypercare, and documentation handoff.
We also run Proxmox as the hypervisor layer for our own internal private AI cluster, so our team lives inside PVE 8 and 9 daily and sees the edge cases before most shops hit them. If you are running CMMC-tracked or HIPAA-regulated workloads, we can wrap the migration with compliance-grade documentation, encrypted backups, and evidence artifacts that hold up in an audit.
If you are staring at a Broadcom renewal quote and want a second opinion, or if you have already decided to move and want a partner to run the migration safely, call (919) 348-4912 or use the form at /contact-us/. Mention that you came in from the Proxmox migration guide so we can cut straight to the sizing conversation.
Related reading on Petronella Technology Group
- Managed IT services: the ongoing managed-services wrap that includes Proxmox patching, monitoring, and on-call response after the migration
- Private AI cluster solutions: what we run on top of Proxmox for clients who want on-premise AI without pushing data to a public cloud
- Cyber security services: compliance and threat-response services that pair naturally with a Proxmox-hosted infrastructure
One more closing thought
Migrating off ESXi is not a dark art. It is a methodical project with a well-worn path now that thousands of shops have walked it. The tools are mature, the commands are documented, and the savings are real. What trips people up is almost always the planning phase: not inventorying thoroughly enough, not sizing the Proxmox side correctly, or trying to cut over all at once instead of in stages. Get those three right and the rest is execution.
If you want the short version, here it is: inventory everything, build Proxmox fresh alongside ESXi, migrate VMs in waves using the ESXi import wizard or qemu-img convert, pre-install VirtIO drivers on Windows guests, swap Veeam for Proxmox Backup Server, run a parallel window of two to four weeks, then decommission. A team of two can typically turn a three-host VMware cluster into a three-host Proxmox cluster over two weekends with a week of prep in between, and walk away with a tenth of the annual software bill.
Good luck out there. And if you want a hand, you know who to call.