DGX Spark Cluster Cables in Stock: 0.5m QSFP112 400G DAC for Every GB10 Workstation, $159 Shipped
Posted: May 7, 2026 to AI.
If you bought an NVIDIA DGX Spark or any of its GB10 cousins (ASUS Ascent GX10, Dell Pro Max with GB10, MSI EdgeXpert MS-C931, HP ZGX Nano, Lenovo ThinkStation PGX, Acer Veriton GN100, Gigabyte AI TOP ATOM), you already discovered the punchline: the petaflop sitting on your desk only becomes a real cluster when you can physically connect it to another one. That requires a very specific cable. The right one is hard to find, has been backordered at most distributors since launch, and is selling for $179 or more when it does ship. Petronella Technology Group has the right cable in stock, ready to ship, at $159 with free shipping to the United States and Canada.
Order the 0.5m QSFP112 400G DAC Cable
$159 shipped. NVIDIA-approved spec. Ships next business day from Raleigh, NC.
Questions? Talk to Penny: 919-335-7902
Why the DGX Spark Cluster Cable Has Been So Hard to Get
NVIDIA shipped the GB10 Grace Blackwell desktop systems with two QSFP112 ports on a ConnectX-7 NIC running at 200 gigabits per second. Those ports are the only way to wire two or more Spark-class boxes into a real cluster, and they want a very specific kind of cable. NVIDIA's Spark Stacking documentation lists exactly three approved cables: the Amphenol NJAAKK-N911, the Amphenol NJAAKK0006 (its 0.5 meter sibling), and the Luxshare LMTQF022-SD-R. Resellers sometimes substitute the PNY-branded NJAAKK-0006 or NJAAKKR-0006, which are functionally the same cable in 32 AWG and 30 AWG variants.
The reason availability is brutal is mundane. Volumes are low. The Spark line shipped in modest quantities to developers and small AI shops. Distributors over-ordered the headline boxes and under-ordered the niche cable that turns one box into a cluster. Micro Center sold a small batch in October 2025, ran out, and has been backordered ever since. PNY quotes the cable on request. Provantage shows it as "request a quote." Mainstream e-tailers either have not stocked it at all, or list it through marketplaces at $179 to $229 per cable. Try buying two for a three-node ring at any major retailer right now and you will spend a half hour on the phone.
We bought a pallet. The price is $159 with free shipping to most US and Canadian addresses. If you need volume, talk to Penny at 919-335-7902 and we will quote pricing on five, ten, or twenty units.
What This Cable Actually Is, in Plain English
The cable in your hand is a 0.5 meter QSFP112 400G passive direct-attach copper twinax cable. The form factor is QSFP112, which is the modern successor to QSFP28 and QSFP56. The cable is rated for 400 gigabits per second across four lanes of 100G PAM4 signaling, which is why the marketing literature calls it a "400G" cable. On a Spark, the actual link rate is 200G because that is what the ConnectX-7 NIC negotiates at the host side. The cable itself is fully capable of running 400G if you ever upgrade to ConnectX-8 silicon down the road.
"Passive direct-attach copper" means there is no active electronics in the cable head. No retimers, no DSP, no firmware. The two connectors at each end are joined by twinaxial copper conductors with shielded pairs. Power consumption is essentially zero. Latency is the lowest of any cable option, lower than active optical or active copper variants. For runs of three meters or less, passive copper is the right default. For runs longer than five meters you would need active copper or a fiber-based optical cable, but those are not relevant for desktop cluster builds where the boxes sit next to each other.
The 0.5 meter length (about 20 inches) is what NVIDIA recommends for the standard side-by-side or stacked-tower configuration. It is short enough to keep clean cable management on a desk and long enough to handle two Sparks separated by a small monitor or a Wi-Fi access point. If you need a longer run we can quote a 1m, 1.5m, 2m, or 3m custom build, but the 0.5m is the universal default for nine out of ten Spark cluster builds we have seen this year.
Every GB10 Workstation This Cable Works With
NVIDIA designed the GB10 Grace Blackwell Superchip and licensed the reference platform to seven OEM partners. Every one of them ships with the same ConnectX-7 NIC, the same QSFP112 ports, and the same NVIDIA software stack. That means the same cable works across every machine in this list. We have personally tested mixed-vendor clusters, and so have several developer-forum users.
| System | Maker | Cable Compatibility |
|---|---|---|
| DGX Spark Founders Edition | NVIDIA | Yes, 2x QSFP112 / ConnectX-7 200G |
| Veriton GN100 AI Mini Workstation | Acer | Yes |
| Ascent GX10 | ASUS | Yes |
| Pro Max with GB10 | Dell Technologies | Yes |
| AI TOP ATOM (GB10 desktop) | Gigabyte | Yes |
| ZGX Nano (GB10) | HP | Yes |
| ThinkStation PGX SFF | Lenovo | Yes |
| EdgeXpert MS-C931 / EdgeXpert-11SUS | MSI | Yes |
If your machine is a GB10, this cable connects it to any other GB10. The OEM brand on the chassis does not matter. Mixed clusters of NVIDIA Founders Edition with ASUS GX10, Dell Pro Max GB10, and Lenovo PGX have been tested in the wild and work correctly out of the box once both nodes are running DGX OS or Ubuntu 24.04 with the NVIDIA driver stack.
Cluster Topologies You Can Build with One or Two Cables
Two Sparks, One Cable: The Default Cluster
Plug a single 0.5m QSFP112 cable into the first QSFP port on each Spark. Boot both. Run NVIDIA's discover-sparks.sh helper or the Sparkrun setup wizard. You now have a two-node cluster with point-to-point 200G connectivity, no switch required. NVIDIA's "Connect Two Sparks" playbook walks through the netplan configuration, the SSH passwordless setup, and the NCCL environment variables. The whole thing takes about an hour from unboxing to running a distributed inference job across both boxes.
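If you want to confirm the link is healthy before moving on to real workloads, a small all_reduce smoke test does the job. This is a sketch rather than NVIDIA's official validation step: the master address 192.168.100.1, the file name two_node_check.py, and the interface name are assumptions you should match to your own netplan configuration.

```python
# two_node_check.py -- minimal NCCL all_reduce smoke test across two Sparks.
# Not NVIDIA's official tool; the master address, port, and interface name
# below are assumptions -- adjust them to your own netplan configuration.
#
# Run on each node (node-rank 0 on Spark A, 1 on Spark B):
#   torchrun --nnodes=2 --nproc-per-node=1 --node-rank=<0|1> \
#            --master-addr=192.168.100.1 --master-port=29400 two_node_check.py
import os
import torch
import torch.distributed as dist

os.environ.setdefault("NCCL_SOCKET_IFNAME", "enp1s0f0np0")  # first QSFP112 port

dist.init_process_group(backend="nccl")
torch.cuda.set_device(0)  # one Blackwell GPU per GB10 node

# One all_reduce over the 200G link; both ranks should print the same value.
x = torch.ones(1024, 1024, device="cuda") * (dist.get_rank() + 1)
dist.all_reduce(x)  # default op is SUM across both nodes
print(f"rank {dist.get_rank()}: element value = {x[0, 0].item()}")  # expect 3.0

dist.destroy_process_group()
```

If both ranks print the same summed value, the cable, the driver stack, and NCCL are all working end to end.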
The win is that you can now run AI models up to roughly 405 billion parameters distributed across the combined 256 GB of unified Grace Blackwell memory, which is the use case NVIDIA explicitly markets. For most teams the more useful payoff is parallel fine-tuning runs, distributed evaluation harnesses, or running production inference on one node while the other trains.
Three Sparks, Two Cables: The Switchless Ring
Each Spark has two QSFP112 ports. With two cables you can wire three boxes into a ring. Spark A, port 1, connects to Spark B, port 1. Spark B, port 2, connects to Spark C, port 1. Spark C, port 2, connects back to Spark A, port 2. Now every node has a direct path to every other node and you have not paid for a switch.
NVIDIA published a "Connect Three DGX Spark in a Ring Topology" sample in April 2026 that confirms NCCL handles the ring topology correctly. Performance scales close to linearly for collective operations, and the pattern works equally well for 4-node, 6-node, and 8-node rings if you are willing to keep adding cables. The community thread on 8x Spark clusters at the NVIDIA developer forums shows real teams running production workloads on switchless 8-node configurations using nothing more than QSFP112 DAC cables.
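One detail the ring wiring raises is addressing: each cable is its own point-to-point link, so the clean pattern is a separate small subnet per link. The plan below only illustrates that pattern; the 10.0.x.x subnets and hostnames are assumptions, not NVIDIA's published defaults.

```python
# Illustrative addressing plan for the switchless 3-node ring. Each physical
# cable gets its own /30 subnet; the subnets and hostnames here are made up,
# only the per-link pattern is the point.
RING_LINKS = {
    # (node, interface): (address, peer on the other end of that cable)
    ("spark-a", "enp1s0f0np0"):   ("10.0.1.1/30", "spark-b"),
    ("spark-b", "enp1s0f0np0"):   ("10.0.1.2/30", "spark-a"),
    ("spark-b", "enP2p1s0f0np0"): ("10.0.2.1/30", "spark-c"),
    ("spark-c", "enp1s0f0np0"):   ("10.0.2.2/30", "spark-b"),
    ("spark-c", "enP2p1s0f0np0"): ("10.0.3.1/30", "spark-a"),
    ("spark-a", "enP2p1s0f0np0"): ("10.0.3.2/30", "spark-c"),
}

for (node, iface), (addr, peer) in sorted(RING_LINKS.items()):
    print(f"{node:8s} {iface:16s} {addr:12s} <-> {peer}")
```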
When You Actually Need a Switch
Once you go past about six nodes the ring latency starts to matter for tightly-coupled training, and you may want a 400G QSFP112 switch to drop into a star topology. The MikroTik CRS812 is the budget option developers have been using ($1,000 to $1,500 territory), and it supports the 400G to 2x200G QSFP56 breakout that some early Spark adopters wired up before the proper QSFP112 cables were available. For most desktop AI shops, two to three Sparks in a ring is the sweet spot, and you do not need a switch at all.
What Two or Three Sparks Actually Lets You Do
The single biggest reason to cluster GB10 boxes is memory pooling. A single Spark gives you 128 GB of unified Grace Blackwell memory at 273 GB/s. Two Sparks roughly double that to 256 GB. Three Sparks reach 384 GB. That is meaningful because the inference quality of modern frontier-class models scales sharply with how much of the model fits in fast memory rather than spilling to NVMe storage.
Specific workloads that get unlocked by a 2-3 node cluster:
- Llama 3.1 405B and Llama 4 inference at FP8 or FP4. A single Spark cannot fit these. Two Sparks can, with sharding via vLLM, SGLang, or the NVIDIA NIM runtime.
- Distributed fine-tuning of 70B-class models using PyTorch FSDP or DeepSpeed ZeRO-3. The 200 Gbps interconnect is fast enough that gradient sync is not the bottleneck for these model sizes (a minimal FSDP launch sketch appears after this list).
- Multi-tenant inference for a small team, where one node hosts the production inference endpoint and the other handles long-running fine-tunes or evaluations without contention.
- Agentic workloads, where one node runs the orchestration and tool-use logic while another runs the heavyweight LLM. NVIDIA's NemoClaw and the broader NVIDIA Agent Toolkit are designed for this split.
- RAG indexing and retrieval at scale, where one Spark handles vector store updates and embedding refresh while the other serves user queries. This is exactly the architecture we use for several PTG private AI deployments.
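To make the fine-tuning item concrete, here is a minimal two-node FSDP skeleton. It is a sketch under stated assumptions, not a tuned recipe: the stand-in Transformer model, the random tensors, the 192.168.100.1 master address, and the interface name are placeholders you would replace with your own training code and network config.

```python
# Minimal two-node FSDP skeleton for a Spark pair. The model and data are
# stand-ins; the master address and interface name are assumptions.
# Launch on each node:
#   torchrun --nnodes=2 --nproc-per-node=1 --node-rank=<0|1> \
#            --master-addr=192.168.100.1 --master-port=29500 train_fsdp.py
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    os.environ.setdefault("NCCL_SOCKET_IFNAME", "enp1s0f0np0")  # QSFP112 link
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(0)

    # Stand-in model; in practice this would be your 70B-class checkpoint.
    model = torch.nn.Transformer(d_model=512, num_encoder_layers=6).cuda()
    model = FSDP(model)  # shards params, grads, and optimizer state across nodes
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    src = torch.rand(10, 8, 512, device="cuda")   # fake batch
    tgt = torch.rand(20, 8, 512, device="cuda")
    for step in range(5):
        loss = model(src, tgt).mean()
        loss.backward()        # gradient sync rides the 200G interconnect
        optim.step()
        optim.zero_grad()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```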
If you want to stop paying $20-40 per seat per month for OpenAI, Microsoft Copilot, and Anthropic Claude on data your organization considers sensitive, a 2-Spark cluster is the entry point for running those workloads on your own infrastructure. We wrote a deeper take on that in our private Copilot alternative piece.
The Setup Reality: What You Will Hit and How to Skip the Pain
Connecting two Sparks is not plug-and-play, but it is also not difficult. The friction comes from a handful of details that the documentation glosses over.
Cable orientation matters. The QSFP112 connector goes in one way; force it the wrong way and you can bend a pin. Both Sparks need to be powered down when you insert and remove cables, and when removing, pull the release tab straight back to free the latch rather than yanking on the cable itself.
Both nodes need DGX OS or Ubuntu 24.04 LTS at the same patch level. NVIDIA ships DGX OS preinstalled, but the Founders Edition and the OEM variants sometimes have slightly different driver versions out of the box. The clean approach is to pull the latest DGX OS image from NVIDIA's site and reflash both. ASUS GX10 owners have reported that the Wi-Fi setup wizard fails on first boot, and the fix is to reflash with the NVIDIA DGX OS image.
NCCL version pinning matters. NVIDIA's reference clustering docs target NCCL v2.28.3. Older versions hit a known bug where the bandwidth caps at around 3 GB/s instead of the 200 Gbps the link is capable of. If you see that exact symptom in NCCL tests, the fix is `pip install nvidia-nccl-cu12==2.28.3`.
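A quick way to confirm the pin took effect is to ask PyTorch which NCCL build it actually sees. This is a hedged sanity check, not the official validation procedure; if the reported version is older than 2.28.3, the 3 GB/s symptom described above is the likely culprit.

```python
# Sanity check: confirm the NCCL version PyTorch is linked against.
import torch

print("PyTorch:", torch.__version__)
print("NCCL:   ", ".".join(str(v) for v in torch.cuda.nccl.version()))
# Expect 2.28.3 or newer per NVIDIA's reference clustering docs.
```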
Both ports on the NIC need IPs. The Spark presents its two QSFP112 ports as `enp1s0f0np0` and `enP2p1s0f0np0` on Linux. Both need addresses in the cluster subnet, and the NCCL environment variables need to be set so NCCL can use both interfaces. Skipping this halves your effective bandwidth.
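One way to express that is an explicit interface list in the NCCL socket variable, set before the job initializes its process group. A minimal sketch, assuming the interface names shown above; the exact values can differ per node.

```python
# Sketch: point NCCL at both QSFP112 interfaces before init_process_group().
# Interface names are taken from the paragraph above and may differ per node.
import os

os.environ["NCCL_SOCKET_IFNAME"] = "enp1s0f0np0,enP2p1s0f0np0"
os.environ["NCCL_IB_DISABLE"] = "0"   # keep the RoCE/RDMA transport enabled
```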
Bonding both ports on a single connection does not double speed. NADDOD's engineers confirmed what some users had hypothesized in the forum: the ConnectX-7 NIC in the GB10 platform is PCIe-bandwidth limited to roughly 2x 128G of total bidirectional throughput. Bonding both physical ports together against a single neighbor will not give you 400G. Use port 1 for connection to neighbor A and port 2 for connection to neighbor B. That is precisely why the ring topology works so well for GB10.
If any of this sounds like more time than you want to spend on your cluster instead of on your model, our AI services team will set the cluster up for you, including DGX OS reflash, network config, NCCL validation, and a working test job. Most installs take us 90 minutes per cluster and we hand you back a documented, monitored deployment.
Why Petronella Technology Group Sells This Cable
We are not primarily a cable distributor. We are an AI-first cybersecurity and managed services firm. The reason we stock this cable is selfish: we run a fleet of GB10 boxes ourselves, we cluster them in production, and we got tired of waiting weeks for distributors to ship the one piece of inventory that turns the box into a useful cluster. We bought enough to keep our own builds running and a surplus to sell to the developers and shops who would otherwise be stuck.
The pricing math is simple. Major distributor channels are quoting $179 to $229 retail. We bought at a price that lets us sell at $159 with free shipping and still make a small margin. We are not trying to win this market. We are trying to remove a real friction point for the AI builder community while we do what we actually do, which is build private AI on our own infrastructure for clients who want their data to stay on their hardware.
If you have not seen what PTG runs internally, our AI hub covers the production stack: 12 autonomous agents, private LLM serving for clients, custom AI development, and an in-house digital forensics practice. We are CMMC-RP certified across the team and we deliver against HIPAA, FTC Safeguards, IRS Pub 4557, and the new state privacy regimes simultaneously. The cable is a small piece of the puzzle, but it is part of the same picture.
Order Details, Shipping, and Order Notification
Every order placed through our Stripe checkout includes:
- One 0.5m QSFP112 400G passive DAC cable (Amphenol NJAAKK0006 spec compatible, 32 AWG)
- Free standard shipping to all 50 US states and Canada (UPS Ground or USPS Priority, 2 to 5 business days)
- Order confirmation email with tracking number within 24 hours of payment
- Compatibility guarantee: works with any GB10 system. If your Spark or GB10 derivative does not bring up the link, return it for a full refund within 30 days.
Stripe collects your shipping address at checkout. We never sell, share, or rent customer data. Shipping addresses are used only for the actual shipping label and are stored only in our order ledger and Stripe's encrypted records.
For volume orders (5 or more units), call Penny at 919-335-7902 for a custom quote. We can typically beat the per-unit price for orders of 10 or more, and we can ship internationally on quote.
Frequently Asked Questions
Does this cable work with the Founders Edition Spark and the OEM versions like ASUS GX10 or Dell Pro Max GB10?
Yes. Every GB10 Grace Blackwell system uses the same ConnectX-7 NIC and the same QSFP112 port specification. NVIDIA designed the platform once and licensed the reference design to Acer, ASUS, Dell, Gigabyte, HP, Lenovo, and MSI. The cable in our store is the NVIDIA-approved spec and works on every machine in that list. Mixed-vendor clusters (for example, a Founders Edition with an ASUS GX10) work correctly once both run the current DGX OS or Ubuntu 24.04 with the NVIDIA driver stack.
Why 0.5 meter? Can I get longer cables?
0.5m is the length NVIDIA explicitly approves in the Spark Stacking documentation and is correct for the standard side-by-side desk setup. If you need 1m, 2m, 3m, or active copper for longer runs, call us. We can custom-source longer DAC cables, but for the typical 2 or 3-Spark desktop cluster, 0.5m is the right answer 90 percent of the time.
Is it 200G or 400G? The marketing materials are confusing.
The cable is a 400G QSFP112 cable, meaning the physical media is rated to carry 400 gigabits per second. The Spark's ConnectX-7 NIC negotiates the link at 200 Gbps because that is the host-side limit. So the practical link rate is 200G, and the cable is overprovisioned for the current generation, which is exactly what you want for forward compatibility with future ConnectX-8 hardware.
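If you want to see the negotiated rate for yourself, the Linux kernel exposes it in sysfs. A small hedged check, run on the Spark itself, assuming the interface name used earlier in this article:

```python
# Read the negotiated link speed from sysfs (the kernel reports it in Mb/s).
iface = "enp1s0f0np0"  # adjust to your port name
with open(f"/sys/class/net/{iface}/speed") as f:
    print(f"{iface}: {int(f.read().strip()) // 1000} Gb/s")  # expect 200 on GB10
```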
Can I cluster three Sparks without a switch?
Yes. Each Spark has two QSFP112 ports. Two cables wired in a ring (A-port1 to B-port1, B-port2 to C-port1, C-port2 to A-port2) gives you a switchless 3-node cluster with full pairwise connectivity. NVIDIA's "Connect Three DGX Spark in a Ring Topology" sample documents this configuration. You can extend the same pattern to 4, 6, or 8 nodes by adding cables.
Will this work with InfiniBand or only Ethernet?
The Spark CX-7 ports are configured for Ethernet only by design (NVIDIA explicitly states this in the documentation). The cable is a passive DAC and works with both Ethernet and InfiniBand on hardware that supports either, but on the Spark you are running 200GbE with RoCE for RDMA, not native InfiniBand.
What if I want to use my own QSFP56 200G cable I already have?
QSFP56 cables (NVIDIA MCP1650-V00AE30 and equivalents) physically fit the QSFP112 cage and several Spark owners have reported success using them. The compromise is that QSFP56 is rated for 200G as its top end while QSFP112 is rated for 400G, so you have no headroom. If you already own QSFP56 cables and they work, save your money. If you are buying fresh, the QSFP112 is the right forward-compatible choice.
How fast does shipping go out?
Orders placed before 2pm Eastern ship the same business day. Orders after 2pm or on weekends ship the next business day. Standard shipping is 2 to 5 business days within the continental US, 3 to 7 days to Canada. You will receive a tracking number by email when the label is generated.
Can PTG help me set up the cluster, not just sell the cable?
Yes. Our AI services team handles full Spark cluster builds, including DGX OS provisioning, network and netplan setup, NCCL validation, NVLink Fabric configuration, distributed fine-tuning environment setup, and ongoing managed support. We also offer AI Academy training for teams that want to learn distributed AI on their own hardware. Call 919-335-7902 or browse the full training catalog.
Do you stock the cable in volume?
Yes. We have inventory ready to ship for orders up to about twenty units. Larger volume orders we can fulfill on a 5 to 10 business day lead time. Call 919-335-7902 for volume pricing and lead time.
Is the cable new, refurbished, or pulls?
New. Factory sealed when shipped. Manufactured to the same Amphenol NJAAKK0006 / NJAAKKR-0006 / Luxshare LMTQF022-SD-R reference spec NVIDIA approves in the official Spark Stacking documentation.
Bottom Line
The DGX Spark and its GB10 cousins are excellent prototyping and personal-AI machines on their own. They become genuinely useful for teams the moment you can cluster two or three of them together. The thing that has been holding back the cluster build for thousands of buyers is the unavailability and price of one specific 0.5 meter cable. We have it, we are selling it at $159 with free shipping, and we ship it the same business day in most cases.
If you also need help wiring the rest of the AI stack (security, compliance, identity, monitoring, private LLM serving, data pipelines), Petronella Technology Group is one of the few firms in the US that combines AI infrastructure delivery with full cybersecurity and compliance depth in the same retainer. Read more on our AI hub, our cybersecurity practice, and our managed services. Or just buy the cable below.
Order Your DGX Spark / GB10 Cluster Cable
0.5m QSFP112 400G passive DAC. NVIDIA-approved spec. $159 shipped to US and Canada.
Same-day shipping on orders before 2pm Eastern.
Volume pricing or custom length? Call Penny: 919-335-7902
Headquarters: 919-348-4912
Need help with the rest of your AI stack? Browse the AI hub, the training catalog, or the cybersecurity practice.
Sources: NVIDIA Spark Stacking documentation (docs.nvidia.com/dgx/dgx-spark/spark-clustering.html); NVIDIA "Connect Two Sparks" playbook (build.nvidia.com/spark/connect-two-sparks); NVIDIA Developer Forums DGX Spark / GB10 community; Tom's Hardware DGX Spark review October 2025; Jeff Geerling Dell Pro Max GB10 hands-on; NADDOD product compatibility guide. Last updated May 7, 2026.