Open-Source Humanoid Robot Buyers Guide 2026
Posted: May 2, 2026 to Robotics.
Short answer: The open-source humanoid robot market in 2026 splits cleanly into three tiers. Desktop educational platforms like the Reachy Mini ($299 to $449) and SO-100 / SO-101 robotic arms (under $500) put real expressive AI hardware on a researcher's desk. Research-bench platforms like the ALOHA bimanual rig, Trossen ViperX arms, OpenArm, and Hope Jr cover the $5,000 to $25,000 manipulation-research segment. Mobile and full-body locomotion platforms like the Unitree G1 ($16,000+), Hello Robot Stretch 3, and the LeKiwi mobile base own the upper tier. This guide picks across all three so a defense PI, university lab director, healthcare innovation team, or self-funded engineer can land on the right hardware in one read. Petronella Technology Group operates a Reachy Mini in our Raleigh, North Carolina lab and runs the surrounding LeRobot stack on a private NVIDIA Elite Partner Channel GPU fleet, so the desktop tier section reflects hands-on experience. Every other platform is treated from published specifications, vendor documentation, and academic literature. We disclose the difference inline so you can weight our perspective accordingly.
TL;DR Decision Tree
- Teaching, conversation AI, perception research: Reachy Mini ($299 / $449) or SO-101 arm
- Bimanual manipulation research: ALOHA-style stack with Trossen ViperX arms
- Mobile manipulation in unstructured spaces: Hello Robot Stretch 3
- Whole-body humanoid locomotion / teleop research: Unitree G1
- Affordable arm for ML papers and labs: SO-100 ($100-class) or Koch v1.1
- Open-source mobile platform with arm: LeKiwi or Earth Rover Mini
- Regulated-industry prototyping (defense, healthcare R&D, university CUI work): any LeRobot-supported platform plus a private GPU fleet for sovereignty
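For labs evaluating several programs at once, the decision tree above can be encoded as a small lookup. This is purely an illustrative sketch: the use-case keys and the function are ours, only the platform names come from the guide.

```python
# Illustrative sketch: the TL;DR decision tree as a lookup table.
# The keys and helper are invented for this example; the platform
# picks mirror the bullet list above.

PLATFORM_BY_USE_CASE = {
    "teaching_conversation_perception": ["Reachy Mini", "SO-101"],
    "bimanual_manipulation": ["ALOHA + Trossen ViperX"],
    "mobile_manipulation": ["Hello Robot Stretch 3"],
    "whole_body_locomotion": ["Unitree G1"],
    "budget_ml_arm": ["SO-100", "Koch v1.1"],
    "open_mobile_base": ["LeKiwi", "Earth Rover Mini"],
}

def pick_platform(use_case: str) -> list[str]:
    """Return candidate platforms for a use case, or raise if unknown."""
    try:
        return PLATFORM_BY_USE_CASE[use_case]
    except KeyError:
        raise ValueError(f"unknown use case: {use_case!r}") from None
```

The regulated-industry row is deliberately absent: per the last bullet, it is any LeRobot-supported platform plus a private GPU fleet, which is an infrastructure decision rather than a hardware pick.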
Why Open-Source Humanoid Robotics Matters in 2026
Three years ago, building a research-grade humanoid was a six-figure capital decision dominated by Boston Dynamics, Agility Robotics, Apptronik, Figure, and a handful of Chinese-funded humanoid programs. The platforms were closed, the SDKs were partially documented, and the data plane assumed a vendor cloud. Universities were priced out of the field. Defense R&D shops built bespoke testbeds because the closed platforms could not pass an Authority to Operate review against Controlled Unclassified Information rules. Healthcare innovation groups stayed away from anything they could not explain to an Institutional Review Board.
The Hugging Face acquisition of Pollen Robotics in April 2025, the Reachy Mini launch in July 2025, the maturation of the LeRobot library to 12,000+ GitHub stars by the acquisition mark, and NVIDIA's open-sourcing of the GR00T N1 humanoid foundation model in March 2025 collapsed that pricing curve. Today a researcher with a $1,500 budget can stand up a real ML-loop on real hardware. A university lab with $50,000 can run a research program that competes with what cost $500,000 in 2022. A defense-aligned R&D shop can build prototypes on hardware whose firmware, schematics, and data flow are inspectable line by line, which is the prerequisite for sovereign deployment.
Cost is the obvious lever. Customization is the bigger one. Open-source means the servo configuration, the perception pipeline, the policy network, and the teleoperation interface are all yours to modify. Sovereignty is the third lever, and it is the one that matters most for the audience this guide targets: defense Principal Investigators, research-university faculty running NSF or DARPA grants, healthcare innovation teams under HIPAA, and educators teaching the next cohort of robotics engineers. None of those audiences can run a closed platform that phones home to a vendor cloud. The fourth lever is community. The LeRobot Discord, the Hugging Face Hub datasets, the academic paper trail behind ALOHA, DROID, RoboCasa, and SmolVLA all create a learning surface that no single vendor can match.
This guide assumes you are evaluating hardware against a real research or prototyping objective, not browsing. We organize by use-case tier first, then by software ecosystem fit, then by total cost of ownership. We do not pick winners. We give you the criteria and the published specifications and let you reach the right answer for your specific use case.
The Open-Source Robotics Ecosystem in 2026
Three software stacks dominate the open-source humanoid and arm space, and a fourth is rising fast. Understanding which stack runs on which hardware is the most important part of a buying decision because the wrong stack pairing turns a $25,000 research arm into an expensive paperweight.
LeRobot is Hugging Face's open-source robotics library, launched in 2024 and led by Remi Cadene, formerly of Tesla Optimus. By the April 2025 Pollen Robotics acquisition mark, it had crossed 12,000 GitHub stars and grown into a 100-plus-repo ecosystem on the Hugging Face Hub. LeRobot's installation is local, with no mandatory cloud dependency. The official documentation lists supported real-world hardware platforms explicitly: SO-101, SO-100, Koch v1.1, LeKiwi, Hope Jr, Reachy 2, Unitree G1, Earth Rover Mini, OMX, OpenArm, and the ALOHA and ViperX rigs through Trossen Robotics. The library requires Python 3.12 or higher and PyTorch 2.10 or higher. Camera support spans OpenCV (USB, built-in, phone), ZMQ for network cameras, Intel RealSense for depth, and the native Reachy 2 cameras. For self-hosted training and inference, the SmolVLA paper documents that the model can be trained on a single GPU and inferred on consumer-grade GPUs or even CPUs, with remote GPU servers supported for asynchronous inference. That on-prem story is the foundation of the sovereign-AI prototyping pitch in 2026. Source: huggingface.co/docs/lerobot, huggingface.co/blog/smolvla, huggingface.co/blog/hugging-face-pollen-robotics-acquisition.
ROS 2 remains the canonical robotics middleware for industrial, research, and academic deployments, with Jazzy as the current Long Term Support release. ROS 2 is platform-agnostic by design and is the substrate underneath PickNik's MoveIt motion-planning library. Mobile platforms like the Hello Robot Stretch 3 ship with native ROS 2 support. Unitree's G1 has a community-developed ROS 2 bridge layered on top of the proprietary Unitree SDK. ROS 2 has a steeper learning curve than LeRobot but a deeper ecosystem of tested packages for navigation (Nav2), manipulation (MoveIt 2), perception (image_pipeline, depthai), and simulation (Gazebo, Ignition). For research that depends on legacy ROS packages or for industrial-flavored prototypes, ROS 2 is still the right answer.
Vendor SDKs wrap each platform. Pollen Robotics ships a Python SDK for the Reachy series. Unitree publishes its own SDK for the G1 alongside the community ROS 2 bridge. Trossen Robotics provides Interbotix ROS packages for the ViperX, ReactorX, and ALOHA bimanual configurations. Hello Robot ships the Stretch SDK. Each vendor's SDK exposes lower-level joint control and sensor streams that LeRobot or ROS 2 then consume.
NVIDIA Isaac and GR00T are the rising fourth stack. GR00T N1, released in March 2025, is described by NVIDIA as the world's first open foundation model for humanoid robots. Isaac Sim and Isaac Lab provide the simulation layer for sim-to-real workflows. The combination is most relevant when training a whole-body policy that has to transfer from simulation to a $16,000 Unitree G1, where the cost of a bad policy update is steep. The Isaac stack runs on NVIDIA hardware and integrates with LeRobot through the GR00T N1 model weights now hosted on the Hugging Face Hub.
Hugging Face's acquisition of Pollen Robotics matters here for a reason that has nothing to do with the Reachy hardware itself. The acquisition put a venture-funded AI infrastructure company in charge of the largest open-source robotics community on the internet. The Hugging Face Hub now hosts 39 LeRobot models, 181 LeRobot datasets, and the entire Pollen Robotics organization with 18 Spaces apps, 3 models, and 15 datasets including the Reachy Mini Official App Store, the Reachy Mini Dances Library, and the Reachy Mini Emotions Library. The community surface for asking, debugging, contributing, and learning robotics is now the same community surface researchers already use for Transformers and Diffusers, which compresses the time-to-first-result for a new lab. Source: huggingface.co/lerobot, huggingface.co/pollen-robotics, both retrieved 2026-05-02.
Decision Criteria Framework
Before we walk the platforms, the criteria. Six axes carry most of the weight in an open-source humanoid or arm purchase. Run every candidate platform through this list before you commit a purchase order.
Degrees of freedom and reach. DOF is the joint count. Reach is the working envelope. A Reachy Mini has 6 head DOF, body rotation, and 2 antenna DOF for expressive output but no arms, so its working envelope is the volume of a desk surface in front of the camera. A ViperX 300 has 6 arm DOF with a 750 mm reach. The Unitree G1 publishes 23 DOF in its base configuration and 43 DOF in the dexterous-hand variant per general reporting at launch. Match the platform's envelope to the task. A bimanual ALOHA rig with 12 to 14 combined DOF can fold towels. A Reachy Mini can wave at you. Both are "humanoid," neither substitutes for the other.
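A quick way to sanity-check the envelope criterion is to test whether the farthest target point in your task falls inside a candidate arm's published reach sphere. The 750 mm figure below is the ViperX 300 reach quoted above; the helper itself is a sketch, since real workspaces are carved up by joint limits and are never true spheres.

```python
import math

def within_reach(target_xyz, base_xyz=(0.0, 0.0, 0.0), reach_mm=750.0) -> bool:
    """True if the target lies inside a simple spherical reach envelope.

    Real envelopes are not spheres (joint limits carve them up), so treat
    a True here as 'worth checking the vendor workspace diagram', not as
    a pass.
    """
    return math.dist(target_xyz, base_xyz) <= reach_mm

# A pick point 600 mm out is plausibly inside a ViperX 300 envelope;
# one 900 mm out is not.
```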
Payload and dynamics. Stationary arms publish a payload spec. The Trossen ViperX 300 carries roughly 750 grams. Mobile manipulators trade payload against base stability. The Hello Robot Stretch 3 prioritizes safe operation around humans over heavy payload. Bipedal humanoids carry their own mass against ground reaction forces, so payload as published often refers to one-arm carry. Read the specifications carefully and decide whether your task is grams or kilograms.
Software ecosystem fit. Match the platform to the software stack that has the most documentation for your task. If you need an out-of-the-box LeRobot demo, a SO-100 or SO-101 arm or a Koch v1.1 will hit the ground running because the LeRobot tutorial path is built around them. If your task depends on a published academic paper, find the paper's hardware and replicate it. If you need MoveIt 2 motion planning out of the box, a ROS-2-native arm like the Franka Research 3 or a Trossen ViperX with the Interbotix ROS packages saves months. The hardware that "fits everywhere" is rare. Pick the stack first, then the platform.
Open-source posture and license. Read the license. The Reachy Mini hardware is described as open-source with CAD files pending release at launch, and the software is fully open-source on GitHub. SO-100 and SO-101 are open-hardware designs from The Robot Studio. Unitree G1 is closed hardware with a partially documented SDK. ALOHA's bill of materials and CAD are publicly published from the Stanford project. Mixing licenses is fine, but know what you are signing up for. If your funder requires open-hardware deliverables, a closed platform fails the rubric on day one.
Support and community. Vendor support tiers vary widely. Pollen Robotics publishes a sales address for bulk orders and runs an active GitHub presence. Trossen Robotics has a long support history with academic labs. Hello Robot has a research-customer track record and ships clinically aware documentation. Unitree's Western support story is mixed depending on the reseller. The LeRobot Discord and the Hugging Face forums act as a community substrate that absorbs a meaningful fraction of "support" load for any LeRobot-supported platform. Factor that into your total cost.
Budget and procurement vehicle. Capital budget is one number. Total cost of ownership is several. We cover TCO in detail in section six. For procurement, university labs with NSF or DARPA grants typically buy through internal procurement against a quote from the vendor. Defense-side R&D shops add ITAR or DFARS clauses to the purchase order. Healthcare research groups under IRB protocols add data-handling annexes. Open-hardware kits like the SO-100 are often built from a parts list rather than purchased turnkey, which changes the procurement story entirely. Decide early whether you are buying a robot or building one.
Tier 1: Desktop and Educational Platforms
The desktop tier is where most open-source humanoid programs start in 2026. Three platforms stand out: the Reachy Mini, the SO-100 / SO-101 arm family, and the Koch v1.1 arm. Each has a different center of gravity but all three live under $1,000 fully kitted.
Reachy Mini. Pollen Robotics' desktop expressive humanoid, launched in July 2025 by Hugging Face after the April 2025 acquisition. The robot is a head-and-torso form factor with no arms and no locomotion. Active height 28 cm, sleep height 23 cm, width 16 cm, weight 1.5 kg. It has 6 head degrees of freedom, full body rotation, and 2 animated antennas. The sensor suite is one wide-angle camera, four microphones, and a 5 W speaker. The Wireless variant adds an Inertial Measurement Unit and pairs an onboard Raspberry Pi 4 with Wi-Fi and a battery, while the Lite variant is wired and pairs to a Mac or Linux host computer. Pricing is $299 for the Lite and $449 for the Wireless, both ex-tax and ex-shipping, with a delivery window of approximately 90 days at launch. Source: huggingface.co/blog/reachy-mini, retrieved 2026-05-02.
The Reachy Mini's software stack is Python-first with announced JavaScript and Scratch support. The simulation environment is MuJoCo-based and openly available. The hardware is described as open-source with CAD files pending release, and the software is fully open-source on GitHub. Fifteen-plus robot behaviours ship at launch. LeRobot integration is explicit in the launch announcement. Hugging Face Spaces apps for Reachy Mini include a Realtime URL streaming app, a Conversation App, a Skins app, and a Debug & CI Testbench. The data privacy default, quoted verbatim from the launch blog, is "no personal data stored, transmitted, or processed by default; camera/microphone use fully user-controlled."
What we run a Reachy Mini for in our Raleigh lab: conversation-AI prototyping where the camera and microphones feed a private LLM running on our GPU fleet, perception experiments where the wide-angle camera streams over LeRobot to a SmolVLA-style model, and educational demos for clients evaluating whether they want a robotics practice. The robot does not manipulate objects. It expresses, observes, and responds. For an academic lab teaching a graduate course on embodied AI, or for a defense-aligned R&D shop building a sovereignty demo, the Reachy Mini is the lowest-friction entry point published anywhere in 2026.
SO-100 and SO-101. The SO-100 is The Robot Studio's affordable robotic arm, listed at roughly $100 in the Hugging Face × Pollen Robotics acquisition announcement as the "affordable hardware partner" of the LeRobot ecosystem. The SO-101 is the next iteration in the same series and is the platform that anchors the LeRobot getting-started tutorial. Both arms are 6-DOF designs targeted at ML labs, education, and budget-constrained research. The hardware is open-source with full bills of materials, and several vendors stock complete kits. For a researcher who wants to publish on LeRobot's tutorial baseline or replicate the SmolVLA pretraining setup, the SO-101 is the tutorial path of least resistance. Source: huggingface.co/docs/lerobot, retrieved 2026-05-02.
Koch v1.1. The Koch v1.1 is a Dynamixel-based open-source arm referenced in the LeRobot real-world tutorial as a camera-teleoperation example. It costs more than the SO-100 because Dynamixel servos carry a premium, but the trade is reliability, encoder precision, and a longer support history. For labs that have a Dynamixel toolchain in place from a previous project, the Koch v1.1 fits naturally. For greenfield builds where the SO-100 platform's parts cost matters, it is harder to justify. Source: huggingface.co/docs/lerobot, retrieved 2026-05-02.
Tier 1 honest framing: these platforms are research-grade for ML and education work. They are not industrial cobots. They will not lift a kilogram, they will not run unattended for a 24-hour shift, and they will not pass an industrial safety review. They are the right answer when the question is "how do I get a graduate student running real-hardware ML by next week."
Tier 2: Research-Bench Platforms
The research-bench tier covers the platforms that show up in 2024 to 2026 academic papers on bimanual manipulation, dexterous teleoperation, imitation learning, and visuomotor policy training. Four platforms anchor this tier: ALOHA, the Trossen ViperX family, OpenArm, and Hope Jr.
ALOHA. Stanford's ALOHA bimanual teleoperation platform is the reference rig for two-handed manipulation research and the fastest-spreading hardware design in academic robotics since the original Baxter. The published bill of materials uses Trossen Robotics ViperX 300 arms in a leader-follower configuration, with the operator's hands driving two leader arms while the two follower arms execute on the robot side. The platform supports the well-published ALOHA "fold the towel," "sort the candy," and "use the coffee machine" demonstrations. Cost is in the $20,000 to $30,000 range fully built, depending on which servos and which compute rig you choose. ALOHA is an explicit LeRobot platform and the project page at tonyzhaozh.github.io/aloha publishes the full BOM. Source: huggingface.co/docs/lerobot, tonyzhaozh.github.io/aloha, both retrieved 2026-05-02.
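The leader-follower pattern is simple at its core: each control tick reads the leader arms' joint positions and commands the paired followers to match. The sketch below shows only that loop shape; `LeaderArm` and `FollowerArm` are hypothetical stand-ins, not the Interbotix or LeRobot API, and a real rig reads and writes Dynamixel goal positions instead.

```python
# Hedged sketch of one leader-follower teleoperation tick. LeaderArm and
# FollowerArm are invented stand-ins, NOT the Interbotix/LeRobot API.

class LeaderArm:
    def __init__(self, joints):
        self.joints = list(joints)

    def read_positions(self):
        return list(self.joints)          # passive arm: the operator moves it

class FollowerArm:
    def __init__(self, n_joints):
        self.commanded = [0.0] * n_joints

    def command_positions(self, positions):
        self.commanded = list(positions)  # real hardware: goal-position write

def teleop_tick(leaders, followers):
    """Mirror each leader arm onto its paired follower (one control tick)."""
    for leader, follower in zip(leaders, followers):
        follower.command_positions(leader.read_positions())
```

A bimanual rig runs two leader/follower pairs through this loop at a fixed rate, recording the follower-side observations and actions as the demonstration dataset.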
For a research lab that wants to publish on imitation learning, ALOHA is the path that has the most published baselines, the most public datasets to compare against, and the most visible community of paper authors. It is the closest thing to a default in the bimanual research segment.
Trossen ViperX, ReactorX, WidowX. Trossen Robotics' Interbotix series covers the $1,500 to $7,000 price range with arms ranging from 4-DOF educational platforms to 7-DOF research arms. The ViperX 300 with 6 DOF and 750 mm reach is the workhorse arm under most ALOHA implementations. The ReactorX is a budget-research arm. The WidowX series fits between. All are ROS 2 supported through the Interbotix packages. Trossen's documentation tradition is solid for academic labs because the company has been selling into university research for over a decade.
OpenArm. OpenArm is a more recently published open-source arm design listed in the LeRobot supported-hardware roster. Where the SO-100 targets the under-$200 segment, OpenArm targets the more-precise mid-budget research segment with open-hardware CAD and an active community. For a lab that has decided LeRobot is the stack but wants more headroom than the SO-100 provides, OpenArm is the natural step up.
Hope Jr. Hope Jr is listed in the LeRobot supported-hardware roster alongside the OMX (a humanoid head platform). Both are emerging platforms that fit the educational-to-research-bench bridge. Hope Jr's documentation surface is smaller than the SO-101's, so it suits a lab that has a specific reason to choose it rather than a default.
Tier 2 honest framing: research-bench platforms are still not industrial-grade. They will run a research demonstration cleanly. They will not pass a manufacturing-line throughput test. The papers published on these platforms are the right comparison set for a lab considering them, and replicating a published paper's hardware is the lowest-risk path to a useful research result.
Tier 3: Mobile and Locomotion Platforms
The mobile tier introduces base motion. Three platforms anchor it: the Unitree G1, the Hello Robot Stretch 3, and the LeKiwi mobile base. The Earth Rover Mini covers the outdoor edge.
Unitree G1. Unitree Robotics' G1 is the most-discussed full-size open(ish) humanoid in the 2025 to 2026 window. Unitree publishes the platform in three trims marketed as Edu, Standard, and higher-DOF Comprehensive variants. Per general reporting at launch, the base configuration carries 23 degrees of freedom and the dexterous-hand variant carries 43 degrees of freedom. Height is approximately 1.27 m, weight approximately 35 kg, and battery runtime is approximately 2 hours on a roughly 9 kg removable battery. Pricing per general reporting starts around $16,000 for the Edu trim and climbs past $40,000 for the higher-DOF tiers. Unitree's own SDK is the primary control surface, with a community-developed ROS 2 bridge widely used in academic labs. The G1 is explicitly listed in LeRobot's supported-hardware roster, which means an off-the-shelf LeRobot policy can drive it on the perception and behaviour layer even when the lower-level joint control flows through Unitree's stack. Source: huggingface.co/docs/lerobot, retrieved 2026-05-02. Source for Unitree-side specs: unitree.com/g1, which readers should consult directly, because the point estimates here come from general reporting and the Unitree spec sheet evolves.
The G1 makes sense when the research question is whole-body locomotion, sim-to-real for bipedal control, or full-body teleoperation. It does not make sense for desktop perception research, where a Reachy Mini does the same job at roughly one-fiftieth the cost.
Hello Robot Stretch 3. Hello Robot's Stretch 3 is the third generation of the company's research-grade mobile manipulator. The platform's design language is explicitly oriented toward research and rehabilitation in human spaces. It is ROS 2 native, ships with a documented SDK, and has a published academic track record in healthcare-adjacent research, accessibility research, and indoor-mobile-manipulation papers. The Stretch 3 is not a humanoid in the bipedal sense; it is a single-arm mobile manipulator with a height-adjustable lift and a telescoping arm. For a healthcare research group exploring assistive robotics under IRB protocols, or for a university lab studying mobile manipulation in clutter, the Stretch 3 has the most relevant published research lineage of any platform in this guide. Source: hello-robot.com/stretch-3.
LeKiwi. LeKiwi is an open-source mobile base referenced in the LeRobot supported-hardware roster. Unlike the Stretch 3, which is a fully-integrated commercial product, LeKiwi is closer to a community-built reference platform that researchers extend with their own arms and perception stacks. For a lab that wants to control its mobile base as deeply as it controls its arm, LeKiwi is the open-hardware path. Source: huggingface.co/docs/lerobot, retrieved 2026-05-02.
Earth Rover Mini. Earth Rover Mini is an outdoor-capable platform listed in the LeRobot supported-hardware roster. Outdoor robotics raises the difficulty curve sharply because perception in unstructured outdoor environments is harder than indoor perception. For a research program that has explicitly chosen an outdoor scope, Earth Rover Mini is one of the few open-source options.
Tier 3 honest framing: mobile platforms add at least a 10x integration cost over stationary arms. A research lab that buys a Unitree G1 will spend the next six months teaching it to walk reliably in the lab's specific space, not running its target research. That is the correct workflow but it is the workflow you are signing up for.
Software Ecosystem Comparison
The fastest mistake in open-source humanoid procurement is buying a platform whose hardware fits but whose software ecosystem does not. The mapping below holds for most academic and prototyping use cases in 2026.
LeRobot-first. If your team's strength is machine learning, your task is imitation learning or vision-language-action policy training, and your timeline is short, LeRobot is the right substrate. The supported-hardware list as published in the LeRobot documentation includes SO-101, SO-100, Koch v1.1, LeKiwi, Hope Jr, Reachy 2, Reachy Mini, Unitree G1, Earth Rover Mini, OMX, OpenArm, ALOHA, and ViperX. Camera support spans OpenCV, ZMQ network cameras, Intel RealSense for depth, and the Reachy 2 native cameras. The Python 3.12 plus PyTorch 2.10 baseline is recent enough that older lab toolchains may need an environment update. SmolVLA, the LeRobot stack's published vision-language-action model, runs on a single GPU for training and on consumer-grade GPUs or even CPUs for inference, with explicit support for remote GPU servers in asynchronous inference patterns. Source: huggingface.co/docs/lerobot, huggingface.co/blog/smolvla, both retrieved 2026-05-02.
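The asynchronous-inference shape described above, where the robot loop keeps queueing observations while a remote server returns actions, can be sketched with the standard library alone. This is the pattern, not the LeRobot API; `fake_policy` stands in for a SmolVLA-style call to a remote GPU server.

```python
# Stdlib sketch of asynchronous inference: the robot loop enqueues
# observations without blocking while a worker thread (standing in for
# a remote GPU server) returns actions. NOT the LeRobot API.

import queue
import threading

def fake_policy(observation):
    """Stand-in for a SmolVLA-style policy call on a remote server."""
    return {"action": observation["frame_id"] * 2}

def inference_worker(obs_q, act_q):
    while True:
        obs = obs_q.get()
        if obs is None:            # sentinel: shut down
            break
        act_q.put(fake_policy(obs))

obs_q, act_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=inference_worker, args=(obs_q, act_q))
worker.start()

for frame_id in range(3):          # robot loop: enqueue without blocking
    obs_q.put({"frame_id": frame_id})

actions = [act_q.get() for _ in range(3)]
obs_q.put(None)
worker.join()
# actions -> [{'action': 0}, {'action': 2}, {'action': 4}]
```

In a real deployment the two queues become a network transport and the worker becomes the GPU server; the control-loop decoupling is the part that carries over.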
ROS-2-first. If your team's strength is classical robotics, your task involves motion planning or navigation, and your stack already includes MoveIt 2, Nav2, or Gazebo, ROS 2 Jazzy is the substrate. Trossen Robotics' Interbotix packages, Hello Robot's Stretch SDK, and the Unitree community ROS 2 bridge all live in this lane. For research that has to integrate with industrial cobots later, ROS 2 carries the legacy package ecosystem that LeRobot does not.
Vendor-SDK-first. If your platform is the Reachy Mini, the Pollen Python SDK is the path the platform was designed for. If your platform is the Unitree G1 and your task is whole-body control, Unitree's own SDK exposes the lowest-level interfaces. Vendor SDKs trade ecosystem breadth for depth on the specific platform.
NVIDIA Isaac plus GR00T. If your task is sim-to-real for a humanoid foundation model, the Isaac plus GR00T stack is the only option in the open-source space in early 2026. GR00T N1, released in March 2025 and described by NVIDIA as the world's first open foundation model for humanoid robots, sits on top of Isaac Sim and Isaac Lab and integrates with LeRobot through the model weights hosted on Hugging Face. The stack runs on NVIDIA hardware. For a defense-aligned R&D program with a private NVIDIA Elite Partner Channel GPU fleet, this is the natural overlap point.
Mixing stacks. Most working labs run a hybrid. LeRobot for the perception and policy layer, ROS 2 underneath for the lower-level joint control, vendor SDK at the lowest level for the platform-specific bits, and NVIDIA Isaac on the simulation side. The trick is keeping the interfaces clean. The platforms that handle this well, like the Unitree G1 with its Unitree-SDK plus community ROS 2 bridge plus LeRobot listing, are the most popular in current research because they let a lab pick the right tool for the right layer without locking out the other layers.
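One way to keep those interfaces clean is to let the policy layer see only an abstract robot interface, so LeRobot, a ROS 2 bridge, or a vendor SDK can each sit behind it. The sketch below shows the boundary with invented names; it is a design-pattern illustration, not any vendor's actual adapter.

```python
# Sketch of the 'clean interface between layers' idea: the policy layer
# only sees a Robot protocol, so any stack can sit behind it. All names
# here are illustrative, not a real SDK.

from typing import Protocol

class Robot(Protocol):
    def get_observation(self) -> dict: ...
    def send_action(self, action: dict) -> None: ...

class VendorSDKRobot:
    """Adapter wrapping a hypothetical vendor SDK behind the protocol."""
    def __init__(self):
        self.last_action = None

    def get_observation(self) -> dict:
        return {"joints": [0.0] * 6}    # real adapter: SDK sensor read

    def send_action(self, action: dict) -> None:
        self.last_action = action       # real adapter: SDK joint command

def policy_step(robot: Robot) -> dict:
    """One policy tick against whatever stack implements the protocol."""
    obs = robot.get_observation()
    action = {"joints": [j + 0.1 for j in obs["joints"]]}
    robot.send_action(action)
    return action
```

Swapping the G1's Unitree SDK for a ROS 2 bridge then means writing a second adapter, not rewriting the policy layer.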
Total Cost of Ownership Beyond Hardware
The sticker price is the smallest line item. The full cost of getting a research-grade humanoid running productively is dominated by four other costs, and most procurement plans underestimate every one of them.
Compute fleet. Training a vision-language-action policy on real teleoperated data takes GPU hours. SmolVLA's published training was on 4 GPUs, with the model also trainable on a single GPU. Inference can run on consumer-grade GPUs. For a research program that wants to retrain weekly on new data, a single workstation with a recent GPU does the job. For a program that wants to run 10 to 50 training experiments in parallel, the math points toward a small GPU fleet. We run our LeRobot work on a private GPU fleet sourced through the NVIDIA Elite Partner Channel because the data we touch in client prototypes cannot leave a controlled environment, but the same architecture also pays back the training cost on its own merits within a year for any lab running more than a few simultaneous experiments. Source: huggingface.co/blog/smolvla, retrieved 2026-05-02.
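The fleet-sizing math is back-of-envelope, and it is worth making it explicit. The helper below is illustrative arithmetic with made-up example numbers; plug in your own measured GPU-hours per run, and note it assumes the single-GPU-per-run case the SmolVLA paper describes.

```python
import math

# Back-of-envelope GPU fleet sizing. Illustrative only: the example
# numbers are invented, and one GPU per training run is assumed.

def gpus_needed(parallel_experiments: int, gpu_hours_per_run: float,
                wall_clock_budget_hours: float) -> int:
    """Minimum GPUs so all runs finish inside the wall-clock budget."""
    runs_per_gpu = max(1, int(wall_clock_budget_hours // gpu_hours_per_run))
    return math.ceil(parallel_experiments / runs_per_gpu)

# 20 weekly experiments at ~12 GPU-hours each, inside a 168-hour week:
# 168 // 12 = 14 runs per GPU, so ceil(20 / 14) = 2 GPUs.
```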
Training data infrastructure. Imitation learning is data-hungry. Each demonstration is a multi-modal time series of joint positions, camera frames, and force signals. Storage, labeling pipelines, and dataset versioning rapidly become bigger budget lines than the GPUs themselves. The Hugging Face Hub absorbs some of this load by hosting datasets like the 181 LeRobot datasets and the Pollen Robotics datasets including reachy-mini-dances-library and reachy-mini-emotions-library, but a working lab still needs its own data plane for in-house demonstrations. Source: huggingface.co/lerobot, huggingface.co/pollen-robotics, both retrieved 2026-05-02.
Integrator time. The single largest cost for most research programs is integrator time. Standing up a Unitree G1 and getting it walking reliably in the lab's specific space takes months of full-time work even for an experienced robotics engineer. Standing up an ALOHA bimanual rig with custom cameras, a teleoperation interface, and a working policy training pipeline is a multi-quarter project. The desktop tier, by contrast, is much faster: a Reachy Mini with the published demos plus a custom conversation app can run in days, not months. Pick the platform that matches the integrator hours your program can afford. If your program has $20,000 of capital and 100 hours of engineering, a $20,000 ALOHA rig fails the time budget. A $449 Reachy Mini plus a sub-$500 SO-101 plus 100 hours of engineering can deliver real research in the same window.
Compliance overlay. For defense-aligned, healthcare, and regulated-research programs, the data-handling side carries cost too. Demonstration data captured on a humanoid in a CUI environment falls under DFARS 252.204-7012 reporting and NIST SP 800-171 r3 protection requirements. Healthcare research data captured under IRB protocols falls under HIPAA Security Rule technical safeguards. Both regimes drive architecture choices: where the data lives, who can touch it, how policies are trained, where models are stored. A program that ignores compliance until the prototype works is a program that has to redo the integration once compliance becomes a deployment requirement. Plan it in from day one.
Petronella's Lens
Petronella Technology Group is a 23-year-old cybersecurity, compliance, and AI firm in Raleigh, North Carolina. We acquired our first Reachy Mini in spring 2026, around the same time we built out a private GPU fleet sourced through the NVIDIA Elite Partner Channel for AI work. The robotics practice is new for us. The cybersecurity, CMMC compliance, digital forensics, and private-AI infrastructure that sit underneath it are operational and verifiable. We disclose that distinction so a reader weighing our perspective knows where our experience runs deep and where it is recent.
What we run for ourselves: a Reachy Mini in our Raleigh lab, paired to LeRobot, paired to a private GPU fleet for any policy training that touches client data. We use it for conversation-AI prototyping where the camera and microphones feed a private LLM hosted on our infrastructure rather than a vendor cloud. We use it for perception experiments around how a SmolVLA-style model handles small-domain visual prompts. We use it as a demo platform for clients who want to evaluate a robotics R&D engagement before committing capital. The robot does not manipulate objects. It expresses, observes, listens, and responds.
What we recommend for which use case, given the criteria above and our hands-on experience with the desktop tier:
For a research-university lab teaching embodied AI to graduate students, the Reachy Mini Wireless plus a SO-101 arm together cost under $1,000 and cover the perception, conversation, and manipulation surfaces. For a defense-aligned R&D shop building a sovereignty demo, the same hardware plus a private GPU fleet for training delivers the talk-track without the closed-platform compliance risk that Boston Dynamics, Apptronik, or Figure-class platforms carry. For a healthcare innovation team exploring assistive technology under IRB, the Hello Robot Stretch 3 has the deepest published research lineage. For bimanual manipulation research, the ALOHA stack with Trossen ViperX arms has the most paper baselines. For whole-body locomotion research, the Unitree G1 is the only credible open(ish) option in its price range and has explicit LeRobot support, but plan for six months of integration before research begins.
What we do not run, do not recommend, and do not consult on: production manufacturing-line cobot integration, surgical robotics, weapons platforms, autonomous vehicles, and FDA Software-as-a-Medical-Device deliverables. Our wedge is prototyping and R&D for regulated industries on private AI infrastructure, not industrial automation or medical certification.
How to Pick Your Platform: A Decision Framework
Run the following sequence before you place the purchase order. If a step fails, go back and rework the previous step rather than waiving the criterion.
Step 1: Define the research or prototype question in one sentence. "I want to study bimanual towel folding using imitation learning" is a sentence. "I want to do robotics" is not. The sentence drives every other step.
Step 2: Find three published papers or projects that match the sentence. Read them. Note which platform each used, which software stack each used, and which dataset each trained on. The intersection of those three is your candidate hardware list.
Step 3: Walk the six decision criteria. DOF and reach, payload and dynamics, software ecosystem fit, open-source posture and license, support and community, budget and procurement vehicle. Drop any candidate that fails any criterion at your specific use case.
Step 4: Compute total cost of ownership. Capital, compute fleet, training data infrastructure, integrator time, compliance overlay. Multiply your honest estimate of integrator hours by your engineer's loaded hourly cost. Add it all up. If the number blows your budget by 3x, drop the candidate.
Step 5: Build the smallest possible pilot first. If your end state is an ALOHA rig, start with one ViperX 300 and a SO-101 to prove your team can run LeRobot end-to-end before you commit to two leader and two follower arms. If your end state is a Unitree G1 with whole-body control, start with a Reachy Mini to prove your perception pipeline before you commit to whole-body locomotion. Pilot first, scale second.
Step 6: Plan compliance from day one for regulated work. If the program will eventually run under DFARS 252.204-7012, NIST SP 800-171 r3, or HIPAA Security Rule, design the data plane and the training infrastructure to those requirements before you start collecting demonstrations. Retrofitting compliance after the prototype works is more expensive than building it in.
Step 7: Reserve budget for the second iteration. No first build hits the target. Plan for a v2 hardware refresh six to nine months after v1 is operational. The platforms that sustain a v2 plan, like the open-source Reachy Mini and the LeRobot-supported arms, save the budget that closed platforms consume on every refresh.
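Step 4's arithmetic is worth scripting so every candidate gets the same treatment. The sketch below is a minimal illustration of that step; every dollar figure and parameter name is a hypothetical placeholder, not a vendor quote.

```python
# Hypothetical total-cost-of-ownership sketch for Step 4.
# Every figure below is a placeholder, not a vendor quote.

def total_cost_of_ownership(capital, compute, data_infra,
                            integrator_hours, loaded_hourly_rate,
                            compliance_overlay=0):
    """Sum the five TCO buckets named in Step 4."""
    integrator_cost = integrator_hours * loaded_hourly_rate
    return capital + compute + data_infra + integrator_cost + compliance_overlay

def passes_budget(tco, budget, blowout_factor=3):
    """Step 4's drop rule: the candidate fails if TCO blows the budget by 3x."""
    return tco <= budget * blowout_factor

# Example: an illustrative ALOHA-class candidate (all numbers made up)
tco = total_cost_of_ownership(
    capital=25_000, compute=8_000, data_infra=3_000,
    integrator_hours=200, loaded_hourly_rate=120,
    compliance_overlay=5_000,
)
print(tco)                          # 65000
print(passes_budget(tco, 30_000))   # True: within 3x of a $30k budget
```

Running the same function over every candidate from Step 2 makes the drop decision mechanical rather than aspirational.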
Common Pitfalls
The mistakes we see most often in evaluating open-source humanoid hardware fall into four categories, plus one bonus.
Overweighting degrees of freedom. A 43-DOF humanoid is not 7x more useful than a 6-DOF arm. The marginal degree of freedom is rarely the bottleneck on a research project; the perception pipeline, the policy training data, and the safety envelope around the robot are. A team with a 23-DOF Unitree G1 and an underspecified perception stack is in a worse position than a team with a 6-DOF SO-101 and a clean visuomotor policy.
Ignoring open-source license fragmentation. Open-source means many things. The Reachy Mini hardware is described as open-source with CAD pending release, and its software is fully open-source on GitHub. The Unitree G1 is closed hardware with a partially documented SDK. SO-100 and SO-101 are open-hardware with public bills of materials. ALOHA's project page publishes the full BOM and code under permissive licenses. Read each platform's actual license before assuming it fits your funder's deliverable requirements. A program that promised an open-hardware deliverable and shipped a Unitree G1 is a program that fails its own grant terms.
Underestimating integrator support burden. The community surface around LeRobot and the Hugging Face Hub is real and helpful, but it is no substitute for an engineer who can read joint encoder data and debug a torque-control loop. Plan for the integrator. The platforms with the deepest documented support paths, like the Trossen ViperX series and the Hello Robot Stretch 3, save weeks of debugging over platforms with thinner support surfaces.
Mistaking developer kits for production platforms. Every platform in this guide is a research, education, or prototyping kit. None of them is a production cobot. The Reachy Mini is a $299 to $449 desktop expressive humanoid, not a manufacturing robot. The Unitree G1 is a research humanoid, not an industrial bipedal worker. Build the prototype on the open-source kit, then evaluate whether the production path requires a different platform class. Confusing the two leads to expensive replacement decisions late in the program.
Bonus pitfall: skipping compliance until late. For defense, healthcare, and regulated-research programs, the compliance overlay is not a final-mile detail. It shapes the data plane, the training infrastructure, and the deployment architecture. Programs that schedule compliance review for the last 5 percent of the build cycle routinely discover they have to rebuild 30 percent of the system to satisfy a control they could have designed in at the start. Plan for it on day one.
Frequently Asked Questions
What is the cheapest open-source humanoid robot in 2026? The Reachy Mini Lite at $299 is the lowest-priced open-source desktop humanoid published as of this writing. It pairs to a Mac or Linux host computer rather than running its own onboard compute. The Reachy Mini Wireless at $449 adds a Raspberry Pi 4, Wi-Fi, and a battery. Both prices are ex-tax and ex-shipping. Source: huggingface.co/blog/reachy-mini, retrieved 2026-05-02.
Is the Unitree G1 truly open-source? Partially. The Unitree G1 hardware is closed. The Unitree SDK is partially documented. A community-developed ROS 2 bridge sits on top, and the platform is explicitly listed as a LeRobot-supported hardware target, which means the policy and perception layers can be open-source even when the lower-level joint control is not. Programs that need full open-hardware deliverables should not select the G1; programs that need a credible bipedal humanoid platform with community support can.
Can I run LeRobot without a cloud account? Yes. LeRobot installs locally; Python 3.12 and PyTorch 2.10 are the minimum requirements, and the SmolVLA paper documents both single-GPU training and consumer-grade-GPU or CPU inference. The Hugging Face Hub is the recommended distribution path for models and datasets but is not a runtime dependency. For sovereignty-sensitive research, the local-first posture is the entire point. Source: huggingface.co/docs/lerobot, huggingface.co/blog/smolvla, both retrieved 2026-05-02.
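The local-first posture can also be enforced at the process level. `HF_HUB_OFFLINE` and `HF_HOME` are documented Hugging Face environment variables; the sketch below shows the general pattern of setting them before any Hub-aware import, and the cache path is a placeholder for your own on-prem storage, not a LeRobot-specific setting.

```python
# Sketch: pin the Hugging Face stack to local-only operation before
# importing any library that might touch the Hub. HF_HUB_OFFLINE and
# HF_HOME are documented huggingface_hub environment variables; the
# cache path below is a placeholder for your own on-prem storage.
import os

os.environ["HF_HUB_OFFLINE"] = "1"       # refuse all Hub network calls
os.environ["HF_HOME"] = "/srv/hf-cache"  # keep model/dataset caches on-prem

# Imports of lerobot, transformers, etc. would go below this line so
# they inherit the offline settings at import time.
print(os.environ["HF_HUB_OFFLINE"])  # 1
```

For an Authority to Operate or HIPAA review, the value of this pattern is that the network boundary is an auditable configuration rather than a policy promise.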
What is the difference between the Reachy Mini and the Unitree G1? The two robots are not direct substitutes. The Reachy Mini is a $299 to $449 desktop expressive humanoid with 6 head DOF, a wide-angle camera, four microphones, and no arms or locomotion, optimized for conversation AI, perception research, and education. The Unitree G1 is a $16,000-plus full-size bipedal humanoid with 23 to 43 DOF depending on trim, optimized for whole-body locomotion and full-body teleoperation research. Use both if your research program covers both layers; pick one if your research question lives in one layer.
Which open-source robot has the best community in 2026? By GitHub stars and Hugging Face Hub presence, the LeRobot ecosystem is the largest open-source robotics community in 2026, with 12,000-plus stars at the April 2025 acquisition mark, 39 LeRobot models on the Hub, 181 LeRobot datasets, and a 100-plus repo ecosystem. Any of the LeRobot-supported platforms, including SO-100 / SO-101, Koch v1.1, Reachy Mini, Reachy 2, Hope Jr, OMX, OpenArm, LeKiwi, Earth Rover Mini, ALOHA, ViperX, and the Unitree G1, sit inside that community surface. Source: huggingface.co/lerobot, huggingface.co/blog/hugging-face-pollen-robotics-acquisition, both retrieved 2026-05-02.
Can I use these platforms in a defense or healthcare prototype? Yes, with planning. Open-source posture makes the architecture inspectable, which is the prerequisite for sovereign deployment. For defense work touching Controlled Unclassified Information, plan against DFARS 252.204-7012 reporting and NIST SP 800-171 r3 protection requirements from the data-collection step forward. For healthcare research touching protected health information, plan against the HIPAA Security Rule technical safeguards and IRB Common Rule protocol from the same step forward. The wedge for regulated work is private AI infrastructure that the data plane never leaves. Petronella's robotics overview covers the architecture pattern in detail.
How do I choose between LeRobot and ROS 2? If your team's strength is machine learning and your task is imitation learning, vision-language-action policy training, or rapid prototyping on a small platform, start with LeRobot. If your team's strength is classical robotics and your task involves motion planning, navigation, or integrating with industrial cobots, start with ROS 2 Jazzy. If your project needs both, run them as layers: LeRobot for perception and policy, ROS 2 for lower-level joint control and motion planning, vendor SDK at the lowest level. The platforms that support that hybrid pattern, like the Unitree G1 and the Trossen ViperX, are the most popular in current research because they preserve flexibility.
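The three-layer hybrid described above can be sketched as three interfaces stacked by composition. Everything below is hypothetical scaffolding to show the shape of the pattern; none of these class or method names come from LeRobot, ROS 2, or any vendor SDK.

```python
# Hypothetical sketch of the three-layer hybrid pattern: LeRobot-style
# policy on top, ROS 2-style planner in the middle, vendor SDK at the
# bottom. All names here are illustrative, not real APIs.
from dataclasses import dataclass

@dataclass
class JointCommand:
    positions: list  # target joint positions, radians

class VendorSDK:
    """Lowest layer: joint-level control (the vendor SDK in practice)."""
    def send(self, cmd: JointCommand) -> str:
        return f"drove {len(cmd.positions)} joints"

class MotionPlanner:
    """Middle layer: motion planning and safety (ROS 2 in practice)."""
    def __init__(self, sdk: VendorSDK):
        self.sdk = sdk
    def execute(self, waypoint: list) -> str:
        # A real planner would interpolate and check joint limits here.
        return self.sdk.send(JointCommand(positions=waypoint))

class Policy:
    """Top layer: perception plus learned policy (LeRobot in practice)."""
    def __init__(self, planner: MotionPlanner):
        self.planner = planner
    def step(self, observation: dict) -> str:
        # A real policy would run a trained model on the observation.
        waypoint = [0.0] * observation.get("dof", 6)
        return self.planner.execute(waypoint)

robot = Policy(MotionPlanner(VendorSDK()))
print(robot.step({"dof": 6}))  # drove 6 joints
```

The design point is that each layer only talks to the one below it, so swapping the vendor SDK or the policy framework does not disturb the other two layers.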
What other open-source humanoid options exist in 2026? Beyond the LeRobot-supported list, Hello Robot's Stretch 3 anchors the research-mobile-manipulation segment with native ROS 2 support and a strong academic track record. Boston Dynamics is closed, Figure is closed, Apptronik is closed, and most Chinese-funded humanoid programs are partially or fully closed. The open-source frontier in 2026 is dominated by Pollen Robotics, The Robot Studio (SO-100 / SO-101), Trossen Robotics (ALOHA, ViperX), Hello Robot (Stretch 3), and the Unitree-plus-LeRobot bridge for the G1. New entrants appear quarterly; the LeRobot supported-hardware roster is the best public clearinghouse to track them.
Want to discuss platform selection for your specific use case? Schedule a robotics scoping call or call Petronella Technology Group at (919) 348-4912.
About the Author
Craig Petronella is the founder and CEO of Petronella Technology Group, a 23-year-old cybersecurity, compliance, AI, and robotics R&D firm based in Raleigh, North Carolina. Petronella holds the CMMC Registered Practitioner credential, CCNA, CWNE, and is an NC Licensed Digital Forensics Examiner (DFE #604180). His firm is a CMMC-AB Registered Provider Organization, RPO #1449, verifiable at the Cyber AB member directory. Petronella Technology Group acquired its first Reachy Mini in spring 2026 and runs the surrounding LeRobot stack on a private GPU fleet sourced through the NVIDIA Elite Partner Channel.
The robotics practice at Petronella Technology Group is new. The cybersecurity, CMMC compliance, digital forensics, and private-AI infrastructure under it are operational and verifiable. Both halves of that statement are written into every robotics page on this site so readers can weight our perspective against our hands-on experience tier by tier.
Last updated: 2026-05-02. Reviewed by: the Petronella robotics R&D group.
Citations
- Pollen Robotics, "Reachy Mini," Hugging Face Blog, 2025-07-09. huggingface.co/blog/reachy-mini
- Hugging Face, "Hugging Face acquires Pollen Robotics," 2025-04-14. huggingface.co/blog/hugging-face-pollen-robotics-acquisition
- Hugging Face, LeRobot documentation. huggingface.co/docs/lerobot
- Hugging Face, "SmolVLA: A Vision-Language-Action Model for Affordable Robotics." huggingface.co/blog/smolvla
- Unitree Robotics, G1 product page. unitree.com/g1
- Hello Robot, Stretch 3 product page. hello-robot.com/stretch-3
- ROS 2 Jazzy documentation. docs.ros.org/en/jazzy
- Stanford ALOHA project page. tonyzhaozh.github.io/aloha
- NIST SP 800-171 r3, Protecting Controlled Unclassified Information. csrc.nist.gov/pubs/sp/800/171/r3
- DFARS 252.204-7012, Safeguarding Covered Defense Information. acquisition.gov/dfars/252.204-7012
- Petronella Technology Group, CMMC-AB RPO #1449 verification. cyberab.org/Member/RPO-1449
Related Reading on petronellatech.com
- Custom Robotics Development at Petronella Technology Group
- Reachy Mini Hardware Specifications
- Reachy Mini vs Unitree G1: Which Open-Source Humanoid Fits Your Research Program?
- Private AI Infrastructure
- CMMC Compliance Services
Get the Secure Robotics Development Brief
Tell Petronella Technology Group about your robotics project. We will reply within 4 business hours with a CMMC-RP led scoping conversation and the early-access edition of our Secure Robotics Development Brief covering CUI handling, on-prem AI inference for robotics, and CMMC-aligned development practices. No obligation. No sales pressure.