Petronella Solutions / Robotics Prototyping

Custom robotics prototyping in weeks

A scoped engagement that takes a regulated-industry research idea, drops it onto a Reachy Mini in our Raleigh lab, runs the training and inference on a private GPU fleet, and hands back a working prototype with the security plan, runbook, and source code that lets your own team take over.

FROM Scoped pilots → Multi-phase programs
2-WEEK target
Prototype Cycle
ROS 2 + LeRobot
Stack
OPEN hardware
We Ship
Usually respond within 1 hour · No hard sell
Since 2002 · ROS 2 · Sim-to-real · NC engineering team
Who this engagement is for

Three buyer profiles, one engagement model

Robotics prototyping has to clear two bars at once: the technical bar (does the thing actually work on real hardware) and the regulatory bar (do the data plane, code provenance, and physical lab environment satisfy whoever signs the funding letter). Petronella Technology Group built this engagement specifically for buyers who have to clear both.

Profile A

Defense Principal Investigators

SBIR Phase I or II teams, university-affiliated research centers, prime-contractor research arms, or DoD lab partners working under DFARS 252.204-7012 with controlled unclassified information (CUI) in the data plane.

  • Need CUI-aware development from the first commit, not retrofitted at audit time
  • Need a workflow that survives an inherited DCMA review or third-party CMMC L2 assessment
  • Need the prototype source, datasets, and trained weights to stay inside a controlled enclave
Profile B

University Research Labs

NSF Foundational Research in Robotics or NRI-funded teams, NIH-funded neuroscience-robotics groups, or campus AI institutes that need a working prototype to anchor a paper, a follow-on grant, or a thesis defense.

  • Want a partner who can ship working code, not a vendor reading from a deck
  • Need the prototype publishable to GitHub or Hugging Face under a permissive license when grant terms allow
  • Need a teleop or sim-to-real pipeline a graduate student can pick up and extend after handoff
Profile C

Healthcare Research Groups

IRB-governed research teams at academic medical centers, hospital innovation offices, biomedical engineering labs, or rehabilitation-research centers exploring robotics for assistive, training, or perception research, where the data plane often touches HIPAA-adjacent content.

  • Need data-handling that satisfies the HIPAA Security Rule and the IRB protocol simultaneously
  • Need a path that explicitly stops short of FDA Software-as-a-Medical-Device classification
  • Need a prototype that is auditable but not regulatory-burdened beyond the research scope

Honest novelty disclosure. Petronella Technology Group has been doing cybersecurity, compliance, and private AI infrastructure work since 2002. Robotics is a new application of that 23-year foundation. We do not have a portfolio of completed robotics client engagements yet. What we do have is a Reachy Mini operating in our Raleigh lab, a private NVIDIA Elite Partner Channel GPU fleet running training and inference for our existing AI clients, CMMC-AB Registered Provider Organization #1449 status, and an engineering team with the cybersecurity depth most robotics shops do not bring to a regulated-industry table. We are scoping our first robotics engagements now.

Engagement phases

Discovery to handoff in four phases

Every engagement runs the same four phases regardless of vertical. The depth of each phase scales with scope, but the sequence is fixed because skipping a phase produces a prototype that either does not work in production lighting or does not survive a compliance review.

Phase 01 / Week 0

Discovery and scoping

Output: Scope memo · Risk register

We sit with your principal investigator, lab director, or research lead and walk through the research question, the success criteria, and the data the prototype will touch. We map the regulatory environment (CMMC level, DFARS clauses, IRB protocol, HIPAA boundary), the hardware constraints (does the prototype run on Reachy Mini alone or does it need a custom mount, additional sensors, or a different platform), and the compute envelope (GPU hours for training, inference latency targets, and whether the prototype must run disconnected from the internet).

  • Stakeholder interviews with PI, lab manager, IT or security counterpart
  • Regulatory landscape mapping cited to NIST, DFARS, HIPAA, or IRB sources
  • Reference hardware bench against Reachy Mini, SO-101, Koch v1.1, Stretch 3, or Unitree G1 from the LeRobot supported-platform list
  • Initial threat model and data-handling boundary diagram
  • Written scope memo with explicit "in scope" and "out of scope" language
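One way to make the Phase 01 risk register concrete from day one is to treat it as structured data rather than prose. A minimal Python sketch; the field names and the sample entry are illustrative, not a Petronella template:

```python
from dataclasses import dataclass, asdict

@dataclass
class RiskEntry:
    """One row of a Phase 01 risk register (illustrative fields only)."""
    risk_id: str
    description: str
    framework: str   # e.g. "DFARS 252.204-7012", "HIPAA Security Rule"
    likelihood: str  # "low" / "medium" / "high"
    impact: str
    mitigation: str
    in_scope: bool = True

# A hypothetical entry of the kind a teleop engagement might record
register = [
    RiskEntry(
        risk_id="R-001",
        description="Teleop video frames may capture CUI on lab whiteboards",
        framework="DFARS 252.204-7012",
        likelihood="medium",
        impact="high",
        mitigation="Restrict camera field of view; review frames before storage",
    ),
]

# Render the register for a scope-memo appendix
for entry in register:
    row = asdict(entry)
    print(f"{row['risk_id']}: {row['description']} [{row['framework']}]")
```

Keeping the register as data rather than a slide means it can be diffed, reviewed, and carried forward into the Phase 02 security plan unchanged.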
Phase 02 / Weeks 1 to 2

Architecture and security plan

Output: Architecture doc · SSDF-aligned plan

We design the system. The architecture document covers the perception stack, the policy or controller stack, the simulation environment, the teleoperation path, the data pipeline (collection, labeling, storage, retention), the training cluster mapping onto our private GPU fleet, and the inference path (on-device on the Reachy Mini Wireless Pi 4, or off-board on the operator workstation). The security plan is written against the NIST SP 800-218 Secure Software Development Framework (SSDF) with explicit mapping to your applicable framework (CMMC L2 from NIST SP 800-171 r3, HIPAA Security Rule, or IRB protocol).

  • System architecture document with hardware bill of materials, software stack diagram, data flow, and trust boundaries
  • Secrets-handling plan covering API keys, model weights, and dataset custody
  • Source-code repository topology with branch protection, signed commits, and review gates
  • Initial software bill of materials (SBOM) inventory of every dependency that will land in the prototype
  • Test plan covering unit tests, simulation tests, and bench tests on the physical robot
Phase 03 / Weeks 2 to 6

Prototype build and bench

Output: Working prototype · Demo recording

We build. Code lands in your repository (or ours, mirrored to yours) under SSDF-aligned commit hygiene. Datasets come from teleoperation in our Raleigh lab on the Reachy Mini, from data you provide, or from simulation in the open-source MuJoCo SDK that ships with Reachy Mini. Training runs on our private GPU fleet under the data-handling boundary defined in Phase 02. The prototype is benched in our lab against the success criteria from the scope memo, with a recorded video walkthrough you can show your sponsor or your IRB.

  • Iterative prototype builds, demoed weekly to the PI
  • Sim-to-real transfer via the Reachy Mini MuJoCo simulation SDK and our owned compute
  • Continuous integration with simulated and bench tests gating every merge
  • Dataset versioning with provenance tags so the audit trail of what data trained what model survives the engagement
  • Mid-phase security review against the Phase 02 plan, with documented findings and fixes
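The dataset-versioning idea above can be sketched in a few lines: a manifest that content-hashes each dataset file and records the training script and hyperparameters beside it, so "what data trained what model" has a machine-checkable answer. A minimal illustration; the file names and manifest fields are hypothetical, and a real pipeline would track more (robot calibration state, GPU configuration, software versions):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash so the manifest pins an exact file, not just a filename."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(dataset_files, training_script, hyperparams, out_path):
    """Emit a provenance manifest tying a checkpoint to its exact inputs."""
    manifest = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "datasets": [
            {"path": str(p), "sha256": sha256_of(Path(p))} for p in dataset_files
        ],
        "training_script": training_script,
        "hyperparameters": hyperparams,
    }
    Path(out_path).write_text(json.dumps(manifest, indent=2))
    return manifest
```

If the hash in the manifest does not match the file on disk at audit time, the discrepancy is detectable in one line rather than an archaeology project.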
Phase 04 / Weeks 6 to 8

Handoff and operate-it-yourself

Output: Source · Runbook · Training

The whole point of this engagement is that your team owns the prototype at the end. Phase 04 is the handoff. We deliver the source-code repository, the trained model weights with provenance, the architecture document, the security plan with control mapping, the runbook for re-running training and inference on your own GPU fleet, and a live training session for the graduate students, engineers, or staff who will operate it after we leave. We do not lock anything behind a license you have to renew.

  • Final source repository with documented build, train, evaluate, and deploy commands
  • Operator runbook including teardown, restart, calibration, and known-issue triage
  • Final SBOM matching the deployed prototype, suitable for supply-chain attestation
  • Live training session (typically 90 minutes, recorded for replay) for your team
  • Optional 30-day support window for issues encountered during the first month of self-operation
Deliverables matrix

What you receive at handoff

A robotics prototype that comes with no documentation is a science demo your team cannot reproduce. The Petronella deliverables matrix is built so your team can rebuild, retrain, redeploy, and audit the prototype after we leave. Every row in the table is a real artifact, not a slide.

Robotics prototyping engagement deliverables, format, and purpose
Deliverable · Format · Why it matters
System architecture document · Markdown in repo + PDF export · Hardware bill of materials, software stack diagram, data flow, trust boundaries, sensor topology, training cluster mapping, inference path, sim-to-real strategy. Survives team turnover and grant continuation reviews.
Hardware list and procurement notes · Markdown spreadsheet · Every part with manufacturer, part number, quantity, current price, lead time, and any sourcing caveats. Includes Reachy Mini SKUs (Lite at $299, Wireless at $449 per the Pollen Robotics published pricing), workstation specs, network gear, and any custom mounts.
Source code repository · Git repo with signed commits · Full commit history with SSDF-aligned branch protection, mandatory review, and signed-commit policy. Repo lives in your GitHub, GitLab, or self-hosted Gitea organization or is mirrored to your environment at handoff.
Trained model weights with provenance · Model files + dataset manifest · The deployed model checkpoint plus a manifest pointing to the exact training dataset, training script, hyperparameters, and GPU configuration that produced it. Provenance is the difference between a reproducible result and a research artifact you cannot defend.
Security plan and control mapping · SSP-style document · Written against NIST SP 800-218 SSDF and mapped to your applicable framework (NIST SP 800-171 controls for CMMC L2, HIPAA Security Rule safeguards, or IRB protocol requirements). Suitable as input to a body-of-evidence package.
Software bill of materials (SBOM) · SPDX or CycloneDX JSON · Per CISA SBOM guidance. Every direct and transitive dependency with version, license, and source. Supports supply-chain attestation if the prototype later moves into a production-adjacent program.
Operator runbook · Markdown in repo · Build, train, evaluate, deploy, calibrate, teardown, restart, and known-issue triage. Written for the graduate student or engineer who has to keep this running after the principal investigator moves to the next paper.
Demo recording · MP4 + transcript · Bench-test walkthrough you can show your sponsor, your IRB, your DCMA reviewer, or your continuation-funding committee without having to recreate the demo on demand.
Live handoff training · 90-minute recorded session · Your team running the prototype themselves with our engineer on the call answering questions in real time. Recording goes into the repo so the next person who joins the project can come up to speed without scheduling a meeting.
Hardware and compute

The lab environment we run prototypes on

A robotics prototyping engagement only works if the lab behind it is real. Petronella Technology Group runs a fixed set of hardware and a private GPU fleet that we already operate for our existing AI infrastructure clients. That same fleet is what your prototype trains and runs inference on, which is how the data-sovereignty story stops being a slogan and starts being a verifiable boundary.

Robot platform

Reachy Mini in Raleigh

Pollen Robotics open-source desktop humanoid (acquired by Hugging Face in April 2025) operating in our Raleigh, NC lab. 28 cm tall, 1.5 kg, 6-DOF head, full body rotation, 1 wide-angle camera, 4 microphones, 5 W speaker. Lite variant tethers to a Mac or Linux host; Wireless variant has an onboard Raspberry Pi 4 and Wi-Fi. We use both. See the Reachy Mini hardware specs page for the full breakdown.

Software stack

LeRobot, Python SDK, MuJoCo

The open-source Hugging Face LeRobot library is our default training and policy stack (Python 3.12+, PyTorch 2.10+). The Pollen-published Python SDK drives Reachy Mini directly. Simulation runs in the MuJoCo-based Reachy Mini SDK, also open-source. ROS 2 community bridges may be used downstream where the project warrants it, but we lead with the LeRobot stack because the entire Hugging Face ecosystem feeds it.

Training compute

Private GPU fleet

Petronella owns and operates a private GPU fleet sourced through the NVIDIA Elite Partner Channel. The fleet runs on a tenanted private network behind our compliance-aligned data plane. Training jobs for client prototypes are scheduled into a tenant we provision per engagement. Your dataset, your model weights, and your training logs do not leave that tenant.

Workstations

RTX PRO and DGX-class hosts

Engineer workstations and tethered training rigs run NVIDIA RTX PRO GPUs (and DGX-class hosts where the workload demands it) sourced through the same channel. Stack details and specifications are documented on the AI workstations page.

Network

Segmented dev VLANs

The lab network is segmented. Robotics dev VLANs are isolated from administrative networks, do not egress to the public internet by default, and are observed by the same SOC tooling we run for our managed-services clients. The robot itself sits behind that boundary during teleoperation and bench tests.

Storage

Tenant-isolated datasets

Datasets, video recordings, and trained checkpoints land in tenant-isolated storage with documented retention. Dataset provenance is tracked from collection through training to handoff, so the manifest your team receives at Phase 04 is the same manifest the audit trail follows.

Security and compliance overlay

CMMC-aligned development, by default

Almost no robotics shop builds prototypes with the assumption that the work might land in a CMMC L2 audit, an OCR review, or an IRB inspection. Petronella does, because that is the customer profile we serve every day in our cybersecurity practice. The compliance overlay is not bolted on at the end of the engagement. It governs the scope memo on day one.

NIST SP 800-218 SSDF as the default development standard

Every robotics engagement follows the four practice groups of NIST SP 800-218 SSDF (Prepare the Organization, Protect the Software, Produce Well-Secured Software, Respond to Vulnerabilities). Branch protection, signed commits, mandatory code review, vulnerability scanning on dependencies, and incident-response procedures for the development environment are configured before the first feature commit lands.

NIST SP 800-171 r3 mapping for defense work

For defense PI work, the security plan maps controls to NIST SP 800-171 r3 directly, anchored against the CMMC Final Rule (32 CFR Part 170). Petronella holds CMMC-AB Registered Provider Organization #1449 (verifiable at cyberab.org) and our team is CMMC-RP certified. We do not certify your environment ourselves (that is the C3PAO role), but the prototype hands off in a state that is ready to be assessed.

DFARS clause-aware data handling

Where the engagement touches CUI, data handling is governed by DFARS 252.204-7012 (Safeguarding Covered Defense Information and Cyber Incident Reporting). Datasets, trained weights, and source code that may carry CUI live in tenant-isolated storage on the private GPU fleet, never on a public cloud bucket, never on a shared developer laptop. Our team is briefed on incident-reporting timelines before the engagement starts.

Secrets handling and key custody

API keys, model-registry tokens, SSH keys, and any third-party service credentials live in a managed secrets vault, not in the source repository, not in shared chat. Pre-commit hooks scan for accidental secret leakage. Key custody for any encryption keys (storage, model registry, simulator state) is documented in the security plan with rotation policy and revocation procedures.
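A pre-commit secret scan of the kind described here can be as small as a regex pass over staged files. This is an illustrative sketch only; production hooks typically use dedicated tools such as detect-secrets or gitleaks, which ship far broader rule sets:

```python
import re
import sys

# Illustrative patterns only; real scanners cover hundreds of credential shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def scan_text(text: str) -> list:
    """Return the secret-like strings found in a blob of text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def main(paths) -> int:
    """Pre-commit entry point: a nonzero exit code blocks the commit."""
    failed = False
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            if scan_text(f.read()):
                print(f"possible secret in {path}", file=sys.stderr)
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired into a pre-commit hook, the scan runs before every commit, so a leaked credential is caught on the developer's machine rather than in the repository history.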

Software bill of materials (SBOM) on every deliverable

An SBOM is generated at build time using either SPDX or CycloneDX format per CISA SBOM guidance. Every direct dependency, every transitive dependency, every license, every upstream source URL. The SBOM ships with the final artifact at Phase 04 and is the foundation for any later supply-chain attestation if the prototype moves into a production-adjacent setting.
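Real SBOM generation normally runs through dedicated tooling at build time, but the core idea reduces to enumerating dependencies with version and license and emitting them in a standard shape. A minimal Python sketch that inventories the installed environment into a CycloneDX-shaped JSON document (a tiny illustrative subset of the actual CycloneDX schema, not a compliant generator):

```python
import json
from importlib import metadata

def dependency_inventory() -> list:
    """Enumerate installed distributions with version and declared license."""
    components = []
    for dist in metadata.distributions():
        components.append({
            "type": "library",
            "name": dist.metadata["Name"],
            "version": dist.version,
            "license": dist.metadata.get("License", "UNKNOWN"),
        })
    return sorted(components, key=lambda c: (c["name"] or "").lower())

def write_sbom(path: str) -> None:
    """Write a CycloneDX-shaped JSON document (minimal subset, illustration only)."""
    doc = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": dependency_inventory(),
    }
    with open(path, "w") as f:
        json.dump(doc, f, indent=2)
```

Note this only sees direct installs in the current environment; a production SBOM tool also resolves transitive dependencies and upstream source URLs, which is why the deliverable uses proper tooling rather than a script like this.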

HIPAA Security Rule overlay for healthcare research

For healthcare research engagements where the data plane brushes against ePHI or research subject identifiers, the engagement adds a HIPAA Security Rule control mapping (administrative, physical, and technical safeguards) and aligns with the Common Rule (45 CFR 46) where IRB review applies. Data minimization and de-identification are configured into the dataset pipeline before collection starts, not after.
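One common pattern for the de-identification step described above is keyed pseudonymization: direct identifiers are stripped, and the subject ID is replaced with an HMAC computed under a study-specific key held outside the dataset. A sketch under that assumption; the field names are illustrative, and whether this technique satisfies a particular IRB protocol or the HIPAA de-identification standards is a determination for that protocol, not this code:

```python
import hmac
import hashlib

def deidentify_subject_id(subject_id: str, study_key: bytes) -> str:
    """Replace a subject identifier with a keyed pseudonym.

    HMAC-SHA-256 under a study-specific key kept outside the dataset: the
    mapping is reproducible inside the study but not reversible from the
    released data alone.
    """
    return hmac.new(study_key, subject_id.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict, study_key: bytes) -> dict:
    """Strip direct identifiers and pseudonymize the subject ID (illustrative fields)."""
    clean = {k: v for k, v in record.items() if k not in {"name", "mrn", "dob"}}
    clean["subject_pseudonym"] = deidentify_subject_id(record["subject_id"], study_key)
    del clean["subject_id"]
    return clean
```

Because the scrub runs in the collection pipeline, identifiers never land in the training store in the first place, which is the "before collection starts, not after" point above.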

Audit trail across the engagement

Every commit, every training job, every dataset version, every model checkpoint is logged. The log is queryable at handoff. If your sponsor, your IRB, your DCMA reviewer, or your continuation-funding committee asks "what data trained this model and who reviewed the code that deployed it," the answer is a query against the audit log, not an archaeological expedition.
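"Queryable at handoff" can be pictured as nothing more exotic than a relational audit table. A minimal sketch, assuming a SQLite log with one row per training run; the schema and column names are illustrative:

```python
import sqlite3

# Illustrative schema: one row per training run linking checkpoint,
# dataset version, and the reviewed commit that produced it.
SCHEMA = """
CREATE TABLE IF NOT EXISTS training_runs (
    run_id TEXT PRIMARY KEY,
    checkpoint TEXT,
    dataset_version TEXT,
    commit_sha TEXT,
    reviewer TEXT,
    started_utc TEXT
);
"""

def provenance_for_checkpoint(db: sqlite3.Connection, checkpoint: str):
    """Answer 'what data trained this model and who reviewed the code'."""
    row = db.execute(
        "SELECT dataset_version, commit_sha, reviewer FROM training_runs "
        "WHERE checkpoint = ?",
        (checkpoint,),
    ).fetchone()
    return row  # (dataset_version, commit_sha, reviewer) or None
```

The reviewer's question becomes a one-line query instead of a reconstruction exercise across chat logs and file timestamps.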

What we are not

Petronella is a CMMC-AB RPO, not a C3PAO. We help you reach assessment readiness; the formal certification audit is performed by an independent C3PAO. We are not a clinical regulatory affairs firm; the FDA SaMD pathway is explicitly outside this engagement (see the out-of-scope list below). We do not represent ourselves as authorized partners of any robot vendor, and we do not claim Pollen Robotics or Hugging Face partner status because no public partner program from either organization currently grants it.

Engagement templates we offer

Six prototype templates we can scope today

These are templates Petronella offers as starting points for a Phase 01 scoping conversation. They are not completed client engagements (this is a new robotics practice; we do not have a client portfolio yet). Treat each one as a worked example of how Petronella would approach a recognizable research question, what hardware it would land on, and where the compliance edges sit. Real scopes always shift after we sit with your PI.

Template 01

Sovereign teleoperation research bench

A teleop bench where a human operator drives the Reachy Mini through a manipulation or perception task while a logging pipeline collects state, camera, and command tuples for downstream policy training. Runs entirely inside a tenant-isolated network with no public-cloud egress on the data plane.

  • Hardware: Reachy Mini Wireless plus operator workstation
  • Stack: LeRobot data-collection harness, Pollen Python SDK, MuJoCo for sim parity
  • Compliance edges: Suitable for CUI-aware research with NIST SP 800-171 mapping
Template 02

Conversational research assistant on Reachy Mini

An expressive desktop research assistant that uses the Reachy Mini head and microphone array for a conversational interface running fully on-prem, no public cloud language-model API. Speech-to-text, intent classification, and text-to-speech all hosted on the private GPU fleet. Useful for university research demos and accessibility-focused IRB studies.

  • Hardware: Reachy Mini Wireless plus on-prem inference workstation
  • Stack: LeRobot orchestration, on-prem speech models, custom intent layer
  • Compliance edges: HIPAA-Security-Rule overlay if healthcare research
Template 03

Sim-to-real perception transfer study

A study that trains a perception model in the MuJoCo Reachy Mini simulator, transfers the policy to the physical robot, and measures the sim-to-real gap. Useful for graduate research papers, follow-on grant evidence, or pre-publication reviewer responses on a SmolVLA-style result.

  • Hardware: Reachy Mini Lite plus training rig
  • Stack: MuJoCo simulation SDK, LeRobot policy training, evaluation harness
  • Compliance edges: Open-source-publishable when grant terms allow
Template 04

Air-gapped CUI-aware imitation-learning pipeline

An imitation-learning pipeline where teleoperated demonstrations are collected on a sensitive task, training runs on the private GPU fleet behind a CUI boundary, and inference deploys back to the robot inside the same enclave. Built for SBIR Phase II teams or DoD university-affiliated research center groups.

  • Hardware: Reachy Mini Wireless inside isolated lab segment
  • Stack: LeRobot imitation-learning algorithms (ACT, diffusion policies), encrypted dataset store
  • Compliance edges: NIST SP 800-171 r3 control mapping plus DFARS 252.204-7012 data handling
Template 05

Comparative evaluation harness across LeRobot-supported platforms

An evaluation harness that runs the same LeRobot-trained policy against multiple supported hardware platforms (Reachy Mini, SO-101, Koch v1.1, or Stretch 3 depending on availability) to benchmark transfer and produce a publishable comparison. Useful for academic groups working a measurement-grade paper.

  • Hardware: Reachy Mini in Raleigh plus client-provided second platform
  • Stack: LeRobot evaluation framework, standardized metrics, reproducibility harness
  • Compliance edges: Publishable; provenance-tracked for reproducibility
Template 06

Healthcare research observation study

A non-clinical observation study where the Reachy Mini hosts a conversation, training, or perception interaction inside an IRB protocol. Data minimization and de-identification configured into the pipeline before any subject session begins. Explicit boundary: this is research-grade work, not a medical device, and the engagement ends short of any FDA pathway.

  • Hardware: Reachy Mini Wireless inside IRB-approved research environment
  • Stack: LeRobot perception layer, de-identification pre-processor, audit-grade logging
  • Compliance edges: HIPAA Security Rule plus Common Rule plus IRB protocol mapping

These templates exist to make a scoping conversation faster. None of them is a deliverable in itself. Your scope memo is the deliverable, and it always reflects your research question, your funding terms, and your regulatory environment, not a template lifted off a website.

Pricing model

How engagements are priced (and why we do not list a number)

Petronella Technology Group prices robotics prototyping engagements case-by-case. We do not publish per-week rates because every variable that matters (regulatory framework, hardware availability, dataset complexity, compute envelope, handoff scope) shifts the number. We do publish the structure of how we price.

Two pricing structures, picked at scope memo

Time and materials (T&M). Used when the research question is open-ended, the success criteria are exploratory, or the dataset is being collected during the engagement. Hours billed against named engineering and research roles, GPU hours billed at our published internal rate, hardware billed at cost. T&M engagements run with a written ceiling and weekly burn reports so nobody is surprised.

Fixed-scope phases. Used when the four phases above can each be scoped to a specific deliverable list and defined acceptance criteria. Each phase is its own fixed-fee unit. You can stop at the end of any phase if the prototype hits a hard pivot or if the funding situation changes. We do not run multi-phase fixed-fee programs because they punish honest scope discovery.

First-engagement pricing is established case-by-case. This is a new robotics practice for Petronella. We are calibrating engagement scope and pricing against real client requirements rather than against a rate card written in a vacuum. Founding-customer engagements come with extra senior engineering attention in exchange for being the first cohort to run the engagement template.

What we do not do. We do not bill against arbitrary licensing schemes. We do not lock the prototype source behind a license you have to renew. We do not require recurring managed-services subscriptions to keep the prototype operational. The handoff is a real handoff.

How we estimate at scope memo. The Phase 01 scope memo includes a written estimate broken out by phase, by named role, and by GPU hours, so you can compare against your funding terms or your sponsor allowable rates. Where the engagement is funded by a federal grant or an SBIR award, we line up the estimate against the grant allowable cost categories rather than asking you to translate the invoice yourself. Where the engagement is funded by departmental discretionary budget, we work to a written ceiling that does not require new approvals to honor.

What changes the price up. A second hardware platform alongside the Reachy Mini, custom mechanical adapters, datasets that have to be collected from scratch in a regulated environment, multiple operator workstations, on-site travel beyond a single Phase 01 working session, and any extension of Phase 04 support past the included 30-day window. We surface every one of these at scope memo time so the number on the page is the number on the invoice.

What we will not include in an estimate. Numbers we cannot defend. If a phase depends on data we have not seen yet, on a regulatory framework we have not finished mapping, or on hardware we have not yet held in our lab, we will say so in writing and price that phase as time and materials with a ceiling rather than guess at fixed fee. The principle here is the same as the no-fabrication rule that governs everything else on this page.

Out of scope

What this engagement explicitly does not cover

Robotics is a wide field. Petronella Technology Group has a sharp wedge inside it. The clearer we are about what we do not take on, the faster a scoping call goes and the less likely a misaligned engagement gets onto the calendar.

We do not take on these engagements

  • Production-line industrial integration. We do not integrate cobot arms into manufacturing cells, do not write PLC code, do not own factory floor uptime SLAs, and do not commission production robotics deployments. PickNik Robotics, Cardinal Peak, and integrators specializing in industrial automation are the right phone call.
  • FDA Software-as-a-Medical-Device (SaMD) clearance work. We do not pursue 510(k), De Novo, or PMA pathways. Healthcare research engagements stop at the research-grade boundary and do not cross into clinical decision support, diagnostic claims, or therapeutic intent. See FDA SaMD guidance for the formal pathway.
  • Weapons systems and lethal autonomy. We do not build, integrate, or advise on weapons platforms. We do not work on lethal autonomous targeting, lethal autonomous decision systems, or any system whose intended use is the application of force. Defense research engagements are scoped strictly to non-lethal CUI-aware research.
  • Autonomous vehicles. We do not build or advise on self-driving cars, autonomous trucks, autonomous drones for delivery or surveillance, or any other Level 3+ autonomous mobility system. Autonomous-vehicle engineering is its own discipline with its own safety case framework that we do not staff.
  • Surgical robotics. We do not build, integrate, or advise on surgical robots, robot-assisted surgery platforms, or any robotic system in the operating-room data plane. Surgical robotics carries clinical regulatory and patient-safety obligations that fall well outside our scope.
  • Cellebrite, EnCase, or Graykey-style mobile forensics extraction. Our forensics specialty is BYOD and corporate-mobile breach response, not device extraction. Robotics engagements are not a vector for mobile forensics work either way.
  • Humanoid manufacturing or contract assembly. We do not build robot hardware, assemble units, or run a contract-manufacturing line. We use the Reachy Mini as Pollen Robotics ships it and we extend it with software, never with mechanical modifications.
FAQ

Robotics prototyping engagement questions

How long does a typical engagement run?
A first prototype usually runs six to eight weeks across the four phases, with the actual build happening in weeks two through six. Single-phase engagements (just discovery and scope memo, or just architecture and security plan) run one to three weeks. Multi-phase research programs can stretch across a quarter or longer. The 2-week prototype-cycle target at the top of this page is a per-iteration cycle inside Phase 03, not the total engagement length.
Can the prototype run inside our network without ever talking to your cloud?
Yes. The default deployment pattern at handoff is a self-contained prototype that runs on hardware you own, on a network you control, with model weights stored locally and inference happening on your GPU. The Hugging Face LeRobot stack is fully open-source and runs end-to-end on a self-hosted Linux + GPU environment. We do not require any phone-home licensing or cloud heartbeat.
Do you require us to use the Reachy Mini, or can the prototype target a different robot?
Reachy Mini is our default platform because we run one in our Raleigh lab and the LeRobot stack treats it as a first-class citizen. LeRobot also supports SO-101, SO-100, Koch v1.1, LeKiwi, Hope Jr, Reachy 2, Unitree G1, Earth Rover Mini, OMX, and OpenArm directly. Engagements that target one of those alternative platforms are scoped in Phase 01 with the platform constraint baked in.
Who owns the source code and the trained model weights at the end?
Your organization. The handoff includes the full source repository, the trained model weights, the dataset manifest, the SBOM, the architecture document, and the security plan. We retain no usage license over your prototype after Phase 04 closes. Where we contribute generic library improvements upstream to LeRobot, we do that against open-source projects, not against your engagement IP.
What if our institution requires a specific compliance framework we did not list?
Bring the framework to Phase 01. NIST SP 800-218 SSDF is the development-side baseline regardless. We have written control mappings for NIST SP 800-171, the CMMC framework, HIPAA Security Rule, and the Common Rule for IRB. For frameworks we have not mapped before (FERPA, GLBA, state privacy regimes), we add the mapping during architecture in Phase 02 rather than declining the engagement.
Will this engagement help us with a CMMC L2 assessment?
It helps with the development-environment portion of the body of evidence. The security plan, the SBOM, the audit log, and the SSDF-aligned development practice are inputs that an assessor would expect to see for the system-development boundary. We are an RPO, not a C3PAO, so we cannot conduct the formal certification audit ourselves. We can connect you to C3PAOs if you do not already have one engaged.
Do you work on a fixed price or by the hour?
Both, depending on how clean the scope is. Fixed-fee per phase when the deliverables and acceptance criteria can be written down up front. Time and materials with a ceiling when the research question is exploratory and the success criteria need to be discovered along the way. The mix is decided at the scope memo, not after work has started.
What do you actually have at the lab right now versus on the roadmap?
Operational today: a Reachy Mini, a private NVIDIA Elite Partner Channel GPU fleet running training and inference for our existing AI clients, RTX PRO and DGX-class workstations, a segmented dev VLAN, and the cybersecurity and compliance practice that has been running since 2002. New: this is our first robotics offering and we do not have completed client robotics engagements yet, which is why founding-cohort pricing is established case-by-case. We are deliberate about that distinction because the no-fabrication rule matters more than a marketing line would.
Is the engagement remote, on-site at our facility, or in your Raleigh lab?
Mostly hybrid. Phase 01 discovery is usually a mix of video calls and one on-site working session at your facility (we will travel; this is included in the engagement fee). Phase 02 architecture is mostly remote. Phase 03 build runs on the Reachy Mini in our Raleigh lab unless you have your own unit. Phase 04 handoff is hybrid with at least one on-site or live-video training session for your team.
How do we start a conversation?
Use the contact form at /contact-us/ or call (919) 348-4912. The first call is a 30-minute scoping conversation, no charge. We will ask about the research question, the regulatory environment, the hardware constraint, and the timeline. If we are the right fit, the next step is a paid Phase 01 discovery and scope memo. If we are not the right fit, we will tell you who is.

Looking for a different angle on robotics?

This page is the deliverable view of the prototyping engagement. The robotics pillar covers the practice as a whole, the Reachy Mini hardware page covers platform specifics, and the defense-robotics industry page covers the buyer-identity and threat side for federal research.

Start a scoped engagement

Ready to scope your prototype?

30 minutes on the phone, your research question, our engagement model, the right answer about whether we are a fit. Petronella Technology Group has been serving North Carolina since 2002.

5540 Centerview Dr., Suite 200, Raleigh, NC 27606
Petronella Robotics / Lead

Get the Secure Robotics Development Brief

Tell Petronella Technology Group about your robotics project. We will reply within 4 business hours with a CMMC-RP-led scoping conversation and the early-access edition of our Secure Robotics Development Brief covering CUI handling, on-prem AI inference for robotics, and CMMC-aligned development practices. No obligation. No sales pressure.

CMMC-RP team. We reply within 4 business hours. Privacy policy. Or call (919) 348-4912.