HIPAA-Compliant Private LLMs: 5 Architectures
Posted May 5, 2026 in the AI category.
Last reviewed by Petronella Technology Group on May 5, 2026. Reviewed for the proposed HIPAA Security Rule NPRM (December 2024) and 2026 OCR enforcement focus.
If you are a covered entity or a business associate, you already know the painful version of the rule: PHI cannot leave a covered boundary unless every entity downstream is under a written BAA, and your organization can document, in detail, what happens to that data at each hop. The rise of large language models has not changed this rule. It has only multiplied the number of places where the rule is quietly broken: by well-meaning clinical staff pasting nominally de-identified case notes into a free chatbot, and by IT teams enabling a "GenAI" feature that quietly proxies prompts through a vendor that has never signed a BAA.
This guide walks through the five architectures that actually work in 2026, what each one costs you in operational complexity, where each one fits in the OCR audit narrative, and the common configuration mistakes that turn a "HIPAA-eligible" service into a HIPAA violation. Petronella Technology Group, a Raleigh-based cybersecurity and forensics firm registered with The Cyber AB as RPO #1449, has been advising healthcare organizations on AI architecture, HIPAA risk analysis, and incident response since 2002. Our team holds CMMC-RP credentials and works with healthcare customers ranging from small specialty practices to research-oriented health systems.
What changed in 2026: the why-now framing
Three things shifted between late 2024 and the first half of 2026 that make this discussion urgent rather than theoretical.
The HIPAA Security Rule NPRM. In December 2024, the Department of Health and Human Services published a proposed rule to update the HIPAA Security Rule for the first time in more than two decades. The proposed rule is currently listed on the HHS regulatory agenda for finalization in May 2026. The most important change for AI architecture is the elimination of the "addressable" versus "required" distinction in the Security Rule. Under the proposed rule, controls that organizations have historically treated as optional, including encryption of electronic PHI at rest, multi-factor authentication for systems that handle ePHI, vulnerability scanning, and asset inventories, become unambiguously required. For AI systems, that means logging, encryption, MFA, and inventory of model artifacts and training data are no longer "we will address that later" line items. They are baseline.
OCR enforcement priorities shifted from Risk Analysis to Risk Management. In April 2026, OCR Senior Advisor Nick Heesters publicly stated that enforcement focus is moving from confirming that organizations have completed a risk analysis to confirming that organizations are actually executing the risk management plan that follows from it. In plain terms: it is no longer enough to have a binder. You have to be patching, training, monitoring, and rotating the controls the analysis prescribed.
The open-weight LLM wave is real. By the first half of 2026, Meta Llama 4, Alibaba Qwen 3.5, Mistral Large 3, and Google Gemma 4 had all shipped with permissive enough licenses, strong enough quality, and small enough memory footprints to be deployed inside a healthcare organization's own infrastructure. At the same time, AWS Bedrock, Azure OpenAI Service, and OpenAI's enterprise tier had all matured their BAA-eligible configurations, with audit logging, customer-managed keys, and regional isolation. The result is that a private LLM is now a real architectural option, not a research project.
One more data point worth keeping in mind for the rest of this article: a recent peer-reviewed analysis of healthcare ransomware incidents found that 33.8% of incidents involved a Business Associate. The vendor surface, not the hospital itself, is increasingly where breaches originate. Any AI architecture decision that adds a new business associate to the data flow is a decision that materially changes the breach probability.
The core constraint: PHI must not leak
HIPAA Privacy Rule and Security Rule combine to create the constraint that drives every architectural decision. Protected Health Information cannot be disclosed except under specific permitted uses, and any entity that creates, receives, maintains, or transmits PHI on behalf of a covered entity is a Business Associate. Business Associates require a written BAA. Most consumer LLM APIs are not Business Associates. Free-tier ChatGPT, free-tier Claude.ai, free-tier Gemini, the public Perplexity API, and most "AI features" baked into off-the-shelf SaaS tools do not come with a BAA. Sending PHI through them is a reportable breach.
The BAA gating principle is the cleanest way to think about this. Before a single byte of PHI moves through your AI system, walk the data flow on a whiteboard. At each hop, ask: is this entity covered (you, your workforce) or under a BAA? If the answer is no at any hop, the design is wrong. Not "needs a control added", not "we will get a BAA later". Wrong, today, in a way that will surface in your next risk analysis update.
The five architectures below are organized around how they answer that question.
The five architectures: comparison table
| Architecture | PHI Handling | BAA Required From | Hardware Footprint | Latency | Compliance Posture | Best For |
|---|---|---|---|---|---|---|
| 1. On-premises GPU cluster | Stays inside your network. Never traverses the public internet. | None for inference. Your hardware vendor and your colocation provider, if any. | NVIDIA Elite Partner Channel sourced workstations or DGX-class systems. Typically 2-8 GPUs for a clinical-scale workload. | Lowest. Sub-second for typical generation tasks once warm. | Strongest. You are both covered entity and operator. No third-party inference dependencies. | Large health systems, defense-adjacent healthcare, organizations handling unusually sensitive PHI such as genomic, behavioral health, or substance use disorder data. |
| 2. AWS Bedrock with BAA | Stays inside your AWS account in a HIPAA-eligible region. AWS does not use prompts for training under the BAA configuration. | AWS (existing AWS BAA covers Bedrock for HIPAA-eligible accounts). | None on premises. Bedrock is fully managed. | Low. Regional latency, typically 200-800 ms for first token. | Strong. Single BAA, single audit trail, IAM-aligned access controls, CloudTrail logging. | Organizations already standardized on AWS that need Anthropic Claude, Mistral, or Llama without operating GPUs. |
| 3. Azure OpenAI with BAA | Stays inside your Azure tenant. Microsoft does not use customer data to train OpenAI models under the Azure OpenAI service. | Microsoft (existing Microsoft BAA covers Azure OpenAI for healthcare customers). | None on premises. | Low. Regional latency. | Strong. Tenant-level isolation, customer-managed keys, integration with Microsoft Purview for data governance. | Organizations already on Microsoft 365 plus Azure who want GPT-class models with the lowest integration friction. |
| 4. Private cloud + open-weight models | Stays inside a dedicated VPC at a HIPAA-eligible host such as Atlantic.net, Aptible, or ClearDATA. | The hosting provider provides the BAA. You provide the model and configuration. | Provider-managed GPUs sized to your workload. | Low to moderate, depending on provider and instance type. | Good. Single BAA with the host. You own the model, weights, and prompts. | Smaller organizations that want open-weight flexibility without buying or operating hardware. |
| 5. Air-gapped enclave | Network isolated. No outbound internet. Inference accessed only through internal services. | None for inference. Hardware vendor only. | On-prem GPU cluster sized to enclave workload. | Lowest. No internet round trip is even possible. | Strongest possible. Removes an entire class of exfiltration risk. | Research hospitals handling federal grants with CUI overlap, defense health, or use cases where no outbound traffic is acceptable. |
The table is a starting point. The architecture deep-dives below cover the parts that matter when you actually have to deploy one.
Architecture deep-dives
1. On-premises GPU cluster
This is the architecture for organizations whose risk tolerance, data sensitivity, or regulatory environment demands that PHI never leave the building. In practice it means a GPU cluster sized to your workload sitting inside your existing network, fronted by an internal LLM gateway that handles authentication, audit logging, and prompt-level safety filtering.
Hardware sizing. A small to mid-size practice running drafting and triage assistance for under fifty concurrent users typically needs two to four GPUs in the H100 or RTX PRO class. A health system running clinical documentation and ambient scribing for hundreds of concurrent users typically needs an HGX or DGX class system with eight GPUs and high-bandwidth interconnect. The exact sizing depends on the model, the quantization strategy, and the throughput you need. Petronella sources hardware through the NVIDIA Elite Partner Channel and works with healthcare customers on capacity modeling before purchase.
BAA logistics. The clean argument for this architecture is that there is no inference-time business associate. You are running open-weight models on hardware you own. Your hardware vendor is not a business associate as long as they do not have access to your data. If you place the hardware in a colocation facility, the colo provider is a business associate and needs a BAA, but most healthcare-focused colos sign these routinely.
Deployment timeline. Realistic timelines are 90 to 180 days from kickoff to production for a first deployment. The work is not dominated by software. It is dominated by network design, identity integration with your existing IdP, audit logging, the LLM gateway, and the operational runbooks for patching the GPU drivers and the model serving stack on a regular cadence.
Common pitfalls. Underestimating the operational burden of running a model serving stack. Putting the GPU cluster on the same VLAN as clinical systems and creating a new high-value target. Skipping prompt-level audit logging because "the model is internal anyway", which leaves your incident response team with no evidence in a breach scenario. Treating the LLM gateway as a pure passthrough rather than a control plane that can enforce per-role rate limits, content filtering, and prompt-level PHI redaction policies.
Audit considerations. Be ready to show, on demand: the asset inventory entries for every GPU node and model artifact, the access control policy for the gateway, an audit trail that ties prompt to user to model version, the incident response runbook that covers the LLM-specific failure modes such as prompt injection and data leakage through long contexts, and the change management record for model updates.
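The audit trail requirement above, tying prompt to user to model version, can be sketched as a gateway wrapper. This is a minimal illustration, not a production control plane: the function names, the stand-in `model_fn`, and the choice to log hashes rather than raw text are all assumptions to adapt to where your audit store sits relative to the PHI boundary.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

log = logging.getLogger("llm_gateway_audit")

def audited_completion(user_id: str, role: str, prompt: str,
                       model_fn, model_version: str) -> str:
    """Call an internal model through the gateway, emitting one audit
    record per inference. model_fn stands in for whatever serving stack
    sits behind the gateway (vLLM, TGI, etc. -- illustrative)."""
    response = model_fn(prompt)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "role": role,
        "model_version": model_version,
        # Hashes let you prove which prompt/response a record refers to
        # without duplicating PHI into the audit store; store raw text
        # instead if the audit store sits inside the PHI boundary.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    log.info(json.dumps(record))
    return response

# Usage with a stand-in model function:
reply = audited_completion("dr.smith", "clinician",
                           "Summarize the visit note.",
                           model_fn=lambda p: "summary...",
                           model_version="llama-4-scout-q4")
```

The design point is that the gateway, not the model server, owns the audit record, so swapping the serving stack never changes the evidence trail.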
2. AWS Bedrock with BAA
AWS Bedrock is Amazon's managed multi-model service. It exposes Anthropic Claude, Mistral, Meta Llama, Cohere, AI21, and Amazon's own Nova and Titan models behind a single API. AWS Bedrock is included in the AWS BAA for HIPAA-eligible accounts, which means PHI may be processed by Bedrock as long as the account is configured as HIPAA-eligible and the workloads run in HIPAA-eligible regions.
Hardware sizing. None on your side. Bedrock is fully managed. You pay per million input and output tokens, with provisioned throughput available for predictable workloads.
BAA logistics. If you already have an AWS BAA in place, no new contract is needed for Bedrock. If you do not, AWS will sign one, but the negotiation process can take weeks at larger organizations because Legal will need to review the data processing terms.
Deployment timeline. Realistic timelines are 30 to 90 days from BAA signature to production for a first deployment. The work is dominated by IAM, VPC endpoint configuration, KMS customer-managed keys, CloudTrail and Bedrock logging integration, and the application-layer integration of the Bedrock API into your clinical or operational workflow.
Common pitfalls. Forgetting that the AWS BAA does not retroactively cover account activity that occurred before BAA signature. Routing data to a non-HIPAA-eligible region to chase a model that is only available there. Disabling CloudTrail logging "for cost", which strips your audit trail. Failing to enable Bedrock invocation logging, which is separate from CloudTrail and is what records the actual prompt and response payloads under your KMS key.
Audit considerations. Bedrock invocation logs go to S3 or CloudWatch under your customer-managed key. The audit narrative is clean: every inference is logged, every log is encrypted under a key you control, and every key access is logged. Be ready to demonstrate this end to end during an OCR review.
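As a sketch of the invocation-logging setup, the snippet below builds the logging configuration and applies it through the boto3 `bedrock` client's `put_model_invocation_logging_configuration` call. The bucket name, prefix, and region are placeholders, and the exact payload keys should be verified against the current boto3 reference before use.

```python
def build_invocation_logging_config(bucket: str, prefix: str) -> dict:
    """Payload for put_model_invocation_logging_configuration: deliver
    prompt/response payloads to S3. The bucket should be encrypted with
    your customer-managed KMS key so every key access is itself logged."""
    return {
        "s3Config": {"bucketName": bucket, "keyPrefix": prefix},
        "textDataDeliveryEnabled": True,        # prompts and completions
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }

def enable_bedrock_logging(bucket: str, prefix: str, region: str) -> None:
    """Apply the config at the account level in a HIPAA-eligible region."""
    import boto3  # AWS SDK; not invoked in this sketch
    client = boto3.client("bedrock", region_name=region)
    client.put_model_invocation_logging_configuration(
        loggingConfig=build_invocation_logging_config(bucket, prefix)
    )

cfg = build_invocation_logging_config("phi-audit-logs", "bedrock/")
```

Note that this configuration is per region: if workloads run in more than one HIPAA-eligible region, apply it in each.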
3. Azure OpenAI with BAA
Azure OpenAI Service exposes the OpenAI GPT family inside an Azure tenant. The Microsoft Online Services BAA covers Azure OpenAI for customers in healthcare. Microsoft has stated that Azure OpenAI does not use customer prompts or completions to train the underlying models, and the service supports customer-managed keys, virtual network integration, and tenant isolation.
Hardware sizing. None on your side. Azure provisions capacity. Provisioned Throughput Units (PTUs) are available for organizations that need predictable latency and throughput.
BAA logistics. Most healthcare organizations on Microsoft 365 already have the Microsoft Online Services BAA in place. Azure OpenAI inherits that coverage when used inside the same tenant.
Deployment timeline. Realistic timelines are 30 to 90 days. The work is dominated by Entra ID integration, private endpoint configuration, customer-managed key setup in Azure Key Vault, diagnostic logging into Log Analytics or a SIEM, and Microsoft Purview integration for data classification if you use Purview.
Common pitfalls. Routing Azure OpenAI through a public endpoint instead of a private endpoint. Overlooking the abuse monitoring data retention feature, which by default retains prompts for a 30-day window for safety review by Microsoft personnel; healthcare customers should request the abuse monitoring exemption through the Microsoft Limited Access form to align retention with their HIPAA documentation. Failing to integrate diagnostic logs with the SIEM that feeds your incident response process.
Audit considerations. Diagnostic logs in Log Analytics give you the inference-level audit trail. Combine with Microsoft Defender for Cloud and Microsoft Sentinel for end-to-end visibility. Be ready to show the abuse-monitoring exception status and the customer-managed-key configuration during review.
4. Private cloud plus open-weight models
This is the architecture for organizations that want the flexibility of running open-weight models without owning the hardware. A HIPAA-eligible cloud provider, such as Atlantic.net, Aptible, or ClearDATA, signs the BAA and operates the underlying infrastructure. You deploy open-weight models, typically Llama 4, Qwen 3.5, or Mistral Large 3, into a dedicated VPC and operate the inference stack yourself.
Hardware sizing. The provider rents you GPU instances sized to your workload. Common sizing for clinical workloads ranges from a single A100 or L40S instance for a small team to multiple H100 instances for production-scale ambient documentation.
BAA logistics. The hosting provider's BAA is your primary instrument. Read it carefully. Confirm what it covers, what it excludes, and whether subprocessors are flowed down. Some HIPAA-eligible hosts use upstream cloud providers (AWS, Google Cloud, Azure) under their own BAA chain. That chain needs to be intact and documented.
Deployment timeline. Realistic timelines are 60 to 120 days. You inherit the provider's compliance posture for the substrate, but you still own the model serving stack, the gateway, the audit logging, and the clinical or operational integration.
Common pitfalls. Treating the provider's HIPAA-eligible designation as full compliance coverage: the provider gives you a compliant substrate, but your configuration determines whether you have a compliant deployment. Mixing PHI workloads with non-PHI workloads in the same VPC. Skipping encryption of model weights at rest on the theory that "the weights are public anyway"; the weights themselves are not PHI, but the omission weakens your overall control story during audit. Failing to plan for model lifecycle: when Llama 5 ships, what is the upgrade path that does not introduce drift in clinical behavior?
Audit considerations. The audit narrative has two layers: the provider substrate compliance, which the provider documents, and your application-layer compliance, which you document. Be ready to show both.
5. Air-gapped enclave
The air-gapped enclave is the most restrictive architecture. The model and the supporting services run on a network with no outbound internet. Updates are applied through a controlled inbound path, typically a one-way transfer process that the security team manages on a defined cadence.
Hardware sizing. Similar to the on-premises architecture above, sized to the enclave workload. Most enclaves run smaller models that have been carefully evaluated for clinical or research suitability rather than the largest available models.
BAA logistics. No inference-time business associate. The hardware vendor relationship is the only external relationship, and the enclave model means the vendor does not have ongoing access. This removes a class of supply-chain risk.
Deployment timeline. Realistic timelines are 120 to 240 days. The work is dominated by network engineering, the controlled-update process, the operational runbooks for incident response inside the enclave, and the user training that ensures clinical or research staff understand the boundaries.
Common pitfalls. Treating the air gap as a substitute for prompt-level controls. An air gap prevents data from leaving by network. It does not prevent a malicious or careless user from copying a model output to a USB drive and walking out the door. The enclave needs the same data-loss prevention, audit logging, and access control as any other PHI system. Underestimating the long-term cost of operating an air-gapped environment: the discipline demanded by the controlled-update process compounds year over year.
Audit considerations. The audit narrative for an enclave is the strongest of the five architectures, but only if it is documented. Be ready to show the network design, the update process, the access control list for physical and logical access, and the incident response runbook that addresses the unique failure modes of an isolated environment.
What about model fine-tuning on PHI?
Fine-tuning a model on data that contains PHI is technically possible and operationally tempting, especially when the goal is to teach a model the language of your specialty or the structure of your clinical documentation. It is also where many of the worst architectural mistakes happen.
The core risk is data leakage through the trained model itself. A model that has been fine-tuned on PHI may regurgitate that PHI in response to a prompt that resembles the training data, even when the prompter has no legitimate access. Membership inference attacks and model inversion attacks have been demonstrated against language models, and the risk grows with the size of the training set and the specificity of the data.
The defensible patterns are narrow. Fine-tuning on de-identified data, where de-identification has been performed under HIPAA Safe Harbor or Expert Determination, is generally acceptable, but the de-identification process itself becomes a compliance artifact that must be documented and defensible. Fine-tuning inside an air-gapped enclave on identified PHI may be acceptable in research contexts where the resulting model is also enclave-bound and never leaves. Fine-tuning a public model on identified PHI for general use is not acceptable.
For organizations considering fine-tuning, the NIST AI Risk Management Framework provides a useful structure for documenting the threat model, the mitigations, and the residual risk. Combined with the HIPAA risk analysis, it produces an audit-ready record of the decision and its safeguards.
NIST and HHS guidance worth citing
Three documents anchor the technical conversation in 2026. Bookmark them and reference them in your risk analysis updates.
- NIST SP 800-66 Revision 2, Implementing the HIPAA Security Rule, A Cybersecurity Resource Guide. The current authoritative crosswalk between Security Rule citations and the NIST Cybersecurity Framework. Use it as the structural backbone of any risk analysis document.
- HHS HIPAA Security Rule NPRM factsheet. The proposed rule whose final form is on the HHS regulatory agenda for May 2026. Drives the elimination of the addressable versus required distinction.
- ONC HTI-1 final rule on information blocking. Although not a security rule per se, the HTI-1 rule shapes how AI-generated decision support outputs interact with the obligation to share electronic health information. Worth reviewing alongside any AI deployment that touches the EHR.
The NIST AI Risk Management Framework (AI RMF 1.0) is also worth folding into your control set, particularly for organizations that want a defensible AI-specific risk narrative beyond the HIPAA Security Rule general technical safeguards.
Petronella stack architecture playbook
Most healthcare engagements with Petronella Technology Group around HIPAA-compliant private LLMs follow a four-phase structure. We are publishing the structure rather than a price because every healthcare environment has a different starting point. The right scope, and the right cost, depend on what we find in Phase 1.
Phase 1: PHI flow mapping and risk analysis update. We sit down with your clinical, IT, and compliance leadership and walk every PHI flow that could plausibly touch an AI system, today or in the proposed architecture. We update or rebuild the risk analysis for the AI surface specifically, anchored to NIST SP 800-66 Revision 2 and the proposed Security Rule changes. The deliverable is a written risk analysis that names the architecture options on the table and the residual risk for each.
Phase 2: Architecture decision and BAA logistics. Working from the risk analysis, we recommend one of the five architectures above, or a hybrid that combines them, and we work through the BAA logistics. For organizations choosing an on-premises or enclave path, we coordinate hardware sourcing through the NVIDIA Elite Partner Channel and the colocation or facility decisions that follow. For organizations choosing a managed cloud path, we work with your existing AWS or Azure relationship and any HIPAA-eligible cloud provider relationships you already have.
Phase 3: Implementation, technical safeguards, and audit logging. We deploy or supervise the deployment, set up the LLM gateway with prompt-level audit logging and per-role access controls, integrate the audit trail into your SIEM, and document the configuration. For organizations subject to CMMC because of federally funded research, our CMMC-RP team aligns the AI control set with the CMMC Level 1, Level 2, and Level 3 requirements as they apply.
Phase 4: Annual reassessment and OCR readiness. Once a year, we update the risk analysis, refresh the threat model in light of new attack patterns and new model releases, and run a tabletop exercise that includes the AI-specific failure modes. The output is an OCR-ready binder that documents the architecture, the controls, the audit log samples, and the incident response runbook.
If your organization is at the start of this conversation, the right next step is a 15-minute call with our team. Penny, our AI receptionist, can route the call directly to a senior consultant. The number is (919) 348-4912.
Common architecture mistakes (and how to avoid them)
Across our engagements, the same patterns of failure show up. They are worth naming directly so that you can check your own design against them.
Mistake 1: "ChatGPT is fine if we strip names." Free-tier ChatGPT does not come with a BAA. The HIPAA Privacy Rule Safe Harbor de-identification standard requires the removal of 18 specific identifiers, and most "stripping" performed ad hoc by clinical staff does not meet that bar. Even when it does, the absence of a BAA means OpenAI is not a permitted recipient of the data. The fix is policy and tooling: block the consumer endpoints at the network edge, provision an enterprise-tier or BAA-eligible alternative, and train staff on the difference.
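The "block the consumer endpoints" half of the fix belongs in your egress proxy or firewall policy, but the check itself is simple enough to sketch. The domain list below is illustrative and deliberately incomplete; maintain the authoritative list in your network policy, not in application code.

```python
from urllib.parse import urlparse

# Illustrative, incomplete blocklist of consumer LLM endpoints.
BLOCKED_SUFFIXES = ("chatgpt.com", "chat.openai.com", "claude.ai",
                    "gemini.google.com", "perplexity.ai")

def egress_allowed(url: str) -> bool:
    """True if the destination host is not a known consumer LLM endpoint.
    Matches the domain itself and any subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == s or host.endswith("." + s)
                   for s in BLOCKED_SUFFIXES)
```

A BAA-covered endpoint such as a Bedrock runtime URL passes; the consumer endpoints the policy names do not.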
Mistake 2: "HIPAA-eligible means HIPAA-compliant." AWS Bedrock, Azure OpenAI, and HIPAA-eligible cloud hosts are eligible substrates. Whether your deployment on top of them is compliant depends on your IAM policy, your encryption configuration, your logging configuration, and your operational practices. The eligibility designation removes a barrier. It does not produce compliance.
Mistake 3: Missing BAA logistics for downstream subprocessors. When the AI system calls into a third-party tool, embedding service, or evaluation service, that downstream relationship is in scope. Confirm BAA coverage for every subprocessor in the data flow and document the chain.
Mistake 4: No audit logging on the LLM gateway. If your gateway does not log who prompted what, when, and what the model returned, you cannot reconstruct an incident, you cannot prove the system was working as designed, and you cannot meet the audit control requirement of the Security Rule. Build the gateway with logging from day one. Retrofitting logging after deployment is painful and creates evidentiary gaps.
Mistake 5: No prompt-level PHI scanning before requests leave the boundary. Even in a HIPAA-eligible cloud configuration, sending more PHI than necessary creates more exposure than necessary. A prompt-level scanner that detects and redacts unnecessary identifiers before the request leaves your environment is a defensive control worth investing in. It also produces a clean record of what was redacted, which is useful in audit.
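A prompt-level scanner can be as simple as a set of patterns applied before the request leaves the boundary. The sketch below covers only three identifier classes and is emphatically not a complete Safe Harbor implementation; the 18 HIPAA identifiers require far more than regex, and the pattern names and formats here are assumptions.

```python
import re

# Illustrative patterns for three identifier classes only.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Return (redacted_prompt, identifier classes found), so the gateway
    can both scrub the request and log what was scrubbed -- the clean
    record of redactions that is useful in audit."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label}]", prompt)
    return prompt, found

clean, hits = redact("Pt MRN: 1234567, callback 919-555-0147.")
# hits records which classes were redacted -- that list is the audit artifact.
```

In production this layer usually combines pattern matching with a trained PHI-detection model, because free-text clinical notes leak identifiers in forms no regex anticipates.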
FAQ
Is ChatGPT HIPAA-compliant?
The free and standard ChatGPT consumer products are not configured for HIPAA and are not covered by a Business Associate Agreement. OpenAI does offer enterprise tiers and a healthcare-focused configuration that can be covered by a BAA. Before sending PHI through any OpenAI product, confirm in writing that a BAA is in place, that the data will not be used for training, and that audit logging is configured. A consumer ChatGPT account is not an acceptable destination for PHI.
Does AWS Bedrock cover HIPAA?
Yes, AWS Bedrock is included in the AWS Business Associate Addendum for HIPAA-eligible accounts when used in HIPAA-eligible regions. The account must be configured as HIPAA-eligible, CloudTrail and Bedrock invocation logging must be enabled, and the workloads must run in HIPAA-eligible regions. Read the current AWS HIPAA-eligible services list before deployment, since AWS adds and occasionally removes services from the list.
Can we run Llama 4 on-prem without a vendor BAA?
Yes. Llama 4 is an open-weight model. When you run it on hardware you own, no inference-time business associate exists. Your hardware vendor and any colocation provider are still in scope, but as long as those parties do not have access to your data, the BAA chain is short and clean. This is the architectural appeal of on-prem and enclave deployments.
Is fine-tuning on PHI ever acceptable?
Fine-tuning on de-identified data, where de-identification has been performed under HIPAA Safe Harbor or Expert Determination, is generally acceptable when documented. Fine-tuning on identified PHI is acceptable only in tightly controlled environments such as an air-gapped enclave, where the resulting model is also enclave-bound. Fine-tuning a model that will be widely deployed on identified PHI is not acceptable.
How does the 2026 HIPAA NPRM affect AI deployments?
The proposed rule, listed on the HHS regulatory agenda for finalization in May 2026, eliminates the addressable versus required distinction in the Security Rule. For AI architecture that means encryption at rest, multi-factor authentication, vulnerability scanning, and asset inventories become unambiguously required for systems that handle ePHI. AI gateways, model artifacts, and training data all fall inside that scope.
What is the audit log requirement for LLM inference?
The Security Rule audit control standard at 45 CFR 164.312(b) requires that systems that handle ePHI implement hardware, software, or procedural mechanisms to record and examine activity. For an LLM, that means recording who prompted what, when, against which model version, and what the model returned. The log itself must be protected, retained according to your retention policy, and accessible to your incident response team during an investigation.
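The fields that sentence enumerates can be pinned down as a minimal record schema with a completeness check. Field names are illustrative, not a standard; the point is that every element is present before the record is written.

```python
from dataclasses import dataclass, asdict

@dataclass
class InferenceAuditRecord:
    """One inference event; field names are illustrative, not a standard."""
    user_id: str        # who prompted
    timestamp_utc: str  # when
    prompt: str         # what was asked
    model_version: str  # against which model version
    response: str       # what the model returned

def is_complete(record: InferenceAuditRecord) -> bool:
    """All five elements the audit control standard implies for an LLM.
    Retention and access protection are handled by the log store, not here."""
    return all(value for value in asdict(record).values())

rec = InferenceAuditRecord("dr.smith", "2026-05-05T14:02:11Z",
                           "Draft discharge summary.", "llama-4-scout-q4",
                           "Patient was admitted...")
```

Rejecting incomplete records at write time is cheaper than discovering the gap during an investigation.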
Do we need a privacy impact assessment for a private LLM deployment?
HIPAA does not formally require a separate Privacy Impact Assessment in the way the federal Privacy Act does for federal agencies. However, a structured assessment of the privacy implications, integrated into the HIPAA risk analysis, is good practice and is increasingly expected by sophisticated business associates and downstream auditors. The NIST Privacy Framework provides a usable structure if you want to formalize this.
Where this fits in the broader Petronella stack
HIPAA-compliant private LLM architecture is one piece of a larger AI and compliance practice at Petronella Technology Group. Related work includes our private AI solutions pillar, which covers the full enterprise private AI cluster pattern beyond healthcare. The HIPAA compliance pillar covers the regulatory program, risk analysis cadence, breach response, and OCR readiness for healthcare organizations regardless of whether AI is in scope. The CMMC compliance pillar applies for healthcare research environments and defense-adjacent healthcare that touch federal contracts. Hardware sizing for any on-prem deployment is anchored in the AI workstations page. The defensive program around any AI deployment is the cybersecurity pillar. For organizations that need a part-time security executive to own the program, the vCISO service is the structured entry point.
Ready to talk about a HIPAA-compliant private LLM architecture for your organization?
Call (919) 348-4912. Penny, our AI receptionist, books a free 15-minute call with our team. We will walk through your data flow, the architectures that fit, and the realistic timeline for a deployment that is OCR-defensible from day one.
Petronella Technology Group, RPO #1449, CMMC-RP team, BBB A+ since 2003, founded 2002. 5540 Centerview Dr., Suite 200, Raleigh, NC 27606.