AI Governance & Compliance: Build a Defensible AI Program for EU AI Act, NIST AI RMF, and ISO 42001
Petronella Technology Group helps mid-market and regulated organizations stand up the policies, controls, model inventories, and audit evidence that satisfy boards, regulators, and customers. Our team operates the ComplianceArmor platform and is led by an MIT AI-certified founder, NC Licensed Digital Forensics Examiner, and CMMC Registered Practitioner.
Key Takeaways
- AI governance is the documented program of policies, controls, model inventories, and audit evidence that proves your organization can deploy AI safely under NIST AI RMF, the EU AI Act, ISO/IEC 42001, and sector rules like HIPAA, CMMC, SR-11-7, and SOC 2.
- PTG operates ComplianceArmor — a proprietary documentation engine that automates AI policy generation, model registry maintenance, evidence collection, and continuous monitoring against the major AI frameworks.
- Most mid-market organizations fail their first AI governance review because they have shadow AI usage they cannot inventory, no AI Acceptable Use Policy, and no link between AI risk and existing security controls — not because they lack technology.
- An AI governance certification path (IAPP AIGP, ISACA AAIA, ISO/IEC 42001 Lead Auditor) gives your internal staff the credentials to defend the program; PTG runs accelerated study tracks plus exam reimbursement for client teams.
- Engagements are fixed-fee with monthly milestones, no long-term contracts, and a 30-day promise: measurable progress in the first month or the next month is on us.
AI governance is the operating system for trustworthy AI at scale.
AI governance is the system of policies, accountability structures, technical controls, model inventories, and audit-ready evidence that allow an organization to deploy artificial intelligence safely, lawfully, and in line with stated values. It applies to every model your team builds, fine-tunes, prompts, or embeds — from a public chatbot powered by GPT-class APIs to an internal retrieval-augmented generation (RAG) system trained on protected health information.
A working AI governance program answers six questions for any auditor or executive who asks: which models are in production, what data trained or grounds them, who approved each use case, what risks were identified and mitigated, how is performance and bias monitored over time, and what happens when something goes wrong. In 2026 those questions sit at the intersection of cybersecurity, privacy, and traditional compliance, which is why most organizations are asking a single accountable partner to own the full stack rather than fragmenting the work across consultants.
Petronella Technology Group has been building security and compliance programs for regulated organizations since 2002. Our AI services launched in 2023 with production agents like Penny, Eve, and ComplyBot — we governed them under the same framework we now build for clients. Founder Craig Petronella is MIT AI-certified, a Cyber AB CMMC Registered Practitioner, and author of Beautifully Inefficient, a book on AI, human creativity, and trustworthy automation. The team runs the ComplianceArmor platform, which automates 70% of the documentation burden across CMMC, HIPAA, SOC 2, PCI, and now AI governance frameworks.
24+ Years in Compliance
2,500+ Clients Protected
340+ Healthcare Audits
Zero Client Breaches
2026 is the year unmanaged AI becomes a reportable risk.
Three forces are converging. Regulators have moved from guidance to enforcement, customers are writing AI clauses into procurement contracts, and boards are asking the CISO and General Counsel for a single answer on AI risk posture. Companies without a documented program are losing deals, facing audit findings, and absorbing breach costs that traditional cyber-insurance is starting to exclude.
EU AI Act enforcement
Prohibited-practice rules took effect February 2025; general-purpose AI obligations August 2025; high-risk system rules August 2026. Fines reach the greater of EUR 35M or 7% of global turnover. Any U.S. company selling AI-enabled products into the EU is in scope.
NIST AI RMF + GenAI Profile
The voluntary U.S. framework has become the de facto answer when a customer asks "show me your AI governance." The GenAI Profile, released in 2024 and updated through 2026, adds 200+ specific actions across govern, map, measure, and manage functions.
Sector pressure
HHS HIPAA Security Rule updates target AI-driven decision support. CMS clarified that algorithmic determinations in clinical care need human review. The Federal Reserve's SR-11-7 model-risk guidance now applies to large-language-model use in regulated banks.
Procurement clauses
Fortune 500 buyers, hospital systems, defense primes, and state agencies are inserting AI questionnaires into RFPs. Without an AI policy, model inventory, and bias-testing evidence, the form sits on a Sales VP's desk and the deal stalls.
Not sure where your AI governance program stands?
PTG runs a complimentary 45-minute AI governance review covering shadow-AI inventory, policy gaps, and the three highest-risk models in your environment.
Book the Free Review
Six pillars that move you from policy to evidence in 90 days.
Every PTG engagement runs against the same six-pillar framework, mapped one-to-one against NIST AI RMF, ISO/IEC 42001, and the EU AI Act. The deliverables are concrete: a policy library, model registry, risk register, evidence binder, and a continuous-monitoring dashboard inside ComplianceArmor.
Govern
Charter the AI Governance Committee, draft the AI Acceptable Use Policy, define roles (AI Officer, Model Owner, Risk Reviewer), and connect AI decisions to existing security and privacy committees.
Deliverable: governance charter + RACI matrix
Map
Discover every AI use case, third-party API, embedded model, and "shadow AI" tool employees are already using. Classify by risk tier under EU AI Act categories: prohibited, high, limited, minimal.
Deliverable: model inventory + risk classification
Measure
Build the test plan: bias and fairness testing, robustness and adversarial-prompt testing, hallucination measurement, data-leakage probes, and explainability scoring against the model's intended use.
Deliverable: model cards + test evidence
Manage
Wire AI risks into the existing risk register, define incident-response runbooks for AI-specific events (prompt injection, model exfiltration, output toxicity), and link AI controls to SOC 2, HIPAA, and CMMC programs.
Deliverable: incident playbooks + control crosswalk
Monitor
Stand up continuous monitoring inside ComplianceArmor: model drift, performance degradation, data-quality alerts, and policy-violation detection. Quarterly evidence packets ready for any auditor.
Deliverable: ComplianceArmor AI dashboard
Mature
Quarterly external review against NIST AI RMF and ISO/IEC 42001 maturity targets. Path to ISO 42001 certification or AICPA AI System Description SOC 2-AI add-on.
Deliverable: maturity roadmap + cert readiness
One program, every regulator.
Most clients face two or three of these at once: EU AI Act for European customers, NIST AI RMF as the customer-facing answer in the U.S., HIPAA or CMMC because of their industry, and ISO/IEC 42001 because procurement is asking for it. PTG builds a single control set that satisfies all of them.
NIST AI RMF 1.0 + GenAI Profile
- Govern, Map, Measure, Manage core functions
- GenAI Profile actions for LLMs and RAG systems
- Crosswalks to NIST CSF 2.0 and NIST 800-53 Rev. 5
- Best fit when answering U.S. customer security reviews
EU AI Act
- Risk classification: prohibited, high-risk, limited, minimal
- Conformity assessment for high-risk systems
- Transparency obligations for chatbots and synthetic content
- Required for any vendor selling AI into the EU
ISO/IEC 42001
- Artificial Intelligence Management System certification
- The first AI-specific ISO standard, published 2023
- Pairs with ISO 27001 (security) and ISO 27701 (privacy)
- Becoming the de facto AI version of SOC 2
HIPAA + AI
- Business Associate Agreements covering AI vendors
- Minimum-necessary rule applied to LLM prompts
- Audit logging for AI-driven clinical decisions
- Backed by Craig's book How HIPAA Can Crush Your Medical Practice
CMMC + AI
- NIST 800-171 controls applied to AI training data and outputs
- CUI handling rules for fine-tuned models
- Authorized data flows for hosted vs. on-prem AI
- Led by Craig (CMMC Registered Practitioner) and our CMMC consultant team
SR-11-7 + Model Risk
- Federal Reserve model-risk-management guidance for LLMs
- Model validation, effective challenge, ongoing monitoring
- OCC and FDIC alignment for community banks
- Ties into SOC 2 Type II for fintech vendors
The tools we deploy — and the ones we replace.
"AI governance tools" usually means a model registry, a risk repository, a policy library, a bias-and-fairness testing suite, and a monitoring dashboard. Most clients arrive with two or three of these scattered across SaaS subscriptions and shared drives. PTG consolidates them into a single ComplianceArmor instance, integrates open-source AI testing tooling, and removes any vendor that duplicates capability you already own.
Model registry
Centralized inventory of every model in development and production: owner, intended use, training data sources, risk tier, last validation date, monitoring status. Integrated with MLflow or built natively in ComplianceArmor.
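For teams that want to see the shape of a registry record before a platform is in place, here is a minimal sketch as a Python dataclass; the field names and the example entry are illustrative assumptions, not the ComplianceArmor schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """EU AI Act risk tiers used to classify each use case."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class ModelRecord:
    """One entry in a model registry (illustrative fields only)."""
    name: str
    owner: str                       # accountable Model Owner
    intended_use: str
    training_data_sources: list[str]
    risk_tier: RiskTier
    last_validation: date            # date of the most recent validation run
    monitoring_enabled: bool = True
    notes: str = ""


# Example record for a hypothetical internal RAG assistant.
registry = [
    ModelRecord(
        name="support-rag-v2",
        owner="jane.doe@example.com",
        intended_use="Internal helpdesk answer drafting",
        training_data_sources=["confluence-export-2025Q4"],
        risk_tier=RiskTier.LIMITED,
        last_validation=date(2026, 1, 15),
    )
]
```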
Policy library
AI Acceptable Use, AI Procurement, Generative AI Content, Data Labeling, Model Validation, Incident Response — ten core policies generated from your operating environment, not generic templates.
Bias & safety testing
Open-source tooling integration: IBM AI Fairness 360, Microsoft Fairlearn, Giskard, Garak, plus PTG-built prompt-injection probes. Reproducible test runs feed evidence into the audit binder.
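To give a sense of what a reproducible fairness test run can look like, the sketch below uses Fairlearn's demographic parity metric on a toy sample; the data, groups, and the 0.2 threshold are placeholders, not a prescribed test plan.

```python
# Minimal fairness check with Fairlearn (toy data, illustrative threshold).
# pip install fairlearn scikit-learn
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Ground truth, model predictions, and a sensitive attribute for a toy sample.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Per-group accuracy — the kind of evidence that goes into the audit binder.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# Demographic parity difference: 0.0 means equal selection rates across groups.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
assert dpd <= 0.2, f"Selection-rate gap {dpd:.2f} exceeds the illustrative 0.2 threshold"
```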
Continuous monitoring
Drift detection, performance regression, output-quality scoring, and data-quality alerting. Hooks into existing SIEM and ticketing so AI incidents reach the same on-call rotation as security events.
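One concrete drift signal many teams start with is the population stability index (PSI) on a key feature; the sketch below computes it with NumPy. The bin count and the 0.2 alert threshold are illustrative rules of thumb, not ComplianceArmor internals.

```python
# Illustrative drift check: population stability index (PSI) on one feature.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline sample and a current production sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Out-of-range values in `current` fall outside the baseline bins;
    # acceptable for a sketch. Clip to avoid log(0) on empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # distribution at validation time
current  = rng.normal(0.6, 1.0, 5_000)   # distribution observed in production

psi = population_stability_index(baseline, current)
if psi > 0.2:
    print(f"Drift alert: PSI={psi:.2f} — open an incident in the ticketing queue")
```

A common rule-of-thumb reading: PSI above 0.2 usually warrants investigation, while values under 0.1 are typically treated as noise.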
Private AI deployment
For clients that decide hosted AI is too risky for the data, we deploy private LLM solutions on-premise or in dedicated GPU clouds — data never leaves your network.
Custom AI builds
When governance leads to "build, don't buy," our custom AI development team can deliver agents, RAG systems, and fine-tuned models that ship with the model card and risk assessment already attached.
Earn the credentials that defend the program.
A defensible AI governance program needs at least one credentialed practitioner inside your organization. The market has converged on three credentials in 2026: IAPP AIGP for privacy and policy professionals, ISACA AAIA for IT audit and risk teams, and ISO/IEC 42001 Lead Auditor for organizations targeting formal certification. PTG runs accelerated study tracks for client teams and reimburses the exam cost for engagement participants who pass within 90 days.
IAPP AIGP
- AI Governance Professional certification
- Issued by the International Association of Privacy Professionals
- Best fit for Privacy Officers, GC, DPOs
- Covers EU AI Act, NIST RMF, NYC bias law
- PTG-led 12-week study group
- Mock exam + practice prompts
ISACA AAIA
- Advanced in AI Audit credential
- Issued by ISACA, alongside CISA and CRISC
- Best fit for IT auditors, CISOs, risk officers
- Covers AI auditing across the lifecycle
- PTG-led peer cohort + workshops
- Crosswalks to COBIT and SOC 2
ISO/IEC 42001 LA
- Lead Auditor for AI Management Systems
- Required if you intend to internally audit ISO 42001
- Best fit for orgs targeting ISO 42001 certification
- Includes ISO 19011 audit-management training
- PTG arranges accredited training partners
- Maps directly to NIST AI RMF
Three paths to an AI governance program.
Most leaders evaluate three options: build it internally with existing security staff, hire a Big-4 consultancy for a six-month engagement, or partner with a specialized firm like PTG. Here is the honest comparison.
| Capability | DIY (Internal) | Big-4 Consultancy | Petronella Technology Group |
|---|---|---|---|
| Time to first policy library | 3-6 months (after team is hired) | 6-8 weeks | 2 weeks |
| Engagement model | FTE hire + tools spend | Hourly, often six-figure SOW | Fixed-fee, monthly milestones |
| Documentation automation | Manual Word + Excel | Generic GRC tool, additional license | ComplianceArmor included |
| Frameworks covered | 1-2 (whatever the team knows) | Wide but generic | NIST AI RMF, EU AI Act, ISO 42001, HIPAA, CMMC, SR-11-7 |
| Continuous monitoring | "We'll do quarterly reviews" | Add-on managed service | Built into ComplianceArmor |
| Hands-on implementation | Yes (your team's time) | Mostly advisory | Yes, including infra and tooling |
| Connection to security stack | Depends on team | Hands off to integrator | Same team runs SOC, XDR, vCISO |
| Long-term contract required | N/A | Yes, retainer | No — month to month |
| Author / expert witness credibility | Internal only | Anonymous deck | 15-book author + NC Licensed DFE |
| 30-day promise | None | None | Measurable progress month one |
Sector-specific AI governance patterns we have already built.
Healthcare AI
Clinical decision support, ambient scribes, prior-auth automation. Governance focuses on PHI in prompts, BAA coverage with the model vendor, and human-in-the-loop attestations for any decision affecting care. Rooted in 340+ healthcare audits and Craig's HIPAA library.
Defense AI & CMMC
RAG systems searching CUI, fine-tuned models trained on technical data, AI agents acting on protected information. We map 110 NIST 800-171 controls onto each AI use case so the C3PAO assessment passes the same way our CMMC compliance services do.
Legal AI
Document review, contract drafting, e-discovery summarization. Governance addresses confidentiality, attorney-client privilege, model bias in case-law summarization, and bar-association ethics opinions. Backed by Craig's book How Hackers Can Crush Your Law Firm and our expert-witness practice.
Financial Services
Credit decisions, fraud detection, KYC automation. Governance ties SR-11-7 model risk to SOC 2 controls, validates against fair-lending regulations, and documents the explainability needed when a regulator asks why a customer was denied.
Manufacturing & OT
Predictive maintenance, computer-vision QA, autonomous routing in warehouses. Governance covers safety controls, data classification on factory networks, and the OT-segmentation rules that keep AI workloads off production control systems.
Education & Nonprofits
Tutoring chatbots, grant-writing assistance, donor analytics. Governance focuses on FERPA, data minimization, and youth-protection requirements when an AI interacts with students. Budget-conscious build that still produces audit-ready evidence.
Ready to map your AI estate against the governance frameworks that matter?
We will inventory your AI use cases, score them against EU AI Act, NIST AI RMF, and ISO 42001, and hand you a prioritized roadmap inside 14 days.
Schedule the Inventory Review
An AI governance partner with security DNA, not a slide deck.
Plenty of firms launched AI governance practices in the last 12 months. Few have lived in regulated environments long enough to know what an auditor will actually accept. Here is what makes PTG different.
MIT-certified leadership
Craig Petronella holds MIT certifications in cybersecurity, AI, blockchain, and compliance. He is also a Cyber AB CMMC-RP and NC Licensed Digital Forensics Examiner (License 604180-DFE).
15 published books
Including Beautifully Inefficient on AI and human creativity, CMMC 2.0 Certification Guide, and How HIPAA Can Crush Your Medical Practice. E-E-A-T signals you can hand to a procurement team.
ComplianceArmor
Our proprietary platform automates AI policy generation, model registry, evidence collection, and quarterly audit packets across NIST AI RMF, ISO 42001, HIPAA, CMMC, and SOC 2 in one workspace.
We run our own AI
Production agents Penny (sales), Eve (emergency response), ComplyBot (compliance chat), and Joe (scheduling) automate 87% of routine work inside PTG. The governance program for these agents is the one we deploy to clients.
Same team for every layer
The same engineers running our 24/7 SOC, Managed XDR, and vCISO services build the AI governance program. AI risks get triaged by people who already know your environment.
24+ years, zero breaches
Founded April 2002. BBB A+ since 2003. 2,500+ businesses protected. Zero client breaches on the managed program. Featured on NBC, ABC, CBS, FOX, and WRAL as a cybersecurity expert.
AI governance: what executives actually ask us.
What does an AI governance certification cost and how long does it take?
The two most relevant credentials in 2026 are the IAPP AIGP (AI Governance Professional) at roughly USD 599 for the exam and the ISACA AAIA (Advanced in AI Audit) at around USD 760 for members and USD 990 for non-members. PTG runs a 90-day study track that combines 12 weeks of cohort sessions, mock exams, and direct coaching from credentialed instructors. Engagement clients receive the exam fee back if they pass within 90 days of completing the cohort. Most candidates with a security or privacy background pass on the first attempt.
What AI governance tools do we actually need to buy?
For most mid-market organizations, the answer is fewer than vendors will tell you. You need a centralized model registry, a policy library, a risk and incident repository, a bias-and-fairness testing suite, and a continuous-monitoring dashboard. PTG consolidates all five inside ComplianceArmor for engagement clients, so the only additional licenses are open-source AI testing tools like Giskard, Garak, IBM AI Fairness 360, and Microsoft Fairlearn. We replace generic GRC duplicates rather than stack them.
Does the EU AI Act apply to my U.S. company?
If you sell, license, or embed AI systems used by people in the European Union — even indirectly through a customer of yours — the EU AI Act likely applies. Prohibited-practice obligations took effect in February 2025; general-purpose AI rules in August 2025; high-risk system rules in August 2026. Maximum fines reach the greater of EUR 35 million or 7% of global turnover. Our review classifies your systems against the four EU risk tiers and identifies which obligations actually attach.
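As a rough illustration of how the four tiers translate into a triage artifact, here is a toy classification helper; the trigger categories are heavily simplified stand-ins for the Act's annex lists, not legal guidance.

```python
# Toy EU AI Act tiering helper — simplified triggers, not the statutory text.
from enum import Enum

class AiActTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Heavily simplified examples of use-case categories per tier.
PROHIBITED_USES = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK_USES = {"employment screening", "credit scoring", "medical device function"}
TRANSPARENCY_USES = {"customer-facing chatbot", "synthetic media generation"}

def classify_use_case(use_case: str) -> AiActTier:
    """Map a described use case onto an indicative tier for triage purposes."""
    if use_case in PROHIBITED_USES:
        return AiActTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return AiActTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES:
        return AiActTier.LIMITED
    return AiActTier.MINIMAL

print(classify_use_case("credit scoring"))           # AiActTier.HIGH_RISK
print(classify_use_case("customer-facing chatbot"))  # AiActTier.LIMITED
```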
How is AI governance different from regular cybersecurity or privacy compliance?
Cybersecurity protects systems from external attackers. Privacy compliance protects data subjects from misuse of personal information. AI governance protects organizations and the public from the unique risks of probabilistic, learning systems — bias, hallucination, model drift, prompt injection, training-data poisoning, and unexplainable outputs. Most of those risks have no parallel in traditional security or privacy programs, which is why bolting AI onto an existing GRC tool falls short.
Can PTG help with ISO/IEC 42001 certification?
Yes. ISO/IEC 42001 is the first AI Management System standard, published in late 2023, and it has rapidly become the AI equivalent of SOC 2 in procurement conversations. PTG builds the AIMS using our six-pillar framework, runs the gap assessment, prepares the documentation in ComplianceArmor, performs an internal audit, and coordinates the external certification body. Most clients reach certification readiness in six to nine months.
What about HIPAA and AI? Can we use ChatGPT in a medical practice?
You can use AI in a HIPAA-regulated environment, but only with a Business Associate Agreement (BAA) covering the AI vendor, the right configuration to keep PHI from training the model, audit logging on every prompt that touches PHI, and a documented minimum-necessary review of what data the model actually sees. Our 340+ healthcare audits and Craig's book How HIPAA Can Crush Your Medical Practice inform the controls. We have built compliant AI deployments using both hosted enterprise tiers and on-prem private LLM solutions.
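As a rough sketch of two of the controls named above — minimum-necessary redaction before a prompt leaves the network, and audit logging on every PHI-adjacent call — the example below wraps a model call in Python. The regex patterns, log fields, and `send_fn` hook are illustrative assumptions; production de-identification needs a vetted tool, not a few patterns.

```python
# Sketch: prompt minimization plus audit logging around an AI call.
import json
import logging
import re
from datetime import datetime, timezone

audit_log = logging.getLogger("ai.phi.audit")
logging.basicConfig(level=logging.INFO)

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
MRN = re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE)
DOB = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

def minimize_prompt(prompt: str) -> str:
    """Redact obvious identifiers so only the minimum necessary text leaves."""
    for pattern, label in ((SSN, "[SSN]"), (MRN, "[MRN]"), (DOB, "[DOB]")):
        prompt = pattern.sub(label, prompt)
    return prompt

def call_model_with_audit(prompt: str, user: str, send_fn) -> str:
    """Minimize the prompt, call the model via send_fn, and log the event."""
    safe_prompt = minimize_prompt(prompt)
    response = send_fn(safe_prompt)   # send_fn wraps your BAA-covered endpoint
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_chars": len(safe_prompt),
        "redactions_applied": safe_prompt != prompt,
    }))
    return response

# Usage with a stand-in model call.
print(call_model_with_audit(
    "Summarize the visit for patient MRN:12345678, DOB 01/02/1980.",
    user="dr.smith",
    send_fn=lambda p: f"(model output for: {p})",
))
```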
How does AI governance interact with CMMC for defense contractors?
Defense contractors with CUI in their environments must extend the same NIST 800-171 controls onto AI systems that ingest, process, or output CUI. That means data classification on training sets, authorized data flows for hosted vs. on-prem models, audit logging for AI-driven decisions, and explicit boundary documentation. Craig's CMMC Registered Practitioner credential and our CMMC consultant team integrate AI governance directly into the CMMC body of evidence so a single C3PAO assessment covers both.
How quickly can PTG stand up an AI governance program?
Engagements run on monthly milestones. Week two delivers the AI Acceptable Use Policy and Governance Charter. Week six delivers the model inventory, risk classification, and first round of bias-and-fairness testing on the highest-risk model. Week ten delivers the ComplianceArmor monitoring dashboard. Day 90 delivers the audit binder, executive summary, and roadmap to ISO 42001 or SOC 2-AI readiness. Our 30-day promise: measurable progress in the first month or the next month is on us.
Do we need an AI Governance Committee, and who chairs it?
Yes — both NIST AI RMF and ISO/IEC 42001 expect a named accountability structure. In a mid-market organization the committee is typically chaired by the CISO or a Chief AI Officer if one exists, with the General Counsel or DPO, the senior data leader, the head of HR, and a business-line owner. PTG drafts the charter, the RACI, and the meeting cadence, then sits in as advisor for the first six months until the committee runs itself.
Stand up an AI governance program your board, regulators, and customers will accept.
Schedule a free 45-minute review. We will inventory your shadow AI, score your top three models against NIST AI RMF and the EU AI Act, and hand you a prioritized 90-day roadmap.
5540 Centerview Dr., Suite 200
Raleigh, NC 27606
919-348-4912 · info@petronellatech.com
BBB A+ Accredited since 2003 · Featured on NBC, ABC, CBS, FOX, WRAL