CMMC and AI: What Defense Contractors Need to Know About Using AI Tools
Posted March 9, 2026 in Compliance.
The Department of Defense's Cybersecurity Maturity Model Certification (CMMC) program is now in effect, with the CMMC final rule published in October 2024 and phased implementation beginning in Q1 2025. At the same time, AI tools have become embedded in virtually every business workflow, from email drafting to document analysis to code generation. For defense contractors, these two realities create an urgent question: which AI tools can you use, and which ones put your CMMC certification at risk?
The answer depends on what data you are processing, what CMMC level you are pursuing, and where your AI tools process their data. This guide provides specific, actionable guidance for each scenario.
The Core Problem: CUI and Third-Party AI Services
CMMC Level 2 requires compliance with all 110 security controls in NIST SP 800-171 Revision 2 for systems that process, store, or transmit Controlled Unclassified Information (CUI). The critical question for AI tools is whether they constitute a "system" that processes CUI, and if so, whether they meet all 110 controls.
When you paste a CUI document into ChatGPT, Copilot, Claude, Gemini, or any other cloud AI service, that document is transmitted to and processed on the vendor's cloud infrastructure. That infrastructure becomes part of your CUI boundary. Every NIST 800-171 control now applies to that vendor's systems, your connection to them, and the data flow between you.
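To make that data flow concrete, here is a minimal sketch of the same submission in API form (the web interface does the equivalent behind the scenes). The endpoint and model name follow OpenAI's published Chat Completions API; the document path is a hypothetical stand-in. The point is that the full document body leaves your network in a single request:

# Minimal sketch: what submitting a document to a cloud AI service does.
# Endpoint and model follow OpenAI's published Chat Completions API;
# the document path is hypothetical.
import requests

with open("deliverable_summary.txt") as f:  # hypothetical CUI document
    document_text = f.read()

# The entire document body is transmitted to vendor infrastructure here;
# from this point on, that infrastructure is part of the data flow.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Summarize:\n" + document_text}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])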
Can You Use ChatGPT with CUI Data?
No. OpenAI's standard ChatGPT service does not meet the requirements for CUI processing. Specifically:
- 3.1.1 Authorized Access Control: ChatGPT's user management does not support the organizational account management controls required for CUI systems
- 3.1.3 CUI Flow Control: You cannot control how CUI flows within OpenAI's infrastructure after submission
- 3.5.3 Multi-factor Authentication: ChatGPT supports MFA for login but does not enforce it at the organizational level with the granularity NIST 800-171 requires
- 3.13.1 Boundary Protection: OpenAI's multi-tenant infrastructure does not provide the network boundary protections required for CUI
- 3.13.16 CUI at Rest: You have no visibility into or control over how your data is encrypted at rest within OpenAI's systems
OpenAI offers an Enterprise tier and a Government tier, but as of March 2026, neither has achieved FedRAMP High authorization, which is the minimum bar for CUI processing in most DoD interpretations.
What About Microsoft Copilot?
It depends on which Microsoft environment you are using:
Microsoft 365 Commercial (Standard Copilot): No
Standard Microsoft 365 Commercial with Copilot processes data in Microsoft's commercial cloud. While Microsoft's commercial cloud has FedRAMP Moderate authorization, CMMC Level 2 for CUI typically requires FedRAMP High equivalent controls. Standard Copilot does not meet the bar for CUI processing.
Microsoft 365 GCC: Likely No for CUI
GCC (Government Community Cloud) meets FedRAMP Moderate. It is appropriate for Federal Contract Information (FCI) at CMMC Level 1, but most assessors will not accept it for CUI at CMMC Level 2. Copilot availability in GCC is limited as of early 2026.
Microsoft 365 GCC High: Conditionally Yes
GCC High meets FedRAMP High and is the standard environment for CUI processing. Microsoft has been rolling out Copilot features to GCC High on a delayed timeline compared to commercial. As of March 2026, Copilot in GCC High supports Word, Excel, PowerPoint, and Teams, but some features available in commercial Copilot are not yet available. If your entire M365 environment is on GCC High and your System Security Plan (SSP) documents Copilot as a CUI-processing component, this can work, but expect to pay $12 to $35 more per user per month than commercial licensing.
The Private AI Solution
For defense contractors who need AI capabilities for CUI-related work, private AI deployment is the cleanest path to compliance. When you run an LLM on your own infrastructure within your existing CUI boundary, the AI tool inherits the security controls you have already implemented. No new vendor assessment. No new data flow diagrams. No additional FedRAMP authorization to validate.
Specifically, a private LLM deployed within your NIST 800-171 compliant environment satisfies AI-related CMMC requirements because (a minimal integration sketch follows this list):
- Data never leaves your boundary: All prompts, documents, and outputs stay within your controlled environment
- Existing controls apply: Your access controls, encryption, audit logging, and network segmentation extend to the AI system automatically
- Full audit trail: You control the logging infrastructure and can demonstrate complete visibility to assessors
- No vendor dependency: No third-party SSP to obtain, review, and verify
- SSP simplicity: The AI component is documented as an internal system within your existing boundary, not a separate external service requiring its own assessment
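As an illustration of how little the application side changes, here is a minimal sketch of querying a self-hosted model. Popular private LLM servers such as vLLM and Ollama expose OpenAI-compatible endpoints, so the integration code looks like any cloud integration; the difference is that the hostname (here the hypothetical llm.internal.example) resolves only inside your enclave:

# Minimal sketch: querying a self-hosted LLM over an OpenAI-compatible API.
# llm.internal.example is a hypothetical host inside the CUI boundary;
# servers like vLLM and Ollama expose endpoints shaped like this.
import requests

INTERNAL_ENDPOINT = "https://llm.internal.example:8443/v1/chat/completions"

def query_private_llm(prompt: str) -> str:
    resp = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": "llama-3.1-70b-instruct",  # example local model
            "messages": [{"role": "user", "content": prompt}],
        },
        verify="/etc/pki/internal-ca.pem",  # your internal CA bundle
        timeout=120,
    )
    resp.raise_for_status()
    # Prompt and response never left the enclave; existing boundary
    # controls (3.13.x) applied to the whole exchange.
    return resp.json()["choices"][0]["message"]["content"]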
CMMC Controls That Apply to AI Systems
The following NIST 800-171 control families are most relevant when documenting AI tools in your SSP:
Access Control (3.1.x)
Who can use the AI system? Are access rights limited based on role and need? Can you demonstrate that users querying the AI with CUI have appropriate clearance and authorization? For cloud AI tools, this extends to the vendor's access to your data.
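What this can look like in practice: a gateway in front of the model checks group membership before forwarding a query. In the sketch below, the directory contents and group names are hypothetical placeholders for whatever your actual IAM exposes:

# Sketch: role-based gating in front of an internal AI service (3.1.1, 3.1.2).
# The demo directory and group names are hypothetical placeholders for
# your actual IAM integration (LDAP, Entra ID, etc.).
_DEMO_DIRECTORY = {"alice": {"cui-analysts"}, "bob": {"engineering"}}
CUI_AUTHORIZED_GROUPS = {"cui-analysts", "program-office"}

def get_user_groups(username: str) -> set[str]:
    return _DEMO_DIRECTORY.get(username, set())  # replace with a directory query

def authorize_ai_query(username: str, involves_cui: bool) -> bool:
    groups = get_user_groups(username)
    if involves_cui and not (groups & CUI_AUTHORIZED_GROUPS):
        return False  # user lacks authorization for CUI queries
    return True

assert authorize_ai_query("alice", involves_cui=True)
assert not authorize_ai_query("bob", involves_cui=True)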
Audit and Accountability (3.3.x)
Can you log all AI interactions, including prompts submitted and responses generated? Can you retain those logs for the required period? Can you protect the integrity of audit logs? For private AI, this is straightforward. For cloud AI, you are dependent on the vendor's logging capabilities and your ability to export and retain logs independently.
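For a private deployment, this evidence can come from an append-only log written by the gateway that fronts the model. A minimal sketch, assuming a JSON-lines log file; hashing prompt content rather than storing it raw is one design choice for keeping CUI out of log storage, and some programs instead log full text into a CUI-rated store:

# Sketch: audit logging for an internal AI gateway (3.3.1, 3.3.2).
# The log path is hypothetical; hashing prompt content is one design
# choice, not the only defensible one.
import hashlib, json
from datetime import datetime, timezone

AUDIT_LOG = "/var/log/ai-gateway/audit.jsonl"  # hypothetical path

def log_interaction(user_id: str, prompt: str, response: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,  # 3.3.2: actions traceable to individual users
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")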
Identification and Authentication (3.5.x)
Does the AI system enforce multi-factor authentication? Does it integrate with your identity provider? Can you uniquely identify each user's interactions? Private AI deployments integrate directly with your existing IAM infrastructure.
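A common pattern is to front the AI gateway with the same identity provider that protects the rest of the enclave, so MFA is enforced upstream in the IdP rather than reimplemented in the AI tool. A minimal sketch using the PyJWT library to validate the IdP-issued token; the issuer, audience, and JWKS URL are hypothetical:

# Sketch: validating an IdP-issued token at the AI gateway (3.5.x).
# Issuer, audience, and JWKS URL are hypothetical; MFA is enforced
# upstream by the identity provider, not reimplemented here.
import jwt  # PyJWT
from jwt import PyJWKClient

jwks_client = PyJWKClient("https://idp.internal.example/.well-known/jwks.json")

def authenticate(token: str) -> str:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="ai-gateway",
        issuer="https://idp.internal.example",
    )
    return claims["sub"]  # 3.5.1/3.5.2: unique user identification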
System and Communications Protection (3.13.x)
Is all data encrypted in transit and at rest? Are network communications monitored? Is the AI system on a segmented network appropriate for CUI? Can you control data flow at your boundary? This family is where most cloud AI services fail, because you cannot verify or control their internal communications architecture.
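Encryption in transit, at least, is easy to demonstrate for an internal AI endpoint. A minimal sketch of a client-side TLS context that pins a TLS 1.2 floor and requires your internal CA; the certificate path is hypothetical, and FIPS-validated modules (3.13.11) are a separate, platform-level concern:

# Sketch: enforcing TLS 1.2+ toward the internal AI endpoint (3.13.8).
# The CA bundle path is hypothetical; FIPS validation (3.13.11) is a
# platform-level concern outside this snippet.
import ssl

def build_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context(cafile="/etc/pki/internal-ca.pem")
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True      # default, made explicit
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx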
System and Information Integrity (3.14.x)
Is the AI system monitored for security-relevant events? Are inputs and outputs screened for malicious content? Is the system patched and updated regularly? Can you detect unauthorized changes to the model or configuration?
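Detecting unauthorized changes to model weights can be as simple as comparing file hashes against a baseline recorded at deployment time. A minimal sketch, with hypothetical paths:

# Sketch: verifying model files against a known-good baseline (3.14.x).
# Paths and the baseline file are hypothetical.
import hashlib, json, pathlib

BASELINE = pathlib.Path("/etc/ai-gateway/model-hashes.json")
MODEL_DIR = pathlib.Path("/opt/models/llama-3.1-70b")

def sha256_file(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model() -> list[str]:
    baseline = json.loads(BASELINE.read_text())
    return [
        name for name, expected in baseline.items()
        if sha256_file(MODEL_DIR / name) != expected
    ]  # a non-empty list means an unauthorized change: raise an alert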
Practical Guidance for Defense Contractors
Based on our experience helping defense contractors achieve CMMC certification, here is our practical guidance for AI tool usage:
For CMMC Level 1 (FCI Only)
You can use commercial AI tools (ChatGPT, standard Copilot, Claude) for general business tasks that do not involve FCI. For FCI-related tasks, use AI tools in FedRAMP Moderate or higher environments, or deploy private AI. The controls at Level 1 are less restrictive, but you still need to demonstrate that FCI is protected per FAR 52.204-21.
For CMMC Level 2 (CUI)
Do not process CUI through any AI service that is not within your assessed CUI boundary. Options:
- Private AI within your CUI enclave: The recommended approach. Deploy custom AI solutions on your own infrastructure, documented in your SSP
- Microsoft GCC High with Copilot: Acceptable if your entire environment is on GCC High and you document it properly
- FedRAMP High authorized AI service: As vendors achieve FedRAMP High authorization, they become viable options, but verify authorization status independently through the FedRAMP marketplace, not the vendor's marketing claims
For CMMC Level 3 (CUI + Enhanced Security)
Level 3 adds 24 enhanced security requirements from NIST SP 800-172. At this level, private AI is effectively the only viable option. The enhanced controls around penetration-resistant architecture, cyber resiliency, and advanced threat detection are extremely difficult to satisfy with any external AI service.
What to Tell Your Assessor
During your CMMC assessment, the C3PAO (Certified Third-Party Assessment Organization) will ask about AI tools if they appear in your SSP, your interviews, or your system inventory. Be prepared to:
- Identify every AI tool in use across the organization, including personal use of ChatGPT on company devices
- Document which AI tools are within the CUI boundary and which are not
- Demonstrate that CUI cannot flow to AI tools outside the boundary (technical controls, not just policy)
- Provide audit logs showing AI system access and usage
- Show your policy on AI tool usage, including what data types can and cannot be processed by which tools
The most common finding related to AI tools is undocumented usage, where employees use ChatGPT or similar services for CUI-related tasks without the organization's knowledge or documentation. An AI usage policy backed by technical controls (DNS filtering, DLP, browser restrictions) is essential.
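Those technical controls layer naturally: DNS filtering blocks known AI domains at the resolver, and a DLP rule catches marked content before it leaves. As a simplified illustration of the second layer, the sketch below scans outbound prompts for standard NARA CUI banner markings; a production DLP rule set would be far broader:

# Sketch: blocking outbound prompts that carry CUI banner markings.
# Patterns follow the NARA CUI marking scheme ("CUI//...", "CONTROLLED");
# a production DLP rule set would be much broader than this.
import re

CUI_MARKING_PATTERNS = [
    re.compile(r"\bCUI//[A-Z-]+", re.IGNORECASE),
    re.compile(r"\bCONTROLLED UNCLASSIFIED INFORMATION\b", re.IGNORECASE),
    re.compile(r"^\s*CUI\s*$", re.MULTILINE),
]

def contains_cui_markings(text: str) -> bool:
    return any(p.search(text) for p in CUI_MARKING_PATTERNS)

def pre_submit_check(prompt: str, destination_in_boundary: bool) -> None:
    if contains_cui_markings(prompt) and not destination_in_boundary:
        raise PermissionError("CUI marking detected; external AI service blocked")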
Next Steps
If you are a defense contractor preparing for CMMC assessment and using or planning to use AI tools, start with these actions:
- Inventory: Catalog every AI tool in use, including browser extensions, Copilot features, and personal accounts (a log-scan sketch follows this list)
- Classify: Determine which tools are used with CUI, FCI, or neither
- Evaluate: Assess whether each CUI-adjacent tool meets all applicable NIST 800-171 controls
- Mitigate: Remove non-compliant tools from CUI workflows and implement technical controls to prevent unauthorized usage
- Document: Update your SSP, policies, and procedures to reflect your AI tool posture
- Deploy compliant alternatives: Implement private AI solutions for CUI-related AI needs
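For the inventory step, outbound DNS or proxy logs are often the fastest source of truth. A minimal sketch, assuming a plain-text log with one requested domain per line (adapt the parsing to your log format); the domain list covers the consumer services named in this article:

# Sketch: surfacing AI service usage from a DNS or proxy log.
# Assumes one requested domain per line; adapt parsing to your log format.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.microsoft.com",
}

def find_ai_usage(log_path: str) -> dict[str, int]:
    hits: dict[str, int] = {}
    with open(log_path) as f:
        for line in f:
            domain = line.strip().lower()
            if domain in KNOWN_AI_DOMAINS or any(
                domain.endswith("." + d) for d in KNOWN_AI_DOMAINS
            ):
                hits[domain] = hits.get(domain, 0) + 1
    return hits  # feed the results into your SSP system inventory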
Our team has helped dozens of defense contractors navigate the intersection of CMMC compliance and AI adoption. Contact us for a confidential assessment of your AI tool usage and CMMC readiness.
Frequently Asked Questions
Can defense contractors use any AI tool for CUI processing?
Only AI tools deployed within your CMMC-assessed CUI boundary or hosted in FedRAMP High authorized environments. As of March 2026, this effectively limits options to private AI deployments on your own infrastructure or Microsoft GCC High with Copilot. No major commercial AI service (ChatGPT, Claude, Gemini) has achieved FedRAMP High authorization for their standard consumer or business tiers.
Will using ChatGPT on a company laptop fail a CMMC assessment?
If the laptop is within your CUI boundary and ChatGPT is not documented and approved in your SSP, yes. Even if the employee only uses ChatGPT for non-CUI tasks, the assessor will flag the presence of an undocumented external service on a CUI system. The safest approach is to block access to consumer AI services on all CUI-boundary devices via DNS filtering or browser restrictions, and provide an approved private AI alternative.
How long does it take to deploy a CMMC-compliant private AI system?
A private AI deployment within an existing CMMC-compliant infrastructure typically takes 2-4 weeks for hardware procurement and setup, plus 1-2 weeks for SSP documentation updates and security configuration. If you are building your CMMC compliance program and AI deployment simultaneously, the AI component adds approximately 2-3 weeks to the overall timeline. Our team can accelerate this with pre-configured hardware and deployment playbooks.
Craig Petronella is the CEO of Petronella Technology Group, a CMMC Registered Provider Organization (RPO) with over 23 years of experience helping defense contractors achieve and maintain compliance. His firm specializes in the intersection of cybersecurity compliance and emerging technology adoption.
Get a Free AI Assessment
Need help deploying AI tools that satisfy CMMC requirements? Our team will assess your current AI usage, identify compliance gaps, and design a compliant AI strategy. Schedule your free assessment or call us at 919-348-4912.