ComplianceArmor · SOC 2 for AI Startups

SOC 2 for AI startups. Audit-ready in 45 days.

A done-for-you SOC 2 Type I package built for foundation-model wrappers, agent platforms, copilots, and AI infrastructure companies. Trust Services Criteria scoped for model APIs, prompt logs, and customer data isolation in fine-tuning, with sub-processor disclosures for OpenAI, Anthropic, and Google.

SOC 2 Type I Readiness | For AI & LLM Platforms | 45 Days from Kickoff | BBB A+ Since 2003
Why your buyers ask

Every Fortune 500 procurement team puts SOC 2 in front of the AI deal.

The first generation of enterprise AI deployments turned procurement into an AI-risk function overnight. The vendor security questionnaire is no longer a compliance formality, it is the gate. Enterprise buyers have watched AI vendors leak training data, surface confidential prompts to other tenants, and route customer inputs through model providers without a written sub-processor agreement. Their answer is the document they already use to gate every other SaaS contract: a recent SOC 2 report, scoped correctly, dated within twelve months, with specific language about how AI inputs and outputs are handled.

For an AI startup, that report becomes the price of admission to a Fortune 500 logo, a regulated buyer, or any enterprise procurement office that has updated its third-party risk policy in the last 18 months. Without it, you stay in pilot. With it, the pilot converts. ComplianceArmor builds the SOC 2 Type I package that closes the gate, scoped to your AI architecture, with the language enterprise security reviewers expect to see when a model is in the data path.

This page is for founders, CTOs, and heads of security at AI startups who are facing one of three pressures: a Fortune 500 procurement deal that has stalled on the security review, an enterprise customer whose CISO will not sign without a SOC 2 in hand, or an investor diligence cycle where the lead has flagged the absence of a SOC 2 program as a risk factor in the term sheet. The path through every one of these is the same: a SOC 2 Type I report, scoped to the AI system, in 45 days.

TSC scoping for AI platforms

For AI startups, Confidentiality and Privacy carry the weight that Availability used to.

Security is required on every SOC 2. For AI platforms, the criteria your buyer's procurement team focuses on are Confidentiality (because customer prompts often contain trade secrets, source code, and PII) and Privacy (because model providers, vector indexes, and observability tools all become sub-processors of personal information). Processing Integrity matters when your AI output is used to make a decision. Availability matters less than uptime SLAs imply, but still gets scoped when an enterprise contract names it.

Required · CC1-CC9

Security

Governance, RBAC for model deployment, change management for prompt templates and system prompts, incident response that names prompt-injection and data-poisoning scenarios.

Optional · A1

Availability

Uptime, capacity, model-provider failover. Worth scoping when an enterprise contract has a 99.9 percent SLA. Often deferred to Type II for early-stage AI startups.

Optional · PI1

Processing Integrity

For AI inference services and decision-support platforms: input validation, output reconciliation, evals, and an audit trail when a model output drives or assists a customer-impacting action.

Recommended · C1

Confidentiality

Customer prompts often contain trade secrets, code, and PII. Scoping Confidentiality covers data classification, encryption, retention of prompt and output logs, and tenant isolation for fine-tuning and RAG.

Recommended · P1-P8

Privacy

Notice, choice, consent, and disclosure of every sub-processor in the model chain (OpenAI, Anthropic, Google, vector DB, observability). Maps to your DPA and customer privacy notice.

AI-specific control considerations

The control narratives that make an AI SOC 2 different from a SaaS SOC 2.

A standard SaaS SOC 2 covers logical access, change management, incident response, and vendor management. An AI SOC 2 covers all of that, plus a layer of controls that exist because the model itself is a system component. Enterprise security reviewers know the difference. They will ask, in plain language, how you handle the eight items below. ComplianceArmor writes a control narrative for each one.

  • Foundation-model API risk. Every prompt your platform sends to OpenAI, Anthropic, Google, or a self-hosted model crosses a trust boundary. The control narrative documents which model providers are sub-processors, which contracts opt out of training on your data, what happens on provider outage, and how you isolate per-tenant API keys.
  • Prompt injection logging. Prompt injection is the SQL injection of the LLM era. The control narrative covers detection signals, log retention, who can read full-fidelity prompt logs (which often contain customer secrets), and the alerting threshold that escalates a suspected injection to incident response.
  • Model output PII handling. When a model echoes back PII, it does not stop being PII. The narrative covers redaction at egress, output filtering for prohibited categories, and the response procedure when an output contains data the user did not provide.
  • Training data provenance. Whether you fine-tune, embed, or only call APIs, the auditor will ask where training and embedding data came from, whether it was licensed, whether customer data is excluded, and whether training-eligible data is segmented from inference-only data.
  • Customer data isolation in fine-tuning. If you fine-tune per-tenant or maintain per-tenant RAG indexes, the narrative documents the boundary, the controls that prevent cross-tenant retrieval, and the test you run to verify isolation before each release.
  • RBAC for model deployment. Who can push a new system prompt to production. Who can roll back a model version. Who can change the temperature or top-p of a customer-facing endpoint. These are change-management controls, framed for an AI surface area.
  • Audit trail for AI decisions. When the model output drives a decision (loan approval, claims triage, document classification), the narrative documents the input, the model version, the output, the prompt, and the human override path, with retention sufficient for the longest applicable customer dispute window.
  • Sub-processor disclosure. The vendor register and customer DPA list every sub-processor: model providers, vector databases, evaluation services, observability platforms, and any service that sees a prompt, an embedding, or an output. The list and notification process are part of Privacy and Confidentiality TSC.

Petronella Technology Group has reviewed dozens of AI startup architectures and writes control narratives that match the way the auditor will ask the question. The package ships with the eight narratives above pre-drafted, then scoped to your specific stack during the readiness phase. Learn more about our AI services or read the AI readiness assessment.

What you receive

A SOC 2 Type I package with the AI-specific narratives baked in.

Branded. Editable. Yours forever. No subscription, no platform lock-in. Every artifact below is scoped to your AI architecture and named the way a SOC 2 auditor expects to see it.

System Description (AI-aware)

Section 3 description that names the model, the providers, the data path, and the trust boundary, in language an enterprise reviewer expects.

Model Risk Register

Foundation-model providers, fine-tunes, evaluation tooling, RAG components, with risk scoring and treatment plans.

TSC Control Mapping

Every control mapped to Security, Confidentiality, and Privacy criteria, with point-of-focus coverage notes.

Information Security Policy Set

Access control, change management, incident response, vendor management, BC/DR, plus an AI-acceptable-use policy.

Prompt & Output Log Policy

Retention, encryption, access, redaction, and the handling rules for logs that may contain customer PII or trade secrets.

Sub-processor Disclosure

The customer-facing list of every model provider, vector store, and observability tool that sees a prompt, embedding, or output.

AI Incident Response Playbook

Prompt injection, jailbreak, training-data leakage, hallucination-driven harm, and model-provider outage runbooks.

CPA Evidence Index

Per-control list of artifacts your auditor will request, with the AI-specific evidence (eval results, prompt-test logs) called out.

System Boundary Diagram

Architecture, data flows, and the trust boundary, with model providers and vector indexes drawn as named external systems.

Vendor Risk Register

Every model provider, vector DB, eval service, and AI observability tool with their SOC 2 reports and complementary user controls.

Audit-Readiness Checklist

The day-one punch list before kickoff: open tickets, owner sign-offs, evidence freshness, eval coverage.

Customer-Facing Summary

The one-page enterprise security review summary your sales team can ship under NDA before the full report lands.

SOC 2 Type I · Done-For-You · AI-Aware

SOC 2 Type I from $14,997 in 45 days, scoped for AI startups.

One fixed price covers the readiness program, the AI-aware documentation package, evidence collection support, and walkthrough prep. The independent CPA audit is a separate engagement with your auditor of choice.

  • System description with model providers, prompt logs, and trust boundary called out
  • Confidentiality and Privacy criteria scoped for AI sub-processor chains
  • AI-specific incident response, RBAC, and audit-trail narratives
  • Sub-processor disclosure and customer DPA addendum
  • Audit-Ready Promise: 50% fee refund if a clean Type I cannot be issued because of our work
Independent CPA audit fee disclosed up front: $5,000-$50,000 depending on firm and scope. Paid directly to your auditor. SOC 2 is an attestation, not a certification, and must be performed by a licensed CPA firm. Petronella Technology Group is not a CPA firm and provides readiness, implementation, and evidence-collection services only.
From $14,997 flat fee · 45 days · SOC 2 Type I package
The Audit-Ready Promise

If we missed something, we fix it free.

Every SOC 2 engagement carries the Petronella Technology Group Audit-Ready Promise. If your CPA flags a gap that should have been in the package, we close it at no charge within 30 days. If a clean SOC 2 Type I cannot be issued because of our work, we refund 50% of our fee. The package is yours forever, in editable native formats, with no subscription and no DRM.

Frequently asked

SOC 2 questions AI startup buyers ask before they sign.

Do we need SOC 2 Type I, or do enterprise buyers want Type II?

Most enterprise procurement teams will accept Type I to unblock the deal, then ask for Type II at the renewal or after the audit window has run. The pattern that closes Fortune 500 AI deals fastest is: ship Type I in 45 days to unblock, start the Type II observation window the day Type I is issued, and deliver Type II at renewal. We scope your Type I so the same controls flow straight into a 6 or 12-month Type II window without rework. See the SOC 2 software hub for the full Type I to Type II pathway.

Which Trust Services Criteria should an AI startup scope in?

Security is required. For AI startups we recommend adding Confidentiality and Privacy at minimum. Confidentiality covers customer prompts, embeddings, and outputs that often contain trade secrets, source code, or PII. Privacy covers the sub-processor chain (OpenAI, Anthropic, Google, vector stores, observability tools) and the notice, choice, and disclosure obligations that follow from any of those touching personal data.

Add Processing Integrity if your AI output drives or assists a customer-impacting decision (loan triage, claims, classification, search ranking). Add Availability when an enterprise contract names an SLA. We help you scope the minimum that satisfies the buyer in front of you and expand from there.

How do we handle OpenAI, Anthropic, and Google as sub-processors?

Each model provider is a sub-processor under your customer DPA. The package documents the legal posture (which API tier opts your data out of training), the technical posture (how prompts and outputs are routed and retained), and the customer-facing list (the sub-processor disclosure your enterprise buyers will request). We pre-draft the language for the major providers and scope the rest of your stack during readiness.

Note that "OpenAI does not train on API data" is a legal commitment in their enterprise terms, not a technical one. The control narrative reflects that distinction and explains the contract clauses that bind it.

How do we prove tenant isolation when we fine-tune per customer?

The control narrative documents your isolation boundary (separate base models, separate fine-tune jobs, separate weights, separate retrieval indexes, or a routing layer) and the test you run before each release to confirm cross-tenant data does not surface. The CPA will sample evidence: deployment manifests, isolation test results, change tickets, and the access log for who can read across tenants.

If your isolation strategy relies on prompt-level scoping rather than model-level separation, that is acceptable for SOC 2, but the narrative has to be explicit about the threat model and the controls that compensate.

What about prompt injection? Is that in scope for SOC 2?

Prompt injection is in scope under Security and, when customer data is at risk, Confidentiality. The control narrative covers detection (input filtering, output classification, anomaly signals), logging (with retention and access controls because logs may contain secrets), and incident response (the threshold at which a suspected injection escalates to your IR team).

The CPA does not test the effectiveness of your detection, but they will confirm the controls are designed and in place. The package ships with a prompt-injection runbook that ties detection signals to your incident classification scheme.

Do we have to disclose model output PII handling?

Yes, when Privacy or Confidentiality TSC are scoped. The narrative covers redaction at egress, output classifiers for prohibited categories, the response procedure when an output contains data the user did not provide, and the incident path if a model surfaces another tenant's data. Enterprise security reviewers ask this question explicitly, often as the first AI-specific item in their questionnaire.

How does training data provenance show up in the audit?

The control narrative documents the source of any data used for training, fine-tuning, or embedding: licensed datasets, customer data with explicit consent, public data with legal review, or API-only with no training. The auditor will sample the licensing records, the consent flows, and the segmentation between training-eligible and inference-only data. If you do not fine-tune, the narrative says so explicitly, which closes the question quickly.

How is this different from Vanta or Drata for an AI startup?

Vanta and Drata are evidence-collection platforms. Your team still writes the system description, the policies, and the AI-specific control narratives. ComplianceArmor is done-for-you: Petronella Technology Group writes the documentation for you, scoped to your AI architecture, with the eight AI-specific narratives pre-drafted. If you already use Vanta or Drata for ongoing evidence, the package drops cleanly into them with no rework.

See the ComplianceArmor vs Vanta and vs Drata comparisons for the side-by-side breakdown.

Stop blocking on the security review. Ship the SOC 2.

Schedule a 30-minute SOC 2 demo. We walk through your AI architecture, scope your TSC live, and show you the deliverables your CPA and your enterprise buyer will see on day one.