Confidential Computing: Collaborate Without Leaking Data

Organizations increasingly face a paradox: the highest-value insights lie in joint analysis across data sets that span partners, competitors, and regulators, yet the risk of exposure, misuse, and regulatory breach grows with every copy and movement of data. Confidential computing resolves this paradox by allowing multiple parties to compute over combined data without revealing the underlying inputs to each other or to the infrastructure that runs the computation. Done well, it turns “can we share?” into “how quickly can we collaborate?”—with cryptographic assurance rather than blind trust.

This article explores the foundations, architectures, trade-offs, and practical patterns for using confidential computing to collaborate without leaking data. We will anchor abstract concepts in real-world examples across health, finance, advertising, and AI, and lay out a pragmatic path to adoption.

What Is Confidential Computing?

Confidential computing is a security model and technology set that protects data in use by isolating computation in hardware-enforced trusted execution environments (TEEs) or by using cryptography such as multiparty computation (MPC) and homomorphic encryption (HE). It complements encryption at rest and in transit by ensuring that even during processing, data cannot be read or altered by unauthorized parties—including cloud administrators, hypervisors, and co-tenants on shared infrastructure.

Two broad families power confidential collaboration:

  • Hardware-based isolation: TEEs create protected memory regions where code and data are shielded at runtime. Examples include Intel SGX and TDX, AMD SEV-SNP, Arm CCA, and cloud offerings such as AWS Nitro Enclaves, Azure confidential VMs and containers, and Google Confidential VMs and Confidential Space.
  • Cryptography-first approaches: MPC splits secrets among parties and computes on shares; HE enables calculations on encrypted data without decryption. These techniques reduce trust in hardware but can be computationally heavy.

In practice, many systems combine these approaches with additional privacy techniques (secure aggregation, differential privacy) and rigorous governance to match cost, latency, and risk profiles.

Threat Models and Trust Assumptions

Before selecting a technique, define what you are defending against and who you need to trust:

  • Infrastructure insider risk: Do you need to protect against cloud operators, hypervisors, or host OS access? TEEs and confidential VMs primarily address this.
  • Counterparty curiosity: Do you want to prevent partners from seeing your raw data? MPC/HE and enclave-based joint computation can do this.
  • Regulatory exposure: Are there restrictions on moving data across borders or mixing personally identifiable information (PII)? Confidential execution can keep data in jurisdiction and enforce purpose limitation.
  • Side channels and implementation flaws: Can your threat model tolerate residual risk from microarchitectural side channels? If not, you’ll need algorithmic hardening and conservative designs.

Write these assumptions down. They determine the architecture, vendors you can use, and the evidence you must produce for auditors and partners.

Core Primitives for Collaboration Without Leakage

Trusted Execution Environments and Remote Attestation

TEEs provide a shielded region of memory where the processor enforces confidentiality and integrity. Crucially, they support remote attestation: a cryptographic proof that a specific piece of code is running inside genuine hardware with a known security configuration. Attestation transforms “trust me” into “verify me.”

A common attestation flow for a TEE-backed collaboration service looks like this:

  1. Bootstrapping: The enclave or confidential VM boots with a measured launch. The CPU or firmware records a measurement (hash) of the code and environment.
  2. Quote creation: The TEE produces a signed attestation report (quote) containing the measurement, configuration, and a nonce from the verifier to prevent replay.
  3. Verification: Each data owner verifies the quote against the manufacturer’s root of trust (e.g., Intel, AMD, Arm) or a cloud attestation service, checks that the measurement matches the approved binary, and enforces policy (e.g., minimum microcode levels).
  4. Key release: Only after verification do data owners provision decryption keys or API tokens to the enclave, often via a key management service (KMS) that performs attestation-bound key release.
  5. Secure session: The enclave establishes a mutually authenticated channel (e.g., RA-TLS) so that inputs and outputs stay confidential in transit.

From here, joint computations proceed inside the protected environment, and results are returned with optional integrity receipts (e.g., signed hashes of outputs bound to the attestation report).
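Steps 2 through 4 of the flow above reduce to a verifier-side check. The sketch below is a toy simulation: the HMAC and the hardcoded constants stand in for the vendor's signature chain and real measurement values, which come from hardware and manufacturer roots of trust in practice.

```python
import hashlib
import hmac
import os

# Hypothetical stand-ins: a real verifier checks the vendor's attestation
# signature chain (Intel/AMD/Arm roots), not an HMAC under a shared key.
VENDOR_KEY = os.urandom(32)          # simulates the hardware root of trust
APPROVED_MEASUREMENT = hashlib.sha256(b"enclave-binary-v1").hexdigest()
MIN_FIRMWARE = 7

def make_quote(measurement: str, firmware: int, nonce: bytes) -> dict:
    """Simulate the TEE producing a signed attestation report (quote)."""
    payload = f"{measurement}|{firmware}|{nonce.hex()}".encode()
    return {
        "measurement": measurement,
        "firmware": firmware,
        "nonce": nonce,
        "sig": hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_quote(quote: dict, expected_nonce: bytes) -> bool:
    """Verifier side: check signature, freshness, measurement, and policy."""
    payload = f"{quote['measurement']}|{quote['firmware']}|{quote['nonce'].hex()}".encode()
    expected_sig = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(quote["sig"], expected_sig)      # genuine hardware
        and quote["nonce"] == expected_nonce                 # replay protection
        and quote["measurement"] == APPROVED_MEASUREMENT     # approved binary
        and quote["firmware"] >= MIN_FIRMWARE                # patch-level policy
    )

nonce = os.urandom(16)
good = make_quote(APPROVED_MEASUREMENT, firmware=8, nonce=nonce)
stale = make_quote(APPROVED_MEASUREMENT, firmware=5, nonce=nonce)
assert verify_quote(good, nonce)
assert not verify_quote(stale, nonce)   # firmware below policy minimum
```

Only after a quote passes all four checks would a data owner release keys to the environment.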

Enclaves vs. Confidential VMs and Containers

Enclaves (e.g., SGX, Nitro Enclaves) isolate at the process boundary; confidential VMs isolate the whole virtual machine with memory encryption and integrity protection (e.g., SEV-SNP, TDX). Confidential containers add attestation to container runtimes. Decision factors include:

  • Compatibility: Confidential VMs run unmodified applications; enclaves may require SDKs or LibOS layers (Gramine, SCONE, Occlum) and can have memory constraints.
  • Granularity: Enclaves minimize trust in the guest OS; VMs trust the guest OS but shield from the host.
  • Performance: Enclave context switches and limited EPC (enclave page cache) can create overhead; confidential VMs offer broader memory but rely on CPU features for performance.

Secure Multiparty Computation (MPC)

MPC lets several parties jointly compute a function over their inputs while revealing only the output. Data is secret-shared or garbled, and protocols ensure no single party learns others’ inputs. MPC shines when hardware trust is impossible or politically unacceptable. However, general-purpose MPC can be latency-heavy and bandwidth-intensive, so it’s most effective for well-scoped functions like set intersection, simple linear models, or scoring.
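A minimal additive secret-sharing sketch shows why no single party learns another's input: each share is uniformly random on its own, yet the shares recombine to exactly the joint result.

```python
import random

PRIME = 2**61 - 1  # field modulus; each share is uniform mod PRIME

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three parties each secret-share their private input.
inputs = [120, 45, 300]
all_shares = [share(x, 3) for x in inputs]

# Party j locally adds the j-th share of every input -- it sees only
# uniformly random values, never the inputs themselves.
partial_sums = [sum(s[j] for s in all_shares) % PRIME for j in range(3)]

# Recombining the partial sums reveals only the output of the function.
joint_sum = sum(partial_sums) % PRIME
assert joint_sum == sum(inputs)
```

Real protocols add multiplication (via Beaver triples or garbled circuits) and malicious-security checks; the cost of those steps is what makes scoping the function so important.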

Homomorphic Encryption (HE)

HE allows computations on ciphertexts; decryption reveals the same result as if computed on plaintext. Fully homomorphic encryption (FHE) supports arbitrary circuits but is still expensive. Many practical deployments use leveled or partially homomorphic schemes (additive or multiplicative) for targeted tasks such as encrypted aggregation or statistics. HE is often combined with TEEs, letting enclaves handle orchestration while heavy math remains encrypted.
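A toy Paillier cryptosystem illustrates additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The primes here are deliberately tiny and unsafe; real deployments use a vetted HE library.

```python
import math
import random

# Toy Paillier parameters -- far too small for real use.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse for decryption

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(42), encrypt(100)
assert decrypt((c1 * c2) % n2) == 142
```

Additive schemes like this cover encrypted aggregation well; anything requiring multiplications of ciphertexts pushes you toward leveled or fully homomorphic schemes, with the cost profile described above.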

Differential Privacy and Secure Aggregation

Differential privacy (DP) adds calibrated noise to outputs, bounding the leakage of any individual record even after repeated queries. Secure aggregation protocols ensure the aggregator sees only the sum, not per-party values. DP is especially useful in repeated analytics scenarios, while secure aggregation is used in federated learning and telemetry.
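The Laplace mechanism behind DP fits in a few lines, assuming a simple counting query whose sensitivity is 1 (adding or removing one record changes the count by at most 1):

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Counting query with epsilon-DP: sensitivity of a count is 1,
    so Laplace noise with scale 1/epsilon bounds per-record leakage."""
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) via inverse-CDF transform.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
# The noisy answer centers on the true count (3); smaller epsilon means
# more noise, and repeated queries consume the cumulative budget.
```

The epsilon parameter is the knob that trades utility for privacy, which is why budget tracking across queries (discussed later) matters as much as the mechanism itself.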

Policy-Aware Data Clean Rooms

Data clean rooms provide controlled environments where queries are executed under strict policy. Modern implementations integrate TEEs for runtime protection, MPC for specific joins, DP for output privacy, and attestation for verifiability. The combination enables collaborations like ad measurement or product analytics without exposing raw event streams.

Collaboration Patterns that Avoid Data Leakage

Cross-Organization Analytics Without Raw Data Movement

Imagine two retailers wanting to know the overlap of their customer bases and purchase behaviors without sharing PII. A TEE-based pipeline can:

  • Accept hashed or tokenized identifiers via attested, encrypted channels.
  • Perform private set intersection (PSI) within the enclave, mapping shared customers.
  • Compute aggregated metrics (e.g., category-level spend bands) with DP noise.
  • Return only aggregates, never per-customer rows.

The compute runs in a confidential environment, key release is bound to measured code, and PII never leaves protected memory.
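Putting the pieces together, a simplified simulation of this pipeline might look like the following; the salt, identifiers, and epsilon are illustrative, and in production the salt lives only inside the attested enclave while tokens arrive over attestation-bound channels.

```python
import hashlib
import hmac
import math
import random

ENCLAVE_SALT = b"per-study-secret"  # hypothetical per-study linkage key

def tokenize(customer_id: str) -> str:
    """Keyed tokenization of identifiers before they enter the pipeline."""
    return hmac.new(ENCLAVE_SALT, customer_id.encode(), hashlib.sha256).hexdigest()

def enclave_overlap(tokens_a: set[str], tokens_b: set[str], epsilon: float) -> float:
    """Runs 'inside' the enclave: intersect tokens, release only a DP count."""
    true_overlap = len(tokens_a & tokens_b)
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return max(0.0, true_overlap + noise)   # only the aggregate leaves

retailer_a = {tokenize(c) for c in ["alice", "bob", "carol", "dave"]}
retailer_b = {tokenize(c) for c in ["carol", "dave", "erin"]}
overlap = enclave_overlap(retailer_a, retailer_b, epsilon=1.0)
# True overlap is 2; each retailer learns only the noisy aggregate.
```

A real deployment would replace the plain intersection with a proper PSI protocol and bind `enclave_overlap` to an attested binary so neither retailer has to trust the other's claims about what ran.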

Joint Model Training

Federated learning and confidential training enable multiple data owners to train a shared model without pooling data. Options include:

  • Federated averaging with secure aggregation: Each party trains locally and shares encrypted gradients. The aggregator learns only the sum, possibly with DP to limit memorization.
  • Enclave-based centralized training: Parties upload encrypted data to a TEE that trains the model. Remote attestation and audit logs show what code ran and when.
  • MPC-based training for small models: Useful when hardware trust is off the table and models are simple (e.g., logistic regression).

This pattern appears in healthcare outcomes models, credit risk scoring, and fraud detection across institutions.
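The secure-aggregation option can be illustrated with pairwise masks that cancel in the sum. This sketch assumes each pair of parties has already agreed on a seed (via key exchange in practice) and that updates are quantized to integers:

```python
import random

PRIME = 2**61 - 1

def masked_update(party: int, update: int, pair_seeds: dict) -> int:
    """Add a pairwise mask per peer: +mask toward higher-indexed peers,
    -mask toward lower-indexed ones, so all masks cancel in the sum."""
    masked = update
    for peer, seed in pair_seeds[party].items():
        mask = random.Random(seed).randrange(PRIME)  # both peers derive it
        masked += mask if party < peer else -mask
    return masked % PRIME

parties = [0, 1, 2]
updates = {0: 10, 1: 20, 2: 33}  # e.g., quantized local gradients

# Each unordered pair agrees on one seed; both members hold a copy.
seeds = {(i, j): random.randrange(2**32) for i in parties for j in parties if i < j}
pair_seeds = {p: {q: seeds[tuple(sorted((p, q)))] for q in parties if q != p}
              for p in parties}

# The aggregator sees only masked values, yet their sum is exact.
total = sum(masked_update(p, updates[p], pair_seeds) for p in parties) % PRIME
assert total == sum(updates.values())
```

Production protocols layer dropout recovery and malicious-security checks on top of this core cancellation trick.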

Privacy-Preserving Inference

A vendor can offer a model-scoring API that never sees raw customer data in the clear. Clients send encrypted features and receive encrypted predictions. Implementation options include:

  • TEE inference: The service verifies attestation, then decrypts inputs only inside the enclave and returns predictions.
  • HE-based scoring: For linear or tree-based models adapted to HE-friendly operations, compute directly on ciphertexts to avoid any decryption by the service.

Ad Measurement and Attribution

Advertisers and publishers can evaluate campaign performance without sharing user-level logs. A clean room can compute conversions by cohort, overlap metrics, and reach/frequency using MPC for set operations and DP for reporting. The system returns only aggregated outputs with k-anonymity thresholds enforced in code whose measurement is attested.

Financial Crime Detection Networks

Banks are incentivized to share signals on suspicious activity but constrained by confidentiality laws. With TEEs or MPC, institutions can share hashed counterparty identifiers and risk signals to compute network features—like the number of hops between known bad actors—without revealing underlying customer data. Results feed each bank’s internal monitoring models.

Health Research and Drug Discovery

Hospitals and pharmaceutical companies can collaborate on outcome analyses. Patient-level data stays within each hospital or is loaded into an attested enclave in-country. The pipeline performs joins via privacy-preserving record linkage (PPRL), runs approved statistical analyses, and outputs DP-protected results. Ethics boards review the code measurement rather than a human-operated process, improving reproducibility and compliance.

Architecture Blueprints

TEE-Based Analytics Pipeline

A reference architecture for joint analytics with TEEs:

  1. Trust material: Publish the SHA-256 of the approved container or enclave binary, the expected security configuration (e.g., TDX with specified firmware), and the policy describing allowed queries.
  2. Provisioning: Partners verify attestation, then provision per-partner data keys via an attestation-aware KMS that checks measurements and policy claims (e.g., IETF RATS Evidence/Attestation Results).
  3. Data ingress: Partners upload encrypted files or stream via mTLS bound to attestation. The enclave decrypts inside protected memory.
  4. Compute: The job engine enforces query whitelists, row-level protections, and minimum cohort sizes. Optionally, DP adds noise calibrated to a privacy budget.
  5. Output verification: The enclave signs outputs and emits an attestation-bound receipt for auditors and downstream systems.
  6. Data disposal: After compute, memory is cleared, sealed state is minimized, and data keys are destroyed or rotated.
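The guardrails in step 4 reduce to checks like the following sketch, where the template allowlist and cohort threshold are hypothetical policy values:

```python
# Hypothetical guardrails: queries must match an approved template, and
# cohorts below the minimum size are suppressed before results egress.
ALLOWED_TEMPLATES = {"spend_by_category", "overlap_by_region"}
MIN_COHORT = 50

def run_guarded_query(template: str, cohorts: dict[str, int]) -> dict[str, int]:
    if template not in ALLOWED_TEMPLATES:
        raise PermissionError(f"query template {template!r} not allowlisted")
    # Suppress any cohort smaller than the policy threshold.
    return {name: n for name, n in cohorts.items() if n >= MIN_COHORT}

result = run_guarded_query("spend_by_category",
                           {"groceries": 1200, "luxury": 12, "fuel": 430})
assert result == {"groceries": 1200, "fuel": 430}
```

Because the enclave's measurement covers this enforcement code, partners can verify that these checks, not just a promise of them, gated every output.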

Key Management and Secret Provisioning

Keys should only be usable by the right code running in the right hardware at the right time:

  • Attestation-gated KMS: Keys are wrapped under a KMS master key and released only if the requester’s attestation report matches policy (measurement, vendor, firmware).
  • Sealed storage: The enclave seals transient state to disk with keys derived from the TEE, binding sealed data to measurement and platform.
  • Per-job data keys: Use ephemeral data keys for each job and rotate aggressively. Encrypt-at-rest remains, but the trust boundary is the enclave.
  • Split knowledge: For highly sensitive projects, separate key shares across organizations or an external HSM, requiring quorum to release.
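Combining attestation-gated release with split knowledge might look like the sketch below. The XOR split and policy fields are illustrative; a production KMS would verify full attestation evidence rather than a claims dictionary.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """XOR split-knowledge: any n-1 shares reveal nothing about the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share completes the XOR
    return shares

POLICY = {"measurement": "abc123", "min_firmware": 7}  # illustrative values

def release_share(share: bytes, claims: dict) -> bytes:
    """Each share holder independently checks attested claims before release."""
    if claims["measurement"] != POLICY["measurement"]:
        raise PermissionError("measurement does not match policy")
    if claims["firmware"] < POLICY["min_firmware"]:
        raise PermissionError("firmware below policy minimum")
    return share

data_key = os.urandom(32)
shares = split_key(data_key, 3)
claims = {"measurement": "abc123", "firmware": 8}
recovered = reduce(xor_bytes, (release_share(s, claims) for s in shares))
assert recovered == data_key
```

Because every holder must agree, no single organization, including the platform operator, can unilaterally unlock the data.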

Auditability and Verifiability

Trustworthy collaboration is not just about preventing leaks; it’s about proving you didn’t leak.

  • Transparency logs: Append-only, signed logs record attestation reports, policy versions, code measurements, and job metadata (inputs, query templates, outputs’ hashes).
  • Attestation evidence: Store the full evidence package and verifier results. Bind outputs to attested identities via signatures.
  • Independent verification: Allow partners or regulators to verify evidence using manufacturer roots and open verifiers. Consider third-party attestation services.
  • Reproducibility: Version data schemas, code, and parameters. If allowed, deterministic runs on synthetic data show that outputs match expected properties.
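A hash-chained append-only log is the core mechanism behind such transparency logs; this sketch omits signatures and external witnessing for brevity:

```python
import hashlib
import json

class TransparencyLog:
    """Append-only log: each entry commits to its predecessor, so any
    retroactive edit changes every subsequent hash."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis hash

    def append(self, record: dict) -> str:
        body = json.dumps(record, sort_keys=True)
        self.head = hashlib.sha256((self.head + body).encode()).hexdigest()
        self.entries.append((body, self.head))
        return self.head

    def verify(self) -> bool:
        h = "0" * 64
        for body, stored in self.entries:
            h = hashlib.sha256((h + body).encode()).hexdigest()
            if h != stored:
                return False
        return True

log = TransparencyLog()
log.append({"job": "overlap-2024-06", "measurement": "abc123", "policy": "v3"})
log.append({"job": "spend-2024-06", "output_hash": "deadbeef", "policy": "v3"})
assert log.verify()
log.entries[0] = ('{"job": "tampered"}', log.entries[0][1])  # retroactive edit
assert not log.verify()
```

Publishing the head hash to partners (or a public log) makes tampering detectable by anyone holding an earlier head.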

Data Lifecycle and Retention

Define how long data and keys persist and who can trigger deletion:

  • Ephemeral compute: Prefer stateless job execution where input data is streamed, processed, and discarded.
  • Separation of duties: The team operating the enclave platform should not have unilateral control over data retention; partner keys enforce constraints.
  • Jurisdictional controls: Keep compute in-region, and use attestation claims that include geolocation or cloud region identifiers if supported.

Practical Realities and Trade-offs

Performance and Cost Considerations

Choosing the right technique involves balancing latency, throughput, and compute cost:

  • TEEs: Overhead is typically modest for CPU-bound workloads but can rise with enclave paging or I/O transitions. Confidential VMs often deliver near-native performance, making them suitable for batch analytics and model training.
  • MPC: Communication-heavy protocols can bottleneck on bandwidth and increase latency. Precomputation, circuit optimization, and limiting function scope help.
  • HE: Even with modern libraries and hardware acceleration, HE is slower than plaintext compute. Focus on operations that map well (additions, low-degree polynomials) and use batching where possible.
  • Differential privacy: Noise reduces utility; choose epsilon budgets carefully and monitor cumulative privacy loss across queries.

Plan capacity with honest benchmarks on representative data, and budget for extra engineering to handle edge cases like skewed joins and model drift.
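The cumulative privacy-loss tracking mentioned above can be enforced with a simple accountant; this sketch uses basic composition (epsilons add), the most conservative accounting rule:

```python
class PrivacyBudget:
    """Track cumulative epsilon across queries using basic composition."""
    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        """Reject the query before it runs if it would exceed the budget."""
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

budget = PrivacyBudget(total_epsilon=3.0)
blocked = False
for _ in range(3):
    budget.charge(1.0)       # three epsilon-1 queries are allowed
try:
    budget.charge(0.5)       # the fourth would exceed the study budget
except RuntimeError:
    blocked = True
assert blocked and budget.spent == 3.0
```

Advanced accountants (e.g., Rényi-DP composition) permit more queries for the same guarantee, at the cost of more complex bookkeeping.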

Side Channels and Mitigations

Side channels—timing, cache patterns, page faults—remain a genuine risk, especially for enclaves with shared resources. Mitigations include:

  • Code hardening: Use constant-time cryptography, data-oblivious algorithms, and batch operations to reduce data-dependent branching.
  • Configuration: Disable simultaneous multithreading (SMT) when recommended, isolate cores, and pin workloads to reduce sharing with untrusted tenants.
  • Patching: Track microcode and firmware advisories; define minimum versions in attestation policy.
  • Noise and aggregation: Favor aggregate outputs and DP to reduce the value of any residual leakage.

Developer Tooling and Frameworks

Building for confidential computing is easier with the right stack:

  • Enclave SDKs and LibOS: Open Enclave SDK, Intel SGX SDK, Azure’s DCsv5/ECsv5-ready runtimes, AWS Nitro Enclaves SDK; LibOS frameworks like Gramine, SCONE, and Occlum reduce code changes.
  • Confidential containers and VMs: Kata Containers, Confidential Containers (CoCo) for Kubernetes; AMD SEV-SNP and Intel TDX-backed confidential VMs on major clouds.
  • MPC libraries: EMP-toolkit, MP-SPDZ/SPDZ, SCALE-MAMBA for prototyping; commercial platforms provide optimized PSI and aggregation.
  • HE libraries: Microsoft SEAL, PALISADE/OpenFHE, Lattigo, TFHE, and Zama Concrete for FHE; look for GPU/ASIC acceleration options.
  • Policy enforcement: Embed policy checks with Open Policy Agent (OPA/Rego) or AWS Cedar-style authorization, and bind decisions to attestation claims.

Interoperability and Standards

To avoid lock-in and ease multi-cloud collaboration, observe emerging standards:

  • IETF RATS (Remote ATtestation procedureS): Standardizes Evidence formats and Attestation Results, including EAT tokens.
  • Confidential Computing Consortium: Community-driven guidance and reference implementations.
  • Attestation tokens: Vendor-specific formats (e.g., Intel TDX, AMD SEV-SNP, Arm CCA) are converging toward tokenized, verifiable claims.
  • Artifact signing: Use Sigstore, SLSA, and SBOMs to couple code provenance with attestation measurements.

Governance, Compliance, and Contracts

Data Classification and Purpose Limitation

Confidential computing is not a free pass to process anything anywhere. Classify data and limit purposes:

  • Identify PII, PHI, PCI, trade secrets, and model IP; assign processing zones and required controls.
  • Bind purposes to code: Attestation proves a particular binary ran; ensuring that binary enforces policy is crucial.
  • Minimize: Collect only the features necessary; use pseudonymization and salted hashing for linkage keys.
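Salted hashing for linkage keys can be sketched as an HMAC under a per-study salt (names illustrative): tokens join records within one collaboration but are unlinkable across collaborations and irreversible without the salt.

```python
import hashlib
import hmac

def linkage_token(pii: str, study_salt: bytes) -> str:
    """Pseudonymize an identifier: normalize, then HMAC under the study salt."""
    return hmac.new(study_salt, pii.lower().encode(), hashlib.sha256).hexdigest()

salt_a, salt_b = b"study-A-salt", b"study-B-salt"  # hypothetical per-study salts
t1 = linkage_token("alice@example.com", salt_a)
t2 = linkage_token("Alice@Example.com", salt_a)   # normalization -> same token
t3 = linkage_token("alice@example.com", salt_b)   # different study -> unlinkable
assert t1 == t2 and t1 != t3
```

Destroying the salt at study end makes the tokens permanently unlinkable, which pairs naturally with the retention controls described earlier.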

Legal Agreements Enhanced by Cryptography

Contracts define acceptable use; cryptographic controls enforce it:

  • Data use agreements: Reference attested code measurements and permitted query templates. Treat measurement updates as contract amendments.
  • Key escrow and quorum: Require multi-party approval for key release, with auditable thresholds.
  • Geo and retention clauses: Expressed in policy and enforced by attestation (region claims) and time-bound keys.

Audit, Monitoring, and Incident Response

Prepare to demonstrate controls and handle anomalies:

  • Comprehensive logging: Include attestation results, policy IDs, input hashes, query IDs, output signatures, and DP budgets consumed.
  • Continuous verification: Periodically re-attest long-running services and rotate keys.
  • Forensics: While enclaves obscure memory inspection, design for evidence via signed logs, deterministic pipelines, and reproducible builds.

Real-World Scenarios

Pharma-Hospital Outcomes Analysis

Objective: Quantify real-world efficacy of a new therapy across multiple hospitals without exposing patient-level data to the pharma company or between hospitals.

Approach:

  • Each hospital runs a data exporter that maps EHR data to a common model and encrypts it. An in-country or regional TEE service publishes its code measurement and region guarantee.
  • Hospitals verify attestation, then release per-study keys via an attestation-gated KMS.
  • The enclave performs PPRL to link records across sites, computes adjusted outcomes via pre-registered models, and applies DP to cohort outputs.
  • Pharma receives only aggregate results with signed receipts referencing the attestation report, enabling regulatory submissions without patient re-identification risk.

Impact: Faster, more compliant evidence generation with auditable privacy protection.

Consortium AML Risk Scoring

Objective: Detect cross-bank mule accounts and transaction rings without violating secrecy laws or revealing customer data.

Approach:

  • Each bank generates salted hashes of counterparties and risk indicators. A jointly governed enclave service, with a transparency log visible to all banks, computes graph features and shared risk signals.
  • Policy enforces that outputs are per-entity risk scores without underlying transaction details, and that minimum participant thresholds are met.
  • Outputs feed each bank’s internal models; decisions remain with the bank, limiting joint liability.

Impact: Improved detection of cross-institution fraud with demonstrably privacy-preserving mechanics.

Retailer–Publisher Measurement Clean Room

Objective: Attribute conversions while honoring user consent and platform privacy constraints.

Approach:

  • MPC-based PSI maps ad impressions to conversions using ephemeral identifiers and consent filters.
  • TEE orchestrates query execution and applies DP to reach/frequency metrics, enforcing minimum cohort sizes and a privacy budget.
  • Signed, attested outputs flow to dashboards; raw logs never leave owners’ control.

Impact: Trustworthy measurement and reduced compliance overhead compared to bespoke data-sharing arrangements.

Getting Started Roadmap

Readiness Checklist

  • Define threat model: Who are you protecting against? What residual risks are acceptable?
  • Map use cases: Analytics, training, inference, set intersection—start with high-value, low-function-complexity cases.
  • Inventory data: Sources, classifications, jurisdictions, and existing retention policies.
  • Select primitives: TEE vs. MPC vs. HE—or a hybrid—based on latency, scale, and trust.
  • Choose platforms: Cloud confidential offerings or on-prem hardware, plus SDKs and verifiers.
  • Establish governance: Policies, logging, attestation verification, and change management.

Pilot Project Plan

  1. Scope a narrow, valuable question (e.g., private overlap rate and category aggregates).
  2. Draft a data use policy tied to a code measurement and expected outputs.
  3. Stand up an attested environment with automated verification and RA-TLS.
  4. Integrate KMS for attestation-bound key release; implement per-job keys and rotation.
  5. Build queries or model code with guardrails: fixed templates, DP thresholds.
  6. Run on synthetic data first; validate logs, receipts, and reproducibility.
  7. Invite partner data owners, execute with real data, and conduct a joint security review.
  8. Document learnings, performance, and governance steps for scaling.

Metrics for Success

  • Privacy: Zero raw data exposure events; DP budget adherence; cohort threshold violations blocked.
  • Security: Attestation verification rate; time to remediate firmware updates; coverage of signed logs.
  • Performance: Job throughput, latency, and cost per analysis vs. baseline.
  • Adoption: Number of partners onboarded; time-to-onboard; reduction in bespoke legal review cycles.

Common Misconceptions

“We can’t use the cloud if we need confidentiality.”

Confidential VMs, enclaves, and attestation allow you to treat cloud operators as untrusted while still leveraging elasticity and managed services. Controls must be end-to-end: attestation, key release, minimal trust in orchestration, and strong audit trails.

“MPC or FHE makes TEEs obsolete.”

They are complementary. TEEs deliver general-purpose compute with good performance; MPC/HE reduce hardware trust but often at higher cost and complexity. Many practical systems use TEEs for orchestration and control, MPC for specific joins, and HE for simple encrypted math.

“Attestation is just a checkbox.”

Attestation is a living control. You must verify measurements, enforce minimum firmware, track supply-chain provenance, re-attest periodically, and bind key release to attestation results. Treat measurement updates like code deployments with change control.

“Differential privacy destroys utility.”

When applied thoughtfully—with per-query budgets, aggregation design, and alignment to business objectives—DP can yield highly useful results while bounding risk. The key is to plan analytics around cohorts and stable metrics rather than row-level explorations.

Design Patterns and Anti-Patterns

Good Patterns

  • Zero-trust key release: No key leaves KMS unless code, hardware, and region claims match policy.
  • Output-only architecture: Treat raw data as write-only into the enclave; all egress is aggregates with checks.
  • Reproducible builds and verifiable pipelines: Deterministic binaries, signed artifacts, and SBOMs tied to attestation.
  • Layered privacy: Combine TEE isolation with MPC for joins, DP for outputs, and consent filtering upstream.

Anti-Patterns

  • Human-in-the-loop access to enclave shells or memory dumps.
  • Unbounded ad hoc queries; allowlist queries or use a DSL with verifiable constraints.
  • Permanent retention of sealed raw data “for convenience.”
  • Ignoring operational realities like patching and microcode updates that break attestation policy.

Risk Management and Residual Risks

Assessing Residual Risk

No system is perfect. A mature risk assessment includes:

  • Side-channel risk rating per workload; mitigation strategy and acceptance by stakeholders.
  • Supply-chain risk: Firmware and microcode provenance, update cadence, and rollback plans.
  • Dependency risk: Third-party libraries for MPC/HE and their maintenance posture.
  • Model leakage: Even with secure compute, models can memorize data; use DP in training and membership inference tests.

Response Playbooks

  • Attestation drift: If a firmware update changes measurements, block key release until policies are updated and re-verified.
  • Suspected leakage: Freeze outputs, rotate keys, review logs, and rerun on synthetic data to identify anomalies.
  • Partner breach: Quarantine partner keys and revoke their access without disrupting others.

Operationalizing Across the Enterprise

Organizational Roles

  • Platform security team: Owns attestation verification, KMS policies, and transparency logs.
  • Data stewards: Approve datasets, schemas, and privacy budgets.
  • Legal/privacy: Drafts contracts tied to code measurements and governs jurisdictional restrictions.
  • Engineering: Builds pipelines, integrates SDKs, and instruments monitoring.

Change Management

Treat code measurement changes like regulated releases:

  • Propose changes with a diff of policy, SBOM updates, and performance impact.
  • Run canary jobs with consenting partners.
  • Update allowlists and notify partners of new measurements for key release.

Cost Management

  • Right-size hardware: Choose instance types with sufficient confidential memory to avoid paging.
  • Batch workloads: Aggregate small jobs to amortize attestation overhead.
  • Algorithm engineering: Use sketches (HyperLogLog), approximate queries, and HE-friendly transformations to reduce cost.

Security-by-Design for Confidential Applications

Data-Oblivious Algorithms

Design code that avoids data-dependent control flow and memory access patterns. Techniques include sorting networks for joins, fixed-structure tree evaluation, and constant-time cryptographic primitives. While sometimes less efficient, they harden against side channels.
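The branch-free selection pattern at the heart of data-oblivious code looks like the following sketch. Note that a high-level runtime like CPython is not itself constant-time; real hardening applies this pattern in native, constant-time code, but the structure is the same.

```python
# Data-oblivious maximum: the comparison result drives arithmetic, not a
# branch, so control flow does not depend on the secret values.
def oblivious_max(a: int, b: int) -> int:
    swap = int(a < b)            # 0 or 1, used as an arithmetic mask
    return swap * b + (1 - swap) * a

def oblivious_max_list(values: list[int]) -> int:
    result = values[0]
    for v in values[1:]:         # fixed iteration count regardless of data
        result = oblivious_max(result, v)
    return result

assert oblivious_max_list([3, 9, 2, 7]) == 9
```

The same masking idea generalizes to oblivious swaps in sorting networks and to fixed-shape tree evaluation, trading some throughput for data-independent access patterns.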

Minimal Trusted Computing Base (TCB)

Keep the code running inside the TEE small and focused. Push parsing, compression, and complex business logic to the untrusted side when possible, feeding only well-formed inputs into the enclave. Consider DSLs for expressiveness without increasing TCB complexity.

Evidence-Carrying Results

Attach verifiable evidence to outputs: a signature by the TEE key, the hash of the binary, policy version, and a nonce. Downstream consumers can log and verify these as they make decisions, enabling end-to-end assurance chains.
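A hypothetical receipt format might look like the sketch below; HMAC stands in for the TEE's attestation-bound signing key, and the field names are illustrative.

```python
import hashlib
import hmac
import json

# In a real TEE the signature comes from an attestation-bound key pair;
# a shared-key HMAC simulates the signing primitive here.
TEE_SIGNING_KEY = b"enclave-held-key"
BINARY_HASH = hashlib.sha256(b"approved-binary-v3").hexdigest()

def emit_receipt(output: bytes, policy_version: str, nonce: str) -> dict:
    """Bind the output hash to the code measurement, policy, and nonce."""
    receipt = {
        "output_hash": hashlib.sha256(output).hexdigest(),
        "binary_hash": BINARY_HASH,
        "policy": policy_version,
        "nonce": nonce,
    }
    body = json.dumps(receipt, sort_keys=True).encode()
    receipt["sig"] = hmac.new(TEE_SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt: dict) -> bool:
    claimed = receipt.pop("sig")
    body = json.dumps(receipt, sort_keys=True).encode()
    receipt["sig"] = claimed  # restore for downstream consumers
    return hmac.compare_digest(
        claimed, hmac.new(TEE_SIGNING_KEY, body, hashlib.sha256).hexdigest())

r = emit_receipt(b'{"overlap": 142}', policy_version="v3", nonce="a1b2")
assert verify_receipt(r)
```

Logging each receipt in the transparency log described earlier closes the loop from attested code to verifiable output.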

Future Outlook

Hardware Evolution

Next-generation TEEs aim for larger protected memory, better I/O performance, and standardized attestation tokens that are easier to verify. Confidential GPUs and accelerators are emerging, enabling enclave-protected training and inference for deep learning without moving raw data to exposed memory on accelerators.

Usable Privacy and DevEx

Expect higher-level frameworks that hide cryptographic and attestation complexity behind stable APIs, plus policy compilers that produce both enforcement code and human-readable contracts. Observability will mature with verifiable telemetry built into confidential runtimes.

Hybrid Cryptography

Practical deployments will blend TEEs, MPC, and HE intelligently: MPC/HE for linkage and simple math where it’s cheap; TEEs for general-purpose compute and orchestration; DP everywhere outputs leave the system. Hardware acceleration for HE and standardized PSI protocols will expand feasible workloads.

Regulatory Convergence

Regulators are increasingly comfortable with privacy-enhancing technologies. Expect guidance that explicitly recognizes attestation, DP, and secure aggregation as valid controls, reducing the frictions of cross-border research and compliance audits when evidence is robust and machine-verifiable.

Market Dynamics

As confidential computing becomes table stakes, the differentiator shifts to verifiability, policy agility, and partner onboarding speed. Organizations that build reusable confidential collaboration platforms will outpace those negotiating bespoke, slow, and risky data-sharing deals every time they want to learn from combined data.

Taking the Next Step

Confidential computing lets teams collaborate on sensitive data without leaks by pairing TEEs with MPC, HE, and differential privacy under verifiable controls. The real advantage is trust you can prove: attested code and policy-bound keys, minimal TCB, data-oblivious designs, and evidence-carrying results stitched into transparent logs. With clear roles, disciplined change management, and cost-aware engineering, you can operationalize this across partners and onboard new collaborators quickly. Start a focused pilot: inventory candidates, set allowlists, wire KMS to attestation, and track accuracy, cost, and risk as KPIs, then expand. As confidential GPUs, standardized attestations, and friendlier tooling arrive, the organizations investing now will set the pace for secure, data-driven collaboration.
