AI Agents and Chatbots Built for Regulated Industries
Petronella Technology Group builds AI agents, AI chatbots, and voice AI assistants that actually work inside HIPAA, CMMC, and finance environments. We have a fleet of agents running on this site that you can talk to right now. Penny answers our main line. Peter chats from the corner of this page. Boutique competitors do not have that level of on-site proof.
What we deliver
- AI chatbot development services for lead intake, customer support, internal Q and A, and document Q and A. Web widgets, in-app chat, and embedded assistants on your stack.
- Enterprise AI chatbot development with single sign-on, role-based access control, audit logging, and integration to CRM, ERP, EHR, ticketing, and identity providers.
- Voice AI agents (AI virtual assistants) for inbound phone, outbound qualification, calendar booking, and after-hours triage. Penny on our main line is the live reference.
- LLM-powered application development for retrieval-augmented generation, document workflows, and domain-specific copilots. Built on private infrastructure when the data class requires it.
- Autonomous AI agents that plan, use tools, and act across your business systems with action-level guardrails and human-in-the-loop approval gates.
- Regulated-vertical engineering. HIPAA Business Associate Agreement, CMMC Levels 1, 2, and 3 environments, and audit-ready evidence baked into every deployment.
- Pricing scoped on a discovery call. AI chatbot and agent engagements vary widely by scope, integrations, data class, and deployment target. We do not publish a fixed price for custom builds.
Our AI Agents Are Already Talking to People
Most AI chatbot development companies show you a video. We give you phone numbers and a chat widget. Every agent below was built in-house by Petronella Technology Group.
The fastest way to evaluate a custom AI chatbot development partner is to use one of their products. Our fleet of voice agents and chatbots is in production on this domain and our sister site. Some of these are live customer-facing systems handling real interactions; others are working prototypes we built to demonstrate capability across different use cases. We label which is which because honesty matters more than marketing on a regulated-vertical project.
Penny Live
Voice AI agent on our main inbound line. Greets callers, qualifies, books on the Petronella calendar. Real production traffic, real bookings.
Call 919-348-4912
Peter Live
Chat AI on petronellatech.com. Answers visitor questions about services, compliance, and engagement model. Built by Petronella, not a SaaS plugin.
Open chat (bottom-right)
ComplyBot Live
Compliance chatbot on our sister site. Answers CMMC, HIPAA, and SOC 2 questions in plain English. Functional demo, no email gate.
Try ComplyBot at petronella.ai
Eve Demo
Voice AI prototype for emergency response. Demonstrates incident triage and escalation flow. Working prototype, not a live SOC line.
Joe Demo
Voice AI prototype for scheduling assistance. Demonstrates calendar booking and availability negotiation patterns.
Harper Demo
Voice AI prototype for digital safety questions. Demonstrates regulated-vertical Q and A, risk posture explanation, plain-English compliance answers.
Alex Demo
Voice AI prototype for outbound discovery. Demonstrates how a voice agent introduces a service offering and books a follow-up call.
Bob and Paul Demo
Personal voice AI assistants built for two members of the Petronella team. Demonstrates per-person digital twin patterns: scheduling, follow-up, executive coordination.
The point is not that every agent above is in production. The point is that we have built and operated this many AI agents on our own infrastructure. When we scope an AI chatbot or voice agent for your team, we are not extrapolating from a vendor demo. We are quoting from the engineering experience of running the same patterns ourselves.
A 12-second clip of Penny, our voice AI receptionist, currently handling inbound calls on our main line:
AI Chatbot Development Services
Custom AI chatbots for the workflows where a generic SaaS bot is not enough. Built for regulated data, integrated to your systems, deployed where you control them.
An AI chatbot in 2026 is not what it was three years ago. The bar for a useful production chatbot has moved well past scripted FAQ flows and decision trees. Modern AI chatbot development pairs a foundation model with retrieval against your own data, integrations into the systems you actually run on, escalation paths that hand off to a human gracefully, and audit logging that can satisfy a HIPAA, CMMC, or SOC 2 reviewer. Petronella Technology Group builds chatbots in that bracket. We do not build generic widget bots that copy and paste a public model behind a chat icon.
The use cases we build for
The same underlying technology serves several distinct workflows. The use case shapes the data model, the retrieval strategy, the escalation path, and the deployment topology. Common scopes we build for include:
- Lead intake chatbots on marketing or services sites that qualify visitors, collect intent, hand structured data to your CRM, and book a discovery call when the visitor is ready. Penny is the voice version of this pattern; the chat equivalent runs on a website widget.
- Customer support chatbots grounded in your knowledge base, ticketing history, and product documentation. Resolves tier-one questions, escalates to a human when confidence drops, and logs every conversation for QA and training.
- Internal Q and A chatbots for employees: HR policy, IT runbooks, security playbooks, vendor contracts, internal wikis. Reduces the burden on a small ops team by answering the same fifteen questions hundreds of times a week.
- Document Q and A chatbots grounded in a specific corpus: policies, contracts, controlled unclassified information document sets, clinical guidelines, engineering standards. Returns answers with source citations, not vibes.
- Sales-handoff chatbots that gather requirements, prepare a structured brief, and pass the conversation to a human salesperson with full context preserved.
- Compliance-aware chatbots for healthcare, defense, and finance. Refuses to answer questions outside scope, marks personally identifiable information, and produces an audit trail an examiner can read.
Engagement model for AI chatbot development
Custom AI chatbot engagements at Petronella follow the same shape as our other AI work. We start with a discovery conversation to scope the use case, the data, and the integration surface. We then move through a small prototype that confirms the approach against your real or representative content, an instrumented build phase, and a controlled rollout that watches the production telemetry before scope expands. A typical chatbot project ships its first usable version inside a few weeks; complex enterprise chatbots with multiple integrations and tight regulatory scope take longer. We give every engagement a written milestone schedule before work begins.
What you get out of an engagement
- A working chatbot deployed where you specified: web widget, in-app component, internal Slack or Teams interface, or embedded into an existing application.
- Source code, prompts, evaluation harness, and any fine-tuned model artifacts. You own them under a standard engagement letter.
- Telemetry covering latency, throughput, conversation completion rate, escalation rate, accuracy on a held-out evaluation set, and cost per conversation.
- Audit logging that captures the prompt, the retrieval set, the response, and the user identity, suitable for HIPAA, CMMC, SOC 2, and PCI DSS evidence requirements.
- An operations runbook covering how to update the knowledge base, how to add a new intent, how to handle a bad answer, and how to roll back a problematic prompt change.
- An optional managed-operations engagement where Petronella keeps the chatbot tuned, the model updated, and the telemetry monitored on an ongoing basis.
What separates a real AI chatbot from a public-API demo
Five things distinguish production AI chatbot development from gluing a public model behind a chat icon. First, data control: the chatbot runs against your data, in an environment you can audit, with prompts and responses logged where your compliance team can read them. Second, escalation: the chatbot recognizes its own limits and hands off cleanly to a human, preserving conversation context. Third, evaluation: the chatbot is graded against a held-out set of representative questions before launch and re-graded as the knowledge base evolves. Fourth, telemetry: someone is paid to watch the dashboards and respond to drift. Fifth, infrastructure: the deployment is hosted somewhere a Business Associate Agreement is meaningful and a CMMC enclave is technically feasible. A consumer chatbot demo on a public API fails most of those tests on day one.
A short overview of how AI chatbots are reshaping business communication:
Enterprise AI Chatbot Development
When the chatbot has to integrate to identity providers, scale to thousands of concurrent conversations, and survive an audit, the engineering changes substantially.
Enterprise AI chatbot development is not just a bigger version of an SMB chatbot. The architecture, the deployment topology, and the operating model differ in concrete ways. The buyer is an enterprise architect or a CISO, not a marketing manager. The requirements list includes single sign-on, role-based access control, audit logging that survives discovery, multi-system integration, change management, and observability that hooks into the broader enterprise monitoring stack. None of this is optional, and none of it is comfortably handled by the off-the-shelf chatbot SaaS that works fine for a small business.
Single sign-on and identity propagation
An enterprise chatbot has to know who it is talking to. That identity has to come from your existing identity provider, not from a custom username and password. The conversation has to enforce role-based access control on what data the chatbot retrieves and what actions it can take. A user in the finance group sees finance documents; a user in operations does not. We build chatbots against Microsoft Entra ID, Okta, Ping, and other enterprise identity providers, with token validation on every request and identity-scoped retrieval as a first-class architectural concern.
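To make "identity-scoped retrieval as a first-class architectural concern" concrete, here is a minimal sketch of the pattern: every document carries an access tag, and the caller's validated group claims filter the corpus before any ranking happens, so out-of-scope documents never reach the model. The corpus, field names, and group labels are illustrative assumptions, not our production API.

```python
# Sketch of identity-scoped retrieval: access filtering happens BEFORE
# ranking, driven by group claims already validated from the identity
# provider's token. Corpus and names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    doc_id: str
    text: str
    allowed_groups: frozenset  # groups permitted to retrieve this doc

CORPUS = [
    Doc("fin-001", "Q3 budget variance report", frozenset({"finance"})),
    Doc("ops-014", "Warehouse runbook", frozenset({"operations"})),
    Doc("pub-002", "Company holiday schedule", frozenset({"finance", "operations"})),
]

def retrieve(query: str, user_groups: set) -> list:
    """Return IDs of matching documents the caller is allowed to see."""
    terms = query.lower().split()
    # Access filter first: a user never ranks against invisible documents.
    visible = [d for d in CORPUS if d.allowed_groups & user_groups]
    # A real system would rank by embedding similarity; keyword match
    # stands in for ranking here.
    hits = [d for d in visible if any(t in d.text.lower() for t in terms)]
    return [d.doc_id for d in hits]
```

A finance user asking about the budget sees `fin-001`; an operations user asking the same question sees nothing, because the document was filtered out before matching ever ran.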
Multi-system integration
Enterprise chatbots usually have to read and sometimes write across several systems: a CRM, a ticketing platform, an ERP, an EHR, a document management system, an HRIS. Each integration has its own authentication, rate limit, schema quirks, and failure modes. We build the integration layer with circuit breakers, retries, idempotency keys where the underlying system supports them, and graceful degradation when a downstream system is unavailable. The chatbot keeps working when one of its integrations is down; it just answers fewer questions until the integration is healthy again.
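The degradation pattern above can be sketched in a few lines: a downstream call is wrapped in retries and a simple failure-counting circuit breaker, and when the breaker is open the chatbot falls back to answering from its remaining sources instead of failing the conversation. The thresholds, timings, and stub functions are illustrative assumptions.

```python
# Sketch of retries + circuit breaker + graceful degradation for one
# downstream integration. Thresholds are illustrative, not prescriptive.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after  # seconds before a probe is allowed
        self.opened_at = None

    def is_open(self):
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None  # half-open: let one probe through
            self.failures = 0
            return False
        return True

    def record(self, ok):
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

def call_with_degradation(breaker, crm_lookup, fallback, attempts=2):
    """Try the integration; on repeated failure, degrade gracefully."""
    if breaker.is_open():
        return fallback()  # integration unhealthy: answer without it
    for _ in range(attempts):
        try:
            result = crm_lookup()
            breaker.record(ok=True)
            return result
        except ConnectionError:
            breaker.record(ok=False)
    return fallback()
```

While the breaker is open, the chatbot "answers fewer questions" exactly as described: every request routes straight to the fallback until the reset window elapses and a probe succeeds.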
Audit logging and compliance evidence
Enterprise chatbots produce evidence. Every conversation is logged with the user identity, the retrieved context, the model version, the prompt, the response, and the timestamp. Logs are written to an immutable store and retained per the policy your compliance team specifies. When the chatbot escalates or refuses, those events are first-class log entries, not afterthoughts. When an auditor asks "show me every conversation that touched protected health information last quarter," the answer is a query against the log store, not a manual reconstruction.
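One way the per-conversation evidence described above can be shaped: each turn becomes a single append-only record carrying identity, the retrieval set, the model version, the prompt, the response, and a timestamp, hash-chained to the previous record so tampering is detectable. Field names and the chaining scheme are an illustrative sketch, not a specification of our log store.

```python
# Sketch of one audit log entry per conversation turn. Records are
# hash-chained: each carries the previous record's hash, so any edit
# breaks the chain. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prev_hash, user_id, retrieved_ids, model_version,
                 prompt, response, event="answer"):
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "retrieved": sorted(retrieved_ids),   # which documents were used
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        "event": event,        # "answer", "escalation", or "refusal"
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    body["hash"] = digest
    return body
```

Because escalations and refusals are first-class `event` values, the auditor's "every conversation that touched PHI last quarter" question reduces to a filter over these records.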
Scaling, observability, and change management
Enterprise deployments need to handle peak concurrency without melting and without spending a multiple of the budget. We build chatbots with horizontal scale-out at the application layer, response caching where the use case allows, and a model routing layer that can fall back from a premium model to a less expensive model when the question is simple. Observability hooks into Prometheus, Grafana, Datadog, or whatever your stack uses, surfacing latency distributions, throughput, error rates, model token usage, and cost. Change management is treated as a first-class concern: prompt changes, knowledge-base updates, and model upgrades go through your standard change-control process.
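The model-routing layer mentioned above can be as simple as a heuristic gate in front of the model call: routine lookups go to a cheap tier, everything else to the premium tier. Production routers typically use a small classifier model or token-count signals; the marker list and model names here are illustrative assumptions.

```python
# Sketch of a cost-aware model router: short, routine questions route to
# a cheaper model tier; everything else gets the premium model. The
# heuristic and tier names are illustrative.
SIMPLE_MARKERS = ("hours", "phone", "address", "location")

def route_model(question: str) -> str:
    q = question.lower()
    if len(q.split()) <= 12 and any(m in q for m in SIMPLE_MARKERS):
        return "small-model"   # cheap tier for routine lookups
    return "premium-model"     # full-capability tier for everything else
```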
Voice AI Agents and AI Virtual Assistants
Voice AI agents extend the chatbot pattern to the phone. We build them. Penny is the live reference on our main line.
The same engineering that produces a competent chatbot produces a competent voice AI agent. The differences are in the input and output channels: speech-to-text on the way in, text-to-speech on the way out, telephony integration to ring on a real number, and a calendar integration when the agent's job is to book time. The reasoning, the retrieval, the integration, the escalation, the audit logging, and the evaluation discipline are all the same.
The reason voice AI agents are now a real category, not a science experiment, is that response latency has dropped to the point where callers no longer notice they are talking to a machine. A well-engineered voice agent can hold a conversation that does not feel scripted, ask clarifying questions naturally, and hand off to a human when the conversation calls for one. We have built voice agents for inbound qualification, outbound discovery, after-hours triage, and calendar booking. We have built them for our own line, and we have built personalized digital-twin voice assistants for individual team members.
Penny: the live reference
If you call our main number, Penny answers. She is a voice AI agent we built to handle inbound qualification on the Petronella line. She greets callers, qualifies the conversation, and books on the Petronella calendar when there is a fit. Penny is not a vendor demo and she is not a stunt. She is a working voice AI agent in continuous production on our main inbound line. Calling her is the fastest way to evaluate the engineering quality we ship.
The deeper point is that we run Petronella's own inbound on this technology. We are not selling something we have not deployed for ourselves. Most boutique AI consultancies cannot say that.
Voice AI use cases we build
- Inbound qualification and booking for sales lines, intake lines, and service lines. The voice agent qualifies, gathers structured information, and books the right human's calendar.
- Outbound discovery for warm follow-up to people who have already opted in. The voice agent introduces a service, gathers interest, and either books or releases the call cleanly.
- After-hours triage for teams that cannot staff a 24/7 phone line. The voice agent handles the routine contact and escalates the urgent conversation to a human pager.
- Internal voice copilots for field staff who need a hands-free interface to a specific knowledge base while they are on a job site.
- Personalized digital-twin assistants for individual team members who need a voice channel for scheduling, follow-up, and routine coordination.
We frame voice AI agents the same way we frame chatbots: scoped on a discovery call, prototyped against representative scenarios, deployed with telemetry, and operated with a human-in-the-loop escalation path. The phone is just another input and output channel.
LLM-Powered Application Development
Beyond chat and voice, the same foundation models power a broader class of applications. We build those too.
Not every useful application of a large language model is a chat interface. Many of the highest-leverage AI applications we build are background services and structured workflows: classify a document, route it to the right reviewer; extract twenty fields from a contract and write them to a record; summarize the day's incident logs and write a briefing; convert a freeform request into a structured ticket; generate a draft response for a human to approve. These applications use the same engineering patterns as chatbots, but they live inside business workflows rather than in front of a user.
Build patterns we use
- Retrieval-augmented generation against a curated corpus. The model answers with grounded citations, not from training-data memory.
- Structured output using JSON schema or function calling so downstream systems can consume the model's output reliably.
- Multi-step workflows where the model plans, calls tools, evaluates intermediate results, and produces a final artifact under a human review gate.
- Light fine-tuning when the use case justifies the data preparation and operational overhead. Most use cases do not need it; retrieval and prompting solve more problems than fine-tuning does.
- Hybrid hosting with the routing layer inside your boundary and the model layer either inside the boundary (for regulated data) or behind a private API (for non-sensitive workflows).
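The structured-output pattern in the list above is worth showing concretely: the model is asked for JSON matching a small schema, and the application validates that JSON before any downstream system consumes it. The contract fields and the hand-rolled validator are illustrative; a production build would typically use a JSON Schema library or the provider's function-calling support.

```python
# Sketch of validated structured output: reject or normalize model JSON
# before a downstream record is written. Schema is illustrative.
import json

CONTRACT_FIELDS = {"party": str, "effective_date": str, "term_months": int}

def parse_extraction(model_output: str) -> dict:
    """Validate model JSON against the expected fields, or raise."""
    data = json.loads(model_output)
    for field, expected_type in CONTRACT_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for field: {field}")
    # Return only the schema fields, dropping anything extra the model added.
    return {k: data[k] for k in CONTRACT_FIELDS}
```

The point of the gate is that a malformed or hallucinated extraction fails loudly at the boundary instead of silently corrupting a CRM or ERP record.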
Deployment patterns we use
For non-sensitive workflows, we deploy LLM applications on standard cloud infrastructure (your cloud, our cloud, or a hosted private API) with the same observability and change-management discipline as any other production service. For regulated workflows, we deploy the model and the application on Petronella's private AI cluster in Raleigh, NC, with the data path documented for HIPAA Business Associate Agreement coverage or CMMC enclave alignment. Either way, the application is a real piece of software with real tests, real observability, and a real on-call story.
For more on the infrastructure side, see our private AI solutions hub. For prototyping discipline before you scale up an LLM application, read our AI prototyping buyer's guide.
A short overview of the broader Petronella AI practice for context:
AI Agent vs AI Chatbot vs AI Virtual Assistant
The three terms get used interchangeably and they should not be. Each describes a different shape of system, with a different scope of action and a different design center.
The fastest version: an AI chatbot answers questions in a conversation. An AI virtual assistant answers questions and performs a small set of structured tasks across a defined scope. An AI agent pursues a goal autonomously by reasoning, planning, and using tools across multiple systems. A chatbot is reactive. An assistant is task-routed. An agent is goal-driven. Confusing them leads to scoping a chatbot when you needed an agent, or scoping an agent when a chatbot would have been enough.
| Concept | Primary mode | Scope of action | Typical interface | Example |
|---|---|---|---|---|
| AI chatbot | Reactive conversation | Answer questions, sometimes call one tool | Web widget, in-app chat, Slack or Teams | Customer support bot, document Q and A bot |
| AI virtual assistant | Conversation plus task execution | Answer questions and perform a defined set of tasks across a defined surface | Voice phone, web widget, mobile app | Voice receptionist that books a calendar, scheduling assistant |
| AI agent | Goal-driven autonomous execution | Plan multi-step actions, use tools, adapt to results, operate across systems | Background service, internal copilot, workflow automation | Security alert triage agent, contract review agent, multi-agent research pipeline |
The boundary between the three is fuzzy in practice. A sufficiently capable virtual assistant looks a lot like a small agent. An agent that primarily talks back and forth with a user looks a lot like a chatbot. Petronella scopes by the actual capability the use case needs, not by the marketing label. Where a chatbot is enough, we build a chatbot. Where the workflow demands planning, tool use, and adaptive execution, we build an agent.
How We Build Your AI Chatbot or Agent
A four-stage path from "we have an idea" to "it is in production and someone is responsible for it." Pricing is scoped on a discovery call.
Stage 1: Discovery and scoping
A working session, usually one to two hours, that translates "we want a chatbot" or "we want a voice agent" into a concrete scope. We confirm the use case, the user, the data sources, the integration targets, the regulatory frame (HIPAA, CMMC L1, L2, or L3, NIST 800-171, GLBA, or other), the success criteria in measurable terms, and the deployment target. The output is a written scope document and a scoped quote. Discovery is fixed-fee or complimentary depending on engagement size.
Stage 2: Prototype
A short engagement that builds the simplest version of the chatbot or agent that exercises the real risks: real data class, real integration to at least one upstream and one downstream system, real evaluation against a held-out set of inputs. The prototype produces a written go or no-go and a sizing artifact for the production build. For the discipline behind this stage, see our AI prototyping guide and 3-stage methodology.
Stage 3: Production build
The full build. Code, prompts, integrations, security review, observability, audit logging, deployment automation, runbook, and a controlled rollout. We start in shadow mode where the chatbot or agent runs alongside an existing process and its outputs are reviewed but not acted on. We then move to limited rollout with a small user cohort, then to broader release as the telemetry confirms the system is meeting its targets.
Stage 4: Operate
The chatbot or agent is in production and someone is responsible for it. That can be your team, with us providing handoff documentation and on-call escalation; it can be Petronella under a managed-operations engagement; or it can be a hybrid where we own the model and prompt layer and you own the application and integrations. The operate stage is where most AI projects either earn their keep or quietly drift into irrelevance, so the operating model is a first-class part of the scope, not an afterthought.
Timeline ranges
A focused chatbot against well-prepared data and a single integration ships its first usable version in a few weeks. An enterprise chatbot with single sign-on, role-based access control, and three or four backend integrations runs longer. A voice AI agent with a calendar integration and a hand-off to a human is in the same range as a focused chatbot. A full multi-agent system or a heavily integrated LLM application is a multi-month engagement. Every Petronella scope comes with a written milestone schedule before work begins.
Pricing
We do not publish a fixed price for custom AI chatbot, voice agent, or agent development because the cost depends on data state, integration complexity, regulatory scope, deployment target, and the operate-stage commitment. The discovery call produces a scoped quote. For productized AI starter packages on the consumer-facing side of our practice, see petronella.ai. For an enterprise scope, contact us or book a discovery call.
AI Chatbot and Agent Use Cases by Industry
Regulated verticals are where the engineering decisions matter most. The patterns below are where Petronella's compliance grounding shows up.
Healthcare and HIPAA-Covered Entities
HIPAA AI chatbots for patient intake, appointment reminders, plain-English insurance Q and A, and triage routing. Built under a signed Business Associate Agreement, on infrastructure aligned to the HIPAA Security Rule. We do not run protected health information through a public API.
Defense and Aerospace
CMMC AI chatbots and agents for technical document Q and A, controlled unclassified information workflows, and internal compliance copilots. Architected for CMMC Levels 1, 2, and 3 environments under a CMMC-aligned engagement letter, with audit logging an assessor can read.
Finance and GLBA
Compliance-aware AI assistants for client-facing FAQ, internal policy lookup, and document Q and A. Refuses out-of-scope questions, marks sensitive content, produces evidence for examiner review.
Legal and Privileged Workflows
Privilege-aware AI assistants for matter intake, contract Q and A, and internal knowledge lookup. Built with prompt and retrieval boundaries that respect attorney-client privilege and a clear policy on what data leaves the boundary.
Engineering and AEC Firms
Technical documentation AI chatbots grounded in your standards, codes, and project archives. Internal Q and A copilots for engineers who need a fast answer from a deep specification corpus.
MSP and Channel Partners
Wholesale AI chatbot and voice agent builds for MSP partners who want to ship AI to their own clients without building the engineering practice. White-label or co-branded delivery under our partner program.
The common pattern across regulated verticals is that the conversation has to happen inside a boundary you control. Public AI APIs are generally not the right substrate for protected health information, controlled unclassified information, or attorney-client privileged content. Petronella defaults to private deployment on our cluster in Raleigh, NC, when the data class requires it, and to the most cost-effective hosted option when the data class allows it. The decision is documented as part of the scope, not assumed.
Why Choose Petronella for AI Chatbot and Agent Development
Most boutique AI development companies cannot point at a fleet of agents they have built and operated. We can. That is the proof.
The other thing worth saying out loud: when you call us with an AI chatbot or voice agent project, you are talking to an engineering practice that runs its own operations on the same technology. Penny qualifies our inbound. Peter handles our website chat. ComplyBot answers compliance questions on our sister site. Our content engine, our outbound sales pipeline, and our internal automation all use the same building blocks we ship to clients. We are not extrapolating from a vendor demo. We are quoting from operational experience.
AI Chatbot and Agent Development FAQ
The questions buyers ask most often when scoping AI chatbot, voice agent, or agent development.
What does an AI chatbot development engagement cost?
Custom AI chatbot development is scoped on a discovery call because the cost depends on the use case, the data state, the integration surface, the regulatory frame, and the deployment target. A focused chatbot against well-prepared data and a single integration is a smaller engagement than an enterprise chatbot with single sign-on, role-based access control, four backend integrations, and audit logging for a regulated vertical. We give every scope a written quote before any work starts. Contact us or book a discovery call to get a number for your specific scope.
Is the chatbot hosted on your infrastructure or ours?
Either, depending on the data class and your preference. For regulated workloads (HIPAA, CMMC, GLBA, ITAR), we default to deployment on Petronella's private AI cluster in Raleigh, NC, with the data path documented for the relevant compliance framework. For non-sensitive workloads, we can deploy in your cloud, our cloud, or behind a private hosted API. The decision is documented in the scope; it is not made by accident.
Can we use this for HIPAA-protected health information?
Yes, under a signed Business Associate Agreement, and only on infrastructure aligned to the HIPAA Security Rule. We do not run protected health information through a public AI API at any stage of the engagement. Audit logging captures every conversation that touches PHI. The deployment topology is documented for your compliance team's review before launch.
Do you support CMMC L1, L2, and L3 environments?
Yes, all three CMMC levels. CMMC L1 chatbots and agents run inside basic safeguards aligned to FAR 52.204-21. CMMC L2 deployments run inside an enclave aligned to NIST SP 800-171. CMMC L3 deployments operate against the higher bar set by NIST SP 800-172. We are CMMC-AB Registered Provider Organization #1449, the whole team is CMMC-RP, and we sign a CMMC-aligned engagement letter before any controlled unclassified information enters the project boundary.
Can the chatbot escalate to a human?
Yes, and it should. Every production AI chatbot we build includes an escalation path that hands off to a human cleanly, preserving conversation context. The chatbot recognizes its own limits using confidence signals, intent classification, and explicit user requests for a human. Voice agents like Penny escalate by booking a human's calendar or transferring to a live line.
What is the difference between an AI agent and an AI chatbot?
A chatbot answers questions in a conversation. An AI agent pursues an objective autonomously by reasoning about goals, planning multi-step actions, using tools to interact with external systems, and adapting based on results. A chatbot is reactive. An agent is goal-driven. The same underlying model can power either; the difference is in the surrounding scaffolding. See the comparison table earlier on this page for the full breakdown including AI virtual assistants.
Can the chatbot integrate with our CRM, EHR, or ERP?
Yes. Integration is the rule, not the exception, for production chatbots. We build against Salesforce, HubSpot, ServiceNow, Microsoft Dynamics, common EHR systems, and custom internal systems. Each integration includes authentication, retries, circuit breakers, and idempotency where the underlying system supports it. The chatbot keeps working when one integration is down; it just answers fewer questions until the integration recovers.
Do you build voice chatbots and AI phone agents?
Yes. Voice AI agents are a major part of our practice. Penny on our main inbound line is the live reference; she handles real callers, qualifies them, and books on the Petronella calendar. We build voice agents for inbound qualification, outbound discovery, after-hours triage, calendar booking, and personalized digital-twin assistants for individual team members. Call 919-348-4912 to evaluate the engineering quality directly.
How long does AI chatbot development take?
A focused chatbot against well-prepared data and a single integration ships its first usable version in a few weeks. An enterprise chatbot with single sign-on, role-based access control, several backend integrations, and a regulated-vertical compliance scope runs longer. A voice AI agent with a calendar integration and a clean human handoff is in the same range as a focused chatbot. Every Petronella engagement gets a written milestone schedule before work starts.
Can we customize the chatbot's voice, personality, or persona?
Yes. Persona, tone, and branding are part of the scope. Voice agents can use a custom voice; chatbots can use a custom name, avatar, and conversation style. We avoid making the persona claim more than the engineering supports: a voice agent that sounds confident has to actually be competent at the task, or the persona becomes a liability. Persona work happens in the prompt, the prompt-engineering test set, and (for voice) the voice synthesis configuration.
What if the chatbot says something wrong?
Three layers of defense. First, the chatbot is graded against a held-out evaluation set before launch and re-graded as the knowledge base or prompts change; bad answers usually surface in evaluation. Second, the chatbot recognizes its own confidence and escalates to a human or refuses when confidence drops. Third, every conversation is logged so a quality review process can catch problem patterns and feed fixes back into prompts, retrieval, or the evaluation set. No system is perfect; the goal is fast detection and fast correction.
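The first layer, grading against a held-out set, can be sketched as a simple gate: each evaluation question carries required key phrases, an answer passes when it contains them, and launch is blocked below a score threshold. The toy grading rule is an illustrative assumption; real harnesses use richer scoring (exact-match, model-graded rubrics, citation checks).

```python
# Sketch of a held-out evaluation gate: grade the chatbot's answers
# against expected key phrases and block launch below a threshold.
# The phrase-containment rule is a stand-in for richer scoring.
def grade(answer_fn, eval_set, pass_threshold=0.9):
    """Return (score, launch_ok) for a chatbot over a held-out set."""
    passed = 0
    for question, required_phrases in eval_set:
        answer = answer_fn(question).lower()
        if all(p.lower() in answer for p in required_phrases):
            passed += 1
    score = passed / len(eval_set)
    return score, score >= pass_threshold
```

Re-running this same gate after every prompt or knowledge-base change is what turns "bad answers usually surface in evaluation" from a hope into a process.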
Do you handle production hosting and ongoing operations?
Yes, through a managed-operations engagement. Petronella keeps the chatbot tuned, the model updated, the prompts versioned, the telemetry monitored, and the on-call response staffed. The alternative is a clean handoff to your team with documentation, runbooks, and an on-demand support agreement. Most clients land on a hybrid: Petronella owns the AI layer (model, prompts, retrieval) and the client owns the application and integrations.
Do we own the chatbot code, prompts, and any fine-tuned models?
Yes. Custom-built chatbot code, prompts, evaluation harnesses, and any fine-tuned model artifacts are your property under our standard engagement letter. We do not retain rights to the work product and we do not use your data to train any external model. Specific intellectual property terms are stated in the engagement letter and reviewed before any work begins.
How do we evaluate an AI chatbot quote from a vendor?
Six questions. One, where does the data live during inference and is that location appropriate for the data class. Two, what does the audit log capture and how long is it retained. Three, how is the chatbot evaluated before launch and on what cadence after launch. Four, what is the human escalation path and what triggers it. Five, who is responsible for operating the chatbot after deployment and on what response-time commitment. Six, can the vendor point at a chatbot or voice agent they themselves operate in production. A vendor that scores well on all six is decision-ready.
Talk to Penny, or Talk to Us
Two ways to evaluate Petronella's AI chatbot and agent engineering. Call our line and meet Penny, our live voice AI agent. Or skip the demo and book a scoped discovery call with the human team that built her.
Talk to Penny right now
Call our main line. Penny answers, qualifies, and can book your discovery call directly on the Petronella calendar. The fastest way to evaluate the engineering quality.
Call 919-348-4912 →
Book with the human team
Skip the AI front door and book a scoped discovery conversation with the team that builds the chatbots, voice agents, and AI applications.
Book a discovery call →