Pricing
We publish ranges so procurement and technical owners can sanity-check budget before a discovery call. Pricing covers assurance reviews only, not implementing or operating the vendor AI product.
Package definitions and deliverables live on the Services page. This page focuses on what moves cost: category, overlays, evidence depth, interview breadth, and final review depth tied to architecture and implementation conditions.
At a glance
| | Rapid Readiness | Full Deployment | Portfolio Program |
|---|---|---|---|
| Price range | $15K to $25K | $35K to $75K+ | $60K to $150K |
| Timeline | 2 weeks | 4 to 5 weeks | 8 to 12 weeks |
| Tools in scope | 1 tool | 1 tool | 3+ tools |
| Structured review coverage | Critical and high severity only | Full framework across core domains and overlays (as scoped) | Full framework per tool (as scoped) |
How pricing works
Engagements are fixed-price assurance reviews, not time and materials. Scope is driven by implementation category, capability overlays, and how much evidence the deployment requires; final review depth is judged per deployment, not read off a mechanical checklist. Published figures are typical ranges so you know whether you are closer to a $25K, $50K, or six-figure decision before investing calendar time.
- AI-native platforms and direct AI services usually sit higher in the band than lighter productivity-style deployments
- Regulated data and deeper evidence needs increase scope
- Broader access to internal systems, workflows, and integrations increases scope
- Simpler SaaS-with-AI patterns often sit lower in the band when overlays are limited
What final scope depends on
- Implementation category and deployment pattern
- RAG, agentic, self-hosted, and integration complexity
- Data sensitivity and regulated content
- Stakeholder count and interview depth
- Evidence requirements for internal or external review
Capability overlay pricing
When a deployment includes agentic execution, RAG, self-hosted models, or similar patterns, structured overlays add review areas, evidence expectations, and effort on top of the base category price.
Worked example
A legal team is deploying an AI-native SaaS tool for internal policy Q&A. The tool uses a RAG pipeline over confidential HR and legal documents. The deployment is not agentic, not self-hosted, and the integration surface is limited. Regulated data is in scope.
| Component | What it covers | Range |
|---|---|---|
| Full Deployment base | AI-Native SaaS category: full structured framework coverage (as scoped), architecture diagrams, stakeholder interviews | $35K to $50K |
| RAG pipeline overlay | Typically adds ~5 structured review areas covering retrieval boundaries, vector store access, PII in ingestion, stale content | +$3K to $8K |
| Regulated data overlay | Typically adds ~2 structured review areas for data classification controls and regulatory evidence expectations | +$2K to $5K |
| Typical total | Full review with two overlays: expanded structured coverage, four architecture diagrams, gate-ready output (exact depth depends on deployment) | $40K to $63K |
Adding an agentic execution overlay to the same deployment would add another +$5K to $10K and extend the timeline by one to two weeks. Final scope is confirmed after the discovery call.
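The arithmetic behind the worked example is just component-wise addition of price ranges. A minimal sketch (the figures come from the table above; the function and variable names are invented for illustration):

```python
def add_ranges(*ranges):
    """Sum (low, high) price ranges component-wise."""
    low = sum(r[0] for r in ranges)
    high = sum(r[1] for r in ranges)
    return (low, high)

base = (35_000, 50_000)              # Full Deployment base, AI-Native SaaS
rag_overlay = (3_000, 8_000)         # RAG pipeline overlay
regulated_overlay = (2_000, 5_000)   # Regulated data overlay

total = add_ranges(base, rag_overlay, regulated_overlay)
print(total)  # (40000, 63000) -> the $40K to $63K typical total

# Adding the agentic execution overlay mentioned above:
agentic_overlay = (5_000, 10_000)
print(add_ranges(total, agentic_overlay))  # (45000, 73000)
```

The low and high ends are summed independently, so the final band widens as overlays stack; actual scope within that band is confirmed after the discovery call.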
Next step: a short discovery call
We use it to confirm deployment fit, outline review scope, and match you to the right packaged offer. No engagement starts until you decide to proceed.