Cami Roadmap
Where we're headed: validation, speed, and a Cami that thinks natively.
Cami-1 — Validation & Proof (In progress)
Ship-ready Legal and Clinical modes, with evidence that they work in the field and on benchmarks.
- Validation and verification script — Formal validation of core flows (planning, synthesis, MIN, Human+) so behavior is reproducible and regressions are caught before release; a minimal reproducibility check is sketched after this list.
- Field test: 10 lawyers, 10 clinicians, 2 weeks — Real-world usage; feedback on accuracy, tone, disclaimers, and Scribe integration.
- Beat benchmark targets — MMLU / BBH, HumanEval, EQ-Bench 3 (Human+ currently top 5, +132 Elo lift on Sonnet 4.5).
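As a minimal sketch of what one reproducibility check in that validation script could look like, assuming a plain Python test harness: `run_planning_flow`, its seed parameter, and the plan structure are hypothetical stand-ins for the real pipeline, not the shipped script.

```python
import json

def run_planning_flow(prompt: str, seed: int = 0) -> dict:
    # Hypothetical stand-in for the real planning flow; the production check
    # would call the actual planner with a pinned config and seed.
    return {"prompt": prompt, "seed": seed,
            "steps": ["clarify goal", "gather sources", "draft plan"]}

def test_planning_flow_is_reproducible():
    first = run_planning_flow("Summarize the contract's termination risks", seed=42)
    second = run_planning_flow("Summarize the contract's termination risks", seed=42)
    # Same input and seed should produce byte-identical structured output,
    # so any regression shows up as a diff before release.
    assert json.dumps(first, sort_keys=True) == json.dumps(second, sort_keys=True)

if __name__ == "__main__":
    test_planning_flow_is_reproducible()
    print("planning flow reproducibility check passed")
```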
Deliverable: Validated core, field evidence from legal/clinical, benchmark results that meet or beat targets.
Cami-2 — Speed, New Services, CS B2B (Building now)
Hyper-synthesis, three new service modes (Cami Research, Cami Tutor, Cami Money), and Customer Service as a B2B platform.
- Hyper-synthesis — Continuous, parallel synthesis at 10-20 s perceived latency, built on a Hyper-Parallel Dispatcher, Real-Time Conflict Bus, Coherence Scoring, and Streaming; a toy dispatch-and-score sketch follows this list.
- Cami Research — Multi-LLM research with novel thinking, bias detection, and conflict resolution. Not retrieval — original analysis.
- Cami Tutor — Scaffolding-first adaptive tutoring. Human+ reads pace and frustration; MIN remembers how you learn.
- Cami Money — Prompt-driven financial rail (ZDR & AEGR): talk money, connect accounts, hold, exchange, settle via natural language.
- Agentic coding — Cami owns planning, tool use, and execution; LLMs remain knowledge sources.
- CS B2B platform launch — Fast-path API (<3 s), embeddable widget, multi-tenant knowledge base, 3-tier model (API / widget / enterprise). A sample fast-path request is sketched after the deliverable below.
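To make the hyper-synthesis idea concrete, here is a toy asyncio sketch under stated assumptions: several model calls are dispatched in parallel, each draft gets a coherence score, and the current best draft is streamed as results arrive. `query_model`, `coherence_score`, and the scoring heuristic are invented for illustration; they are not the production Dispatcher, Conflict Bus, or Coherence Scoring components.

```python
import asyncio

async def query_model(name: str, prompt: str) -> str:
    # Stand-in for a real LLM call; latency varies per provider.
    await asyncio.sleep(0.1 * (hash(name) % 3 + 1))
    return f"[{name}] draft answer to: {prompt}"

def coherence_score(draft: str) -> float:
    # Toy heuristic: longer drafts score higher. The real scorer would
    # compare drafts against each other and flag conflicts.
    return len(draft) / 100.0

async def hyper_synthesize(prompt: str, models: list[str]) -> str:
    tasks = [asyncio.create_task(query_model(m, prompt)) for m in models]
    best, best_score = "", float("-inf")
    for finished in asyncio.as_completed(tasks):
        draft = await finished
        score = coherence_score(draft)
        if score > best_score:  # keep and stream the most coherent draft so far
            best, best_score = draft, score
            print(f"streaming update ({score:.2f}): {best}")
    return best

if __name__ == "__main__":
    print(asyncio.run(hyper_synthesize("Compare these two lease clauses",
                                       ["model-a", "model-b", "model-c"])))
```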
Deliverable: Research, Tutor, Money modes live. Hyper-synthesis at target latency. CS B2B API + widget shipped.
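As a hedged illustration of the fast-path API, the snippet below builds the kind of request a tenant might send from a widget or backend. The endpoint URL, payload fields, and auth header are assumptions for this sketch; the real API surface has not been published.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/cs/fast-path"  # placeholder endpoint
TIMEOUT_S = 3.0                                       # matches the <3 s fast-path budget

def ask_support(tenant_id: str, question: str, api_key: str) -> dict:
    payload = json.dumps({
        "tenant_id": tenant_id,   # selects the tenant's knowledge base
        "question": question,
        "channel": "widget",      # API / widget / enterprise tiers share the endpoint
    }).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=TIMEOUT_S) as resp:
        return json.load(resp)

# Example (needs a live endpoint and key):
# print(ask_support("acme-retail", "What is your return policy?", "sk-test"))
```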
Cami-3 — In-Store Customer Service (Ahead on the road)
Cami in physical retail. Kiosk, tablet, voice-first. The AI that knows the store.
- In-store support — "Where's the cereal?" "Do you have size 10?" "Last time I bought X at your other store, anything similar here?" Real-time inventory and location awareness.
- Cami base model R&D — A model (or family) that encodes multi-perspective synthesis, Human+, and MIN as first-class capabilities. Required for sub-1 s edge inference on kiosks.
- Voice-first via Scribe — Customers speak naturally. Cami listens, understands emotion, responds in real time.
- Store integrations — Inventory APIs, POS systems, loyalty programs. Each store gets its own MIN namespace; a toy namespace lookup is sketched after this list.
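Here is a toy sketch of the per-store namespace idea, assuming a simple keyed lookup: a kiosk only answers from its own store's inventory. Store IDs, item fields, and the matching logic are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    aisle: str
    in_stock: bool

# One namespace per store: the kiosk for "downtown-04" never sees
# inventory or memory from other locations.
STORE_NAMESPACES: dict[str, dict[str, Item]] = {
    "downtown-04": {
        "cereal": Item("Oat cereal", "aisle 7", True),
        "running shoe size 10": Item("Trail runner, size 10", "footwear wall", False),
    },
}

def answer_where_is(store_id: str, query: str) -> str:
    inventory = STORE_NAMESPACES.get(store_id, {})
    item = inventory.get(query.lower())
    if item is None:
        return "I couldn't find that here; let me check with a team member."
    if not item.in_stock:
        return f"{item.name} is out of stock at this store right now."
    return f"{item.name} is in {item.aisle}."

if __name__ == "__main__":
    print(answer_where_is("downtown-04", "cereal"))
```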
Deliverable: In-store pilot with retail partner, base model prototype, voice interaction at sub-1 s latency.
Cami-4 — Ready for Bots (On the horizon)
Cami as infrastructure. Other AI agents call Cami for emotional intelligence.
- Bot-to-bot protocol — Cami exposes a machine-readable API that other AI agents can query for emotional context, escalation assessments, and empathy-aware responses; an illustrative request/response shape is sketched after this list.
- Human+ as a service — Standalone emotional intelligence API. Any AI agent can add consciousness-level emotional understanding by calling Human+.
- Agent marketplace — Cami can delegate tasks to specialized agents and receive delegated emotional assessments from others. The beginning of an AI cooperation layer.
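Purely as an illustration of the bot-to-bot shape, the sketch below shows the kind of machine-readable exchange an external agent might have with Human+. The field names, scores, and escalation rule are invented; the protocol spec itself is the Cami-4 deliverable.

```python
def assess_emotional_context(message: str) -> dict:
    # Stand-in for a Human+ API call: another agent sends the end-user's
    # message and gets back emotional context plus an escalation hint.
    frustration = 0.9 if "third time" in message.lower() else 0.2
    return {
        "emotions": {"frustration": frustration, "urgency": 0.6},
        "escalate_to_human": frustration > 0.7,
        "suggested_tone": "apologetic and concrete" if frustration > 0.7 else "friendly",
    }

if __name__ == "__main__":
    print(assess_emotional_context("This is the third time my order has been wrong."))
```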
Deliverable: Human+ API public beta, bot-to-bot protocol spec, 3+ external agent integrations.
| Phase | Focus |
|---|---|
| Cami-1 | Validation script; 10 lawyers + 10 clinicians over 2 weeks; beat MMLU/BBH/HumanEval/EQ-Bench 3. |
| Cami-2 | Research, Tutor, Money modes. Hyper-synthesis, agentic coding, CS B2B platform (API + widget + enterprise). |
| Cami-3 | In-store customer service. Kiosk/tablet + voice-first. Base model R&D for edge inference. |
| Cami-4 | Ready for bots. Bot-to-bot protocol. Human+ as a standalone emotional intelligence API. |
The road continues. Every conversation makes Cami smarter. Every release brings it closer to the AI that understands humans better than any other.