Amplify

Outcomes

Is it working? Proof points show the before-and-after outcomes. Indicators track leading and lagging metrics per pillar.

Proof Points

Builder · Amplified Builders · 2026-03

85.5% of code written with AI assistance in Q1 2026

Commitment: Agentic development is how we build.

Before

Engineers wrote code manually with occasional autocomplete assistance. AI was a novelty, not a workflow.

After

85.5% of all code in Q1 2026 was written with AI assistance via Cursor Enterprise. On the Edge platform, it's closer to 95%. Agents handle multi-file changes, background tasks, and cross-repo work.

AI-assisted code rate: ~10% (before) → 85.5% (after)

The shift happened in stages. The Cursor POC in March 2025 showed immediate productivity gains, and the Enterprise rollout followed. The decision to start Edge as a green-field agentic platform proved the model: with no legacy constraint, 95%+ AI-assisted code is the natural state. The Core/Edge fork strategy let us prove agentic development without simultaneously fighting legacy codebases. Human review quality has increased because reviewers focus on architecture and logic instead of syntax.
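As a rough illustration of how a company-wide figure like the 85.5% rate aggregates across repos, here is a hedged sketch. The data shape (repo name mapped to AI-assisted and total line counts) and the example numbers are hypothetical, not Cursor's actual analytics export format.

```python
# Hedged sketch: aggregating an AI-assisted code rate from per-repo line
# counts. The input shape (repo -> (ai_assisted_lines, total_lines)) is a
# hypothetical illustration, not Cursor's actual analytics schema.

def ai_assisted_rate(line_counts: dict[str, tuple[int, int]]) -> float:
    """Company-wide AI-assisted code rate, weighted by repo size, as a %."""
    ai_lines = sum(ai for ai, _ in line_counts.values())
    total_lines = sum(total for _, total in line_counts.values())
    return 100.0 * ai_lines / total_lines if total_lines else 0.0

# Illustrative numbers only: a legacy Core repo with lower assistance and a
# green-field Edge repo near 95% average out to a company-wide figure.
counts = {
    "core": (40_000, 55_000),
    "edge": (95_000, 100_000),
}
print(f"{ai_assisted_rate(counts):.1f}%")
```

Weighting by total lines rather than averaging per-repo percentages keeps a small green-field repo from skewing the company-wide number.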

Adoption · Equip · 2026-03

Cursor Enterprise: from POC to 27 active users in 6 months

Commitment: Every employee gets a personal AI assistant.

Before

No AI coding tools. Engineers used standard IDEs with basic autocomplete. AI adoption was a conversation, not a practice.

After

27 engineers on Cursor Enterprise with admin tooling, governance, and per-developer analytics. The tool became the default development environment within 3 months of POC launch.

Active Cursor users: 0 (before) → 27 (after)

The POC launched in March 2025 with 5 engineers. The productivity signal was so strong that word spread organically — engineers on other teams asked for licenses before we'd planned the rollout. By September we had moved to Cursor Enterprise for governance and admin tooling. The key lesson: when the tool is genuinely better, adoption drives itself. The remaining challenge is non-engineering roles; the Claude rollout initiative exists to solve that.

Agent · AI Teammates · 2026-03

Kai: from zero to 200+ queries/week as company knowledge assistant

Commitment: Kai is the front door to company knowledge.

Before

Company knowledge lived in people's heads, Slack threads, and scattered Notion pages. New employees took weeks to find answers. The same questions were asked repeatedly.

After

Kai answers ~200 questions per week across Slack, searching documentation, Google Drive, and Salesforce knowledge. ~40% of the company uses Kai regularly. Support and PD departments lead adoption.

Kai queries per week: 0 (before) → ~200 (after)

Kai launched in Q4 2025 as Kaptio's first AI teammate. The name mattered — "Ask Kai" became natural language in the company within weeks. Adoption was uneven: departments with better-documented knowledge moved faster because Kai could give good answers there. The biggest lesson: AI agent adoption follows knowledge quality. Where Kai has gaps, people try once and give up. Filling knowledge gaps is now the top priority for scaling Kai adoption across all departments.

More proof points will be added as commitments progress. Each journey generates outcomes that appear here.

Indicators

Leading indicators (are we adopting?) and lagging indicators (is it working?) per pillar. Baselines before targets. Data before opinions.

1. Equip

Zero untooled roles. Every employee has a personal AI assistant they use daily.

Leading Indicator

% of employees with active AI tool access, weekly active usage rate

Lagging Indicator

Self-reported task completion improvement per department

2. People

Every Kaptio employee demonstrates AI fluency appropriate to their role tier.

Leading Indicator

Tier 1/2/3 distribution across departments, weekly active AI tool usage rate

Lagging Indicator

Self-reported productivity improvement, manager assessment of AI fluency

3. AI Teammates

AI teammates handle defined workflows end-to-end with human oversight.

Leading Indicator

Agent count, deflection rate (questions Kai answers vs human escalations), knowledge gap rate

Lagging Indicator

Trust score (do people act on Kai's answer or verify elsewhere?), hours of human work augmented
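The deflection and knowledge-gap indicators above reduce to simple ratios over query outcomes. The sketch below is a hedged illustration: the outcome labels ("answered", "escalated", "no_answer") are hypothetical stand-ins, not an actual Kai logging schema.

```python
from collections import Counter

def teammate_indicators(outcomes: list[str]) -> dict[str, float]:
    """Leading indicators from a batch of query outcomes (hypothetical labels)."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    if total == 0:
        return {"deflection_rate": 0.0, "knowledge_gap_rate": 0.0}
    return {
        # Deflection rate: questions Kai answered vs escalated to a human.
        "deflection_rate": counts["answered"] / total,
        # Knowledge gap rate: queries where Kai had no good answer.
        "knowledge_gap_rate": counts["no_answer"] / total,
    }

# Illustrative week of ~200 queries (numbers invented for the example).
week = ["answered"] * 160 + ["escalated"] * 25 + ["no_answer"] * 15
print(teammate_indicators(week))
```

Tracking the knowledge-gap rate separately from deflection matters: a query Kai can't answer at all points at missing documentation, while an escalation may just be a question that genuinely needs a human.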

4. Delivery Engine

AI amplifies every stage of the customer lifecycle: win, deliver, grow.

Leading Indicator

AI-assisted proposal rate, consultant AI tool adoption, Saga dashboard coverage, AI features shipped

Lagging Indicator

Deal win rate trend, time-to-first-draft, delivery margin, customer NPS on AI-assisted engagements

5. Amplified Builders

AI-assisted work ships at equal or higher quality than unassisted work, across all builder functions.

Leading Indicator

AI code %, Cursor rules coverage per repo, Gandalf verdict pass rate, pattern library size

Lagging Indicator

Production incident rate trend, time-to-ship, security audit results