Calibrating Endowed Progress with AI

So far, we have treated endowed progress as a static design: every user gets the same endowment (e.g., 2/10 stamps at signup).

That's already powerful. But it's suboptimal. The right endowment amount, the right narrative, the right number of steps — all of this varies with the user's profile.

AI lets us move from static endowed progress to dynamic, personalized, continuously self-tuned endowed progress. This chapter covers three operational use cases.


Use case 1 — Detecting the right endowment amount per segment

The right number of stamps to offer (2/10? 3/10? 4/12?) depends on variables humans struggle to weigh together:

  • Demographic profile (age, geography, arrival platform)
  • History (new customer vs. reactivated returnee)
  • Acquisition channel (referral, paid ads, organic)
  • Expected usage frequency (B2B daily-use vs. B2C occasional)
  • Journey complexity (1 action or 8?)

With a well-trained AI, you can assign each new user a calibrated endowment that maximizes their completion probability.

Operational prompt — Endowment recommendation

You are an expert in product onboarding design. Here are the characteristics
of a new user who has just signed up:

- Platform: [B2B SaaS / B2C app / e-commerce / online course]
- Estimated persona: [Marketing manager, freelancer, student…]
- Acquisition channel: [organic / paid social / referral / direct]
- Country / language: [...]
- Action already performed at signup: [...]
- Total number of steps in the activation journey: [N]
- Final promised reward: [...]

Recommend:
1. Number of steps to present as "given / already done"
   (between 1 and 3 maximum)
2. The narrative text shown to the user to justify this endowment
   (without revealing the mechanic, without deceiving)
3. Optimal ordering of remaining steps (easiest to most engaging,
   not the reverse)
4. Expected completion rate by day 7 (realistic %)
5. Risk signal if the user doesn't pass a given step within 24h

Output format: JSON.

The AI will combine:

  • Patterns observed on similar user bases
  • Psychological principles (self-perception, goal-gradient, dopamine)
  • Knowledge of your product type

What you get back: a bespoke endowment per segment, instead of a single design for everyone.
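To make the idea concrete, the segment-based calibration described above can be sketched as a simple heuristic. All thresholds, feature names, and rules below are illustrative assumptions standing in for a trained model, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    channel: str             # e.g. "referral", "paid", "organic"
    total_steps: int         # length of the activation journey
    expected_frequency: str  # "daily" or "occasional"

def recommend_endowment(user: UserProfile) -> int:
    """Return how many steps to pre-check (1 to 3), using hand-tuned
    heuristics as a stand-in for a model trained on completion data."""
    # Longer journeys warrant a larger head start.
    base = 1 if user.total_steps <= 6 else 2
    # Referred users already arrive with social proof; a smaller
    # endowment is usually enough to get them moving.
    if user.channel == "referral":
        return base
    # Occasional-use consumers need the strongest nudge.
    if user.expected_frequency == "occasional":
        return min(base + 1, 3)
    return base
```

In production, the JSON returned by the prompt above would replace these rules, but keeping a deterministic fallback like this is a sane default when the model is unavailable.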


Use case 2 — Dynamically generating the steps and their narrative

The second use case goes further: generating the content of the steps themselves based on the profile. The more personalized the onboarding feels, the more strongly the self-perception effect kicks in.

Pattern: the bespoke checklist

For the same product, two personas get different checklists:

B2B Marketing Manager:

  • ✓ Account created
  • ✓ Industry: SaaS selected
  • ☐ Connect your lead source (HubSpot, Salesforce…)
  • ☐ Create your first campaign
  • ☐ Invite your marketing team

Solo freelancer:

  • ✓ Account created
  • ✓ Freelance plan selected
  • ☐ Import your first clients
  • ☐ Create your first invoice
  • ☐ Configure your VAT

Operational prompt — Checklist generation

Generate a personalized onboarding checklist for the profile below:

- Product: [product description in 2 lines]
- Persona: [persona details]
- Final reward target: [what the user will "unlock"]
- Constraints: [e.g., must include the billing step at the end]

Output rules:
- 5 to 6 steps total
- The first 1-2 steps must be marked "✓ already done" with a
  credible (and honest) narrative justification
- Each ☐ step must include: a short title (max 6 words),
  an explanatory sentence (max 15 words), a verbal CTA
- Order must follow increasing difficulty but include a "peak"
  of easy victory at step 3 (dopamine boost)
- Output language: [FR or EN]

Format: JSON with fields title, description, cta, isPreChecked, justification.

You get truly personalized onboarding. Not just a first name inserted into a template.
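Because the prompt demands strict JSON with explicit output rules, it is worth validating the model's response before it reaches the UI. A minimal validator for those rules (the field names match the prompt's format; everything else is an assumption):

```python
def validate_checklist(steps: list[dict]) -> list[str]:
    """Check an AI-generated checklist against the prompt's output
    rules. Returns a list of violations; an empty list means valid."""
    errors = []
    if not 5 <= len(steps) <= 6:
        errors.append("checklist must contain 5 to 6 steps")
    pre_checked = [s for s in steps if s.get("isPreChecked")]
    if not 1 <= len(pre_checked) <= 2:
        errors.append("1 to 2 steps must be marked already done")
    if steps[:len(pre_checked)] != pre_checked:
        errors.append("pre-checked steps must come first")
    for s in steps:
        if len(s.get("title", "").split()) > 6:
            errors.append(f"title too long: {s.get('title')!r}")
        if s.get("isPreChecked") and not s.get("justification"):
            errors.append("pre-checked step missing a justification")
    return errors
```

If validation fails, re-prompt with the violation list appended; this retry loop is cheaper than shipping a malformed checklist to a new user.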


Use case 3 — Detecting drop-off users in real time

Endowed progress is no guarantee. Some users stall between step 3 and step 4 despite a well-calibrated initial endowment. The operational question becomes: how do you detect them, and when do you intervene?

That's the territory of AI engagement agents.

Pattern: real-time drop-off scoring

For each user, compute a risk score (0-100) based on:

  • Time elapsed since the last step
  • Comparison with the cohort median
  • Behavioral signals (clicks, scroll depth, pages visited)
  • Contextual signals (date, hour, device)

When the score crosses a threshold (typically > 60), an automated action triggers:

  • Recap email ("You're at 60%, 2 more steps!")
  • Push notification with deep link to the next step
  • Offer a 15-minute call (B2B)
  • Friction reduction on the blocking step
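A sketch of the scoring-plus-threshold logic described above, assuming the weights and feature names are hand-tuned placeholders for a trained model:

```python
from typing import Optional

def dropoff_risk_score(hours_since_last_step: float,
                       cohort_median_hours: float,
                       sessions_last_48h: int) -> int:
    """Toy 0-100 risk score: how far the user lags behind the cohort
    median, dampened by recent activity. Illustrative weights only."""
    # Lag ratio: 1.0 means median pace, 2.0 means twice as slow.
    lag = hours_since_last_step / max(cohort_median_hours, 1.0)
    score = min(lag, 3.0) / 3.0 * 100          # cap at 3x the median
    score -= 10 * min(sessions_last_48h, 3)    # recent activity lowers risk
    return max(0, min(100, round(score)))

RISK_THRESHOLD = 60  # above this, trigger an automated action

def pick_intervention(score: int, is_b2b: bool) -> Optional[str]:
    """Map a risk score to one of the intervention channels above."""
    if score <= RISK_THRESHOLD:
        return None
    return "offer_call" if is_b2b else "recap_email"
```

The real system would recompute this score on every behavioral event, not on a cron schedule, so the intervention lands while the user is still reachable.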

Operational prompt — Drop-off analysis

You are a product engagement analyst. Here is a user's data:

- Signup: [date]
- Steps completed: [list with timestamps]
- Current blocking step: [title + time stuck on it]
- Cohort median time to reach next step: [duration]
- Recent signals: [clicks, sessions, pages]

Expected diagnosis:
1. Estimated probability of abandonment within the next 7 days (%)
2. Most likely hypothesis for the block (technical friction,
   missing perceived value, distraction, other)
3. Recommended intervention action (and optimal channel)
4. Exact wording of the intervention message in under 50 words,
   reactivating the progress narrative and self-perception,
   without guilt-tripping, without manipulation

Format: JSON.

The AI does three things a human cannot at scale:

  • Score each user continuously
  • Diagnose the likely block cause
  • Write the right message at the right time, in the persona's voice

Use case 4 — A/B testing endowments at scale

The final use case: using AI not just through prompts, but to drive multi-armed bandits and large-scale A/B tests. The principle:

  1. You test 5 endowment variants in parallel (1/10, 2/10, 3/10, 2/8, 3/12)
  2. AI observes completions per segment
  3. It reallocates traffic toward the variants that overperform for each segment

Result: within a few weeks, each segment converges on its optimal endowment, without human intervention. A product team that would have spent 6 months manually comparing variants gets, in 2 weeks, a fine-grained strategic map of the effect per segment.
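The reallocation step is classically done with Thompson sampling. A minimal sketch, using the five variants from the list above and hand-rolled Beta posteriors (one global posterior per variant here; a per-segment version would simply key the stats by (segment, variant)):

```python
import random

# The endowment variants tested in parallel.
VARIANTS = ["1/10", "2/10", "3/10", "2/8", "3/12"]

# Completion counts per variant; the Beta(completions+1, failures+1)
# posterior is derived from these on each draw.
stats = {v: {"completions": 0, "exposures": 0} for v in VARIANTS}

def choose_variant() -> str:
    """Thompson sampling: draw a completion-rate sample from each
    variant's posterior and serve the argmax. Traffic drifts toward
    overperforming variants automatically as evidence accumulates."""
    draws = {
        v: random.betavariate(s["completions"] + 1,
                              s["exposures"] - s["completions"] + 1)
        for v, s in stats.items()
    }
    return max(draws, key=draws.get)

def record(variant: str, completed: bool) -> None:
    """Update the variant's posterior after observing the outcome."""
    stats[variant]["exposures"] += 1
    stats[variant]["completions"] += int(completed)
```

With uniform priors, early traffic is spread evenly; once one variant's completion rate pulls ahead, the sampler starves the losers without ever fully freezing exploration.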

Operational prompt — Weekly A/B test summary

Here are the weekly results of an A/B test on endowed progress:

[data table: variant, segment, completion rate, sample size,
confidence interval]

Expected synthesis:
1. Statistically significant variants (p < 0.05) per segment
2. Recommendation: kill, keep, expand, or continue the test
3. Psychological hypothesis explaining the gaps (Bem?
   goal-gradient? Cialdini?)
4. Next test to launch the following week to go further

Format: structured markdown note.

Ethical guardrails

AI applied to endowed progress is powerful. Like any large-scale persuasion technique, it demands guardrails.

Risk → Guardrail

  • False progress promise (the user believes in an advantage that doesn't exist)
    → Always verify the endowment is factually justifiable
  • Discriminatory endowment per segment (a protected group gets less)
    → Audit the algorithmic fairness of the personalization rules
  • Manipulation through fake personalization ("referred by your friend" when untrue)
    → Forbid fictional justifications in the prompts
  • Over-soliciting drop-off users
    → Set a hard cap on weekly interventions
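The last guardrail, the hard cap on weekly interventions, is simple enough to enforce in code. A minimal sketch, where the cap value and in-memory storage are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WEEKLY_CAP = 2  # hard limit on automated interventions per user per week

_log: dict[str, list[datetime]] = defaultdict(list)

def may_intervene(user_id: str, now: datetime) -> bool:
    """Guardrail against over-soliciting at-risk users: allow an
    automated intervention only if fewer than WEEKLY_CAP were sent
    in the trailing 7 days. Records the intervention if allowed."""
    week_ago = now - timedelta(days=7)
    recent = [t for t in _log[user_id] if t > week_ago]
    _log[user_id] = recent  # drop stale entries
    if len(recent) >= WEEKLY_CAP:
        return False
    _log[user_id].append(now)
    return True
```

Putting the cap in the dispatch layer, rather than trusting each campaign to self-limit, is what makes it a hard cap: every channel (email, push, in-app) has to pass through the same gate.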

Deontological rule: AI-assisted endowed progress is legitimate as long as it accelerates a journey the user sincerely wants to complete. It becomes toxic the moment it pushes someone to finish a journey from which they would be better off walking away.


In summary

  • Use case 1: have AI calibrate the endowment per segment → tailored rather than one-size-fits-all.
  • Use case 2: dynamically generate the step content and narrative → truly personalized onboarding.
  • Use case 3: score drop-off-risk users in real time → the right message at the right moment.
  • Use case 4: drive large A/B tests and auto-surface optimal endowments per segment.

In chapter 6 we take the entrepreneur's view: how to design a product around endowed progress from day 1, and which business metrics to prioritize.