AI in the service of the Ben Franklin Effect: identifying the right favor
Industrializing the Ben Franklin Effect without destroying it is a paradox: its power comes from sincerity, its scale comes from automation. Generative AI resolves that paradox — provided it is used to augment human judgment, not replace it. This chapter details the operational workflows.
Why AI is particularly useful for this use case
The success of a Ben Franklin request depends on three fine variables:
- The right angle: what specific expertise of the recipient can be solicited?
- The right wording: tone, brevity, sincerity — varies by profile and culture.
- The right timing: too early (before they know who you are) or too late (after a direct sales approach has already been made) kills the effect.
The problem:
- Manual analysis of LinkedIn profiles, articles, podcasts, posts is time-consuming.
- Personalized writing at scale is the other bottleneck.
- Spotting the right timing requires monitoring weak signals (publications, role changes, funding rounds).
Modern LLMs excel at all three.
Workflow 1 — Identifying a prospect's unique expertise
Objective
For 100 prospects, identify the unique skill or perspective each could contribute to a feedback request, so the solicitation is personalized and flattering to their self-image.
Tested framing prompt
You are a senior B2B sales-intelligence analyst. You will receive
the LinkedIn content (public summary, last 3 articles or posts)
of a prospect. Your mission: identify the unique facet of their
expertise.
STRICT CONSTRAINTS
- No generic flattery. If hesitating between "experienced expert"
and a precise angle, you MUST choose the precise angle.
- The expertise must be NICHE enough that asking their opinion
on it feels like respect, not a standard commercial approach.
- The angle must be ACTIONABLE: it must be possible to ask them
a concrete question on the topic in 90 seconds of reading.
OUTPUT FORMAT
{
"unique_expertise": "<one sentence of 15-25 words>",
"evidence": "<quote or factual reference from the profile>",
"potential_question": "<question of 15-30 words they could
answer in 90 sec>"
}
PROFILE CONTENT:
[…]
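To illustrate how this step can be wired, here is a minimal Python sketch: the framing prompt above is sent through whatever LLM client you use (`call_llm` below is a hypothetical stand-in, not a real library call), and the JSON reply is validated before it enters the fact-sheet base.

```python
import json

REQUIRED_KEYS = {"unique_expertise", "evidence", "potential_question"}

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client (OpenAI, Anthropic, local model, ...)."""
    raise NotImplementedError

def extract_expertise(framing_prompt: str, profile_content: str) -> dict | None:
    """Run the Workflow 1 prompt on one profile and keep only structurally valid sheets."""
    raw = call_llm(framing_prompt + "\n\nPROFILE CONTENT:\n" + profile_content)
    try:
        sheet = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output never goes downstream; it goes back to review
    if set(sheet) != REQUIRED_KEYS:
        return None
    # If the cited evidence is not literally found in the profile, flag the sheet
    # for human review: the personalization may be hallucinated.
    if sheet["evidence"].strip('" ') not in profile_content:
        sheet["needs_human_review"] = True
    return sheet
```

The point of the sketch is the validation, not the call: a sheet that fails these checks is cheaper to discard than a message that rings false.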
Why this prompt works
- "No generic flattery" forces the model down into specificity — that's where the self-image effect plays.
- "Evidence" anchors personalization in verifiable elements, avoiding hallucination and ensuring the sincerity of the final message.
- "Potential question" prepares the raw material of the outreach.
Common mistakes to avoid
- Asking the AI to write the message directly at this stage. Instead: first extract a structured fact sheet, then write the message later.
- Stacking too many criteria in the same prompt: extraction quality drops past 4-5 constraints.
- Using raw outputs without human review. The cost of a false positive (a message that rings false) is far higher than the productivity gain.
Workflow 2 — Writing a request adapted to the profile
Objective
From the extracted fact sheet, generate 3 message variants ranging from formal to casual, allowing the salesperson to choose.
Framing prompt
You are a copywriter specialized in sincere, non-aggressive
prospecting. Write 3 variants of an advice-request message to
a B2B prospect.
NON-NEGOTIABLE INSTRUCTIONS
1. Message must never exceed 60 words.
2. No product pitch. No generic flattering hook.
3. The request must be quantified in time ("90 seconds",
"one sentence", "2 minutes") to make the cost visible and
moderate.
4. Include an "explicit permission to refuse" at the end
(e.g., "If no, no problem, please say so frankly").
5. Personalization is MANDATORY: reference to a precise
profile element (from the provided sheet).
REGISTERS TO PRODUCE
- Variant A: very professional tone, formal, senior-executive register.
- Variant B: direct tone, formal, mid-market B2B startup register.
- Variant C: warm tone, casual, tech / creator register.
INPUT SHEET:
[Unique expertise: ...]
[Evidence: ...]
[Potential question: ...]
[My angle: <what I sell>]
Return the 3 variants in JSON output.
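Chaining the two workflows is mostly string interpolation. A sketch, reusing the hypothetical `call_llm` stub from Workflow 1 and assuming the model keys its reply by register (A, B, C):

```python
import json

def call_llm(prompt: str) -> str:  # same hypothetical stand-in as in Workflow 1
    raise NotImplementedError

def generate_variants(writing_prompt: str, sheet: dict, my_angle: str) -> dict:
    """Fill the INPUT SHEET section with the Workflow 1 fact sheet and request
    the three registers (A: formal, B: direct, C: casual)."""
    filled = (
        writing_prompt
        + "\n\nINPUT SHEET:\n"
        + f"[Unique expertise: {sheet['unique_expertise']}]\n"
        + f"[Evidence: {sheet['evidence']}]\n"
        + f"[Potential question: {sheet['potential_question']}]\n"
        + f"[My angle: {my_angle}]\n"
    )
    variants = json.loads(call_llm(filled))
    # Assumption: the model returns {"A": "...", "B": "...", "C": "..."}.
    missing = {"A", "B", "C"} - set(variants)
    if missing:
        raise ValueError(f"variants missing for registers: {missing}")
    return variants
```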
Guardrails
- Filter the output to verify that no variant contains "we are leaders in..." or "I'd like to present...": any promotional mention reintroduces the commercial stance and defuses the dissonance (a lint sketch follows this list).
- Never send a variant without re-reading it. The LLM can invent an erroneous attribution of an article. Verify the cited evidence.
- Cap sends at 30 per day. The Ben Franklin Effect is not a mass-automation technique: past a certain volume, you will hit two recipients who know each other, and the illusion of sincere attention collapses.
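A minimal sketch of the output filter mentioned in the first guardrail, assuming the banned-phrase list and the heuristics simply mirror the non-negotiable instructions above (word cap, quantified time cost, explicit permission to refuse):

```python
BANNED_PHRASES = ["we are leaders in", "i'd like to present"]
TIME_MARKERS = ["90 seconds", "one sentence", "2 minutes"]

def lint_variant(text: str) -> list[str]:
    """Return the list of rule violations; an empty list means the variant can
    move on to human re-reading (never straight to sending)."""
    issues = []
    lowered = text.lower()
    if len(text.split()) > 60:
        issues.append("exceeds 60 words")
    if any(phrase in lowered for phrase in BANNED_PHRASES):
        issues.append("promotional wording")
    if not any(marker in lowered for marker in TIME_MARKERS):
        issues.append("missing quantified time cost")
    # Heuristic assumption: the permission-to-refuse clause contains one of these cues.
    if "no problem" not in lowered and "frankly" not in lowered:
        issues.append("missing explicit permission to refuse")
    return issues
```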
Workflow 3 — Detecting favorable moments
Objective
Spot the weak signals indicating that a prospect is in a window favorable to receiving an advice request: new publication, role change, funding round, recent interview, public talk.
Conceptual architecture
Monitored sources
│
┌──────────────┼──────────────┐
▼ ▼ ▼
LinkedIn API Google Alerts Podcasts/Articles
│ │ │
└──────────────┼──────────────┘
▼
LLM classification pipeline
("is this a favorable event?")
│
▼
Prioritized queue
(by profile + opportunity score)
│
▼
Sheet generation + 3 variants
│
▼
Human validation + send
Event-classification prompt
You receive a recent professional event about a prospect
(publication, role change, public talk).
Classify it by these criteria:
- TYPE: { "publication", "promotion", "funding", "interview", "product_announcement", "not_relevant" }
- BENFRANKLIN_ANGLE: Is there a plausible angle to solicit their
opinion within 7-14 days afterward? (yes/no)
- PROPOSED_ANGLE: If yes, formulate the angle in 15-25 words.
- OPTIMAL_DELAY: How many days after the event should the
request be sent? (integer between 2 and 21)
Reply in strict JSON.
EVENT:
[…]
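To make the architecture above concrete, here is a sketch of the classification-and-queue step. The event record, the `call_llm` stub and the priority rule (earliest send date first) are illustrative assumptions, not a prescribed schema:

```python
import json
import heapq
from dataclasses import dataclass, field
from datetime import date, timedelta

def call_llm(prompt: str) -> str:  # hypothetical stand-in for your LLM client
    raise NotImplementedError

@dataclass(order=True)
class QueuedOutreach:
    send_date: date                          # earliest send date = highest priority
    prospect_id: str = field(compare=False)
    angle: str = field(compare=False)

def classify_and_enqueue(classification_prompt: str, prospect_id: str,
                         event_text: str, event_date: date,
                         queue: list[QueuedOutreach]) -> None:
    """Run the event-classification prompt; if a Ben Franklin angle exists,
    schedule the outreach at the delay suggested by the model."""
    result = json.loads(call_llm(classification_prompt + "\n\nEVENT:\n" + event_text))
    if result["TYPE"] == "not_relevant" or result["BENFRANKLIN_ANGLE"] != "yes":
        return
    delay = max(2, min(21, int(result["OPTIMAL_DELAY"])))  # clamp to the prompt's bounds
    heapq.heappush(queue, QueuedOutreach(
        send_date=event_date + timedelta(days=delay),
        prospect_id=prospect_id,
        angle=result["PROPOSED_ANGLE"],
    ))
```

Popping the queue then feeds the sheet and variant generation of Workflows 1 and 2, with human validation before any send, as in the diagram.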
Empirically observed optimal delays (from field feedback)
| Event type | Favorable delay for soliciting opinion |
|---|---|
| Article published | 2-5 days (still fresh, still engaged) |
| Promotion / role change | 7-14 days (once the initial whirlwind has passed) |
| Funding round | 3-7 days (visibility peak, energy available) |
| Podcast interview | 2-4 days (resonance still strong) |
| Product announcement | 5-10 days (after media peak) |
Sent too early (D+1 after a promotion), the message reads as opportunistic. Sent too late (D+30), it has lost its contextual relevance.
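If you want to sanity-check the model's OPTIMAL_DELAY against these field-observed windows, a small mapping copied from the table can sit between classification and scheduling (the clamping rule itself is an assumption):

```python
# Favorable windows in days (min, max), copied from the table above.
DELAY_WINDOWS = {
    "publication": (2, 5),
    "promotion": (7, 14),
    "funding": (3, 7),
    "interview": (2, 4),
    "product_announcement": (5, 10),
}

def clamp_delay(event_type: str, model_delay: int) -> int:
    """Keep the model's suggested delay inside the empirically favorable window."""
    low, high = DELAY_WINDOWS.get(event_type, (2, 21))
    return max(low, min(high, model_delay))
```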
Workflow 4 — Measuring and looping
Metrics to instrument
| Metric | Definition | Healthy target |
|---|---|---|
| Response rate to request | Replies / messages sent | 18-30 % |
| Substantive response rate | Replies > 20 words / messages sent | 10-18 % |
| 30-day conversion rate | Meetings within 30d / replies | 25-40 % |
| Reporting rate | LinkedIn reports / 1000 sends | < 1 |
| Salesperson NPS (quarterly survey) | 1-10 scale | > 7.5 |
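All of these are simple ratios over the send log. A sketch of the computation, assuming one record per send with the fields shown in the docstring:

```python
def compute_metrics(log: list[dict]) -> dict:
    """Each record is assumed to look like:
    {"replied": bool, "reply_words": int, "meeting_within_30d": bool, "reported": bool}."""
    sent = len(log)
    if sent == 0:
        return {}
    replies = [r for r in log if r["replied"]]
    substantive = [r for r in replies if r["reply_words"] > 20]
    meetings = [r for r in replies if r["meeting_within_30d"]]
    return {
        "response_rate": len(replies) / sent,
        "substantive_response_rate": len(substantive) / sent,
        "conversion_rate_30d": len(meetings) / len(replies) if replies else 0.0,
        "reports_per_1000_sends": 1000 * sum(r["reported"] for r in log) / sent,
    }
```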
LLM-assisted dashboard
A recurring workflow consists of asking an LLM, every month, to analyze received replies and group negative feedback into themes:
You analyze 50 replies received to B2B advice requests.
Group negative replies (refusals, complaints, annoyance)
into 3-5 main themes. For each, propose a modification
of the generation prompt.
REPLIES:
[…]
This closed improvement loop is what turns a one-off campaign into a durable routine. Salespeople who do not instrument their AI outputs end up degrading their Ben Franklin response rates without noticing.
The special case of intelligent CRMs
New-generation CRMs (HubSpot Breeze, Salesforce Einstein, Pipedrive AI) now embed sequence agents, and the temptation is to let them generate full pseudo-Ben-Franklin sequences ("I'd love 2 minutes of your view on...") sent at scale, without deep personalization.
Empirical consequence: inboxes saturate with generic "I'd love your opinion" messages, and the response rate of the approach degrades fast. The same wording that converted at 28 % in 2023 drops to 12-18 % in 2026, depending on the segment.
Resilience strategy
- Strengthen personalization rather than diluting it (do not stop at "I read your article"; cite a precise sentence, paraphrased).
- Diversify the favor types requested (an opinion today, an intro tomorrow, feedback on a concrete case next time).
- Reduce volume, increase precision: 15 excellent messages beat 150 mediocre ones.
- Document internally the trap phrases (wording that now sounds AI-scripted) and ban them from your house prompt library.
Ethical guardrails of industrialization
The Ben Franklin Effect is a real psychological mechanism. Industrializing it without ethics amounts to weaponizing cognitive biases for unilateral interests, which:
- Constitutes, in certain jurisdictions, deceptive commercial practice (French Consumer Code, art. L121-1).
- Destroys trust in your brand once the manipulation is exposed (and it eventually is).
- Degrades the quality of the commercial ecosystem at large.
Recommended practice framework
| Question | Answer required to remain ethical |
|---|---|
| Would I actually want to hear the answer? | YES |
| Will I seriously process the answer received? | YES |
| Would I be comfortable if the recipient saw my prompt? | YES |
| Would I be comfortable if the press covered my technique? | YES |
| Does my offering stand on its own without this lever? | YES |
If any answer is no, you should not use this technique.
In summary
- AI lets you scale the practice of the Ben Franklin Effect without betraying it, by automating mechanical steps (expertise extraction, variant generation, event classification).
- The standard workflow is: (1) extract unique expertise → (2) generate 3 message variants → (3) human validation → (4) send → (5) measure → (6) improvement loop.
- The right timing is as important as the right message: 2-5 days after a publication, 3-7 days after a funding round, 7-14 days after a promotion.
- Inbox saturation by mass-generated pseudo-Ben-Franklin sequences is sharply degrading historical rates. The countermeasure: less volume, more precision, diversity of favors requested.
- Ethics is not a moral supplement but a sustainability condition: without it, the effect reverses and damages the personal brand.
In the next chapter, we will see how to build a full entrepreneurial network on the favor lever — moving from "the salesperson closing a deal" to "the founder building an ecosystem."