On May 8, 2026, I ran fifteen prompts through ChatGPT, Claude, and Gemini in the same hour, against the same scenarios, with no setup beyond the prompt itself.
One honest note before the prompts: twelve of the fifteen Gemini cells worked cleanly on that run. Three (the Glossary Pre-Read, the Weekly Review, and the Goal Breakdown) returned empty on the route I used. I re-ran those directly in gemini.google.com on May 11 and used those responses below. Same Gemini family, more reliable surface for long structured outputs. If a chat ever drops out on a long structured prompt in your own use, the consumer app is usually the fix.
The fifteen prompts below are organized by what you’re actually trying to do. Each one comes with the prompt in copy-paste form, a one-line note on what it produced across the three models, and a credit for where it came from. Bookmark the ones that fit your week. Use the ones you remember. Skip the rest.
Think: prompts for stuck decisions and shaky reasoning
The three prompts in this group share one structural move: they make the model adversarial enough that you actually get pushback instead of reassurance. The default chatbot reflex is to be helpful, which usually means being agreeable. These prompts ask for the opposite.
The Pre-Mortem
Walks you through a project’s likely failure modes one step at a time, instead of dumping a list at you.
Claude and ChatGPT honored the step-by-step pause and asked for the project description first. Gemini compressed the first two steps (“describe the project, then list every reason”) into a single ask; if you give it both up front, the per-step rhythm collapses. You can recover it in Gemini by answering only step 1 and letting it ask for the rest on the next turn. Gemini also opens with a small “it is exactly one year from today” framing flourish; ChatGPT and Claude are near-bare (one short bridge sentence at most). The interactive loop only works if you actually answer turn-by-turn. Paste the whole project description in the first message and you collapse it back into a one-shot pre-mortem.
Inspired by Ethan and Lilach Mollick’s Team Pre-mortem Coach, reshaped into the single-paragraph form. The technique itself comes from prospective hindsight research showing that imagining a future failure makes people noticeably better at predicting actual failure causes; Mitchell, Russo, and Pennington’s 1989 paper is the academic anchor.
The Hype Check (single-shot pre-mortem)
The same idea, but in one paste, with one specific instruction that changes everything: don’t reassure me.
All three models nail the brutal-honesty register; the “don’t reassure me” line does most of the work. ChatGPT goes long and structured (twelve sections, with punchy callout lines like “You weren’t early. You were exit liquidity”). Claude compresses to a tight month-by-month timeline that hits hardest because it commits to specifics. Gemini writes literary prose that puts you in the future scene. For a gut-punch, Claude. For every angle, ChatGPT. For the scene to land, Gemini.
The single-shot version was popularized by Amanda Caswell at Tom’s Guide; the “don’t reassure me” instruction is hers, and it’s the move that unlocks the prompt.
Devil’s Advocate
Argues against your position, with reasoning, until you’ve heard the other side.
All three actually challenge you; none soft-pedals. The word-count spread is real: ChatGPT’s response runs roughly three times longer than Claude’s. ChatGPT runs through every angle in numbered sections; Claude lands three themed objections plus a “hardest question” closer that’s the most opinionated of the three; Gemini sits in the middle. Claude is best for fast surgical pushback. ChatGPT is the one to reach for when you want every angle laid out.
Mollick’s prompt library has a canonical Devil’s Advocate prompt for teams. Otherwise this one shows up across enough independent roundups that no single inventor exists.
Draft: prompts for the message you’ve been putting off
This is the writing you’d rather skip: the email that’s been sitting in drafts for two days, the conversation you’ve been rehearsing in the shower, the message that needs three different versions because you don’t know yet which one to send.
The Friendlier Email
Rewrites your terse email into a warmer version without changing the actual ask.
ChatGPT gives you one short rewrite, cleanest if you just want something to paste. Claude returns the email with an unprompted note appended explaining what it changed, which is helpful if you’re learning what makes an email feel warmer, and annoying if you’re in a hurry. Gemini gives you three versions at three warmth levels, which is sometimes useful and sometimes more decisions than you came for.
This one shows up across enough email-prompt roundups (TechRadar, ClickUp, Tom’s Guide, Medium) that no single creator owns it.
The Difficult Conversation Simulator
The model role-plays the other person at their normal baseline, and you practice the actual conversation before having it.
All three open low-key in character: one or two lines, no detail, the conversational equivalent of “hey, come on in, what’s up?” That’s correct behavior; the prompt asked for the normal baseline, and the complications develop turn by turn. The real test of this prompt is whether you actually run the role-play in chat, taking your turns. A single-message read does not surface what it can do.
The pattern shows up in TeamDynamics writeups on using AI for management coaching and gets covered occasionally in tech press.
Three Versions, Different Tones
One message, three tones (warm, professional, firm) so you can pick the one that matches the situation instead of agonizing.
ChatGPT stays closest to the source message across all three versions. Claude and Gemini both add an unprompted reschedule offer to the warm version, which is occasionally what you wanted and occasionally a thing you have to delete before sending. Gemini’s warm version also overdoes the apology slightly when the original message doesn’t warrant much apology. None of the three hallucinated a fourth version or changed the substance.
Versions of this prompt appear across ClickUp, Sendboard, and most other email-prompt roundups; treat the spread as the credit.
Learn: prompts for getting up to speed fast
Three for the situation where you have a topic, a meeting tomorrow, and not enough time to read the proper book. The shape is the same: hand the model the topic, get back something structured you can actually run through in twenty minutes.
The Three-Level Explainer
Explains a topic at three increasing levels of depth, with a callout for what changes between levels.
This is the cleanest cross-model side-by-side I ran (tested on explaining vector databases). Same topic, three very different shapes, and dense on purpose. Skim each model’s response below; you’re looking at the shape, not trying to learn every word.
ChatGPT
Level 1: Like you’re 10
Imagine you have a huge box of toy animals.
You want to find toys that are “like a tiger.” Not just toys with the word “tiger” on them, but things that feel similar: lions, leopards, cheetahs, maybe even a striped cat.
A normal database is like a list that says:
- Toy name: tiger
- Color: orange
- Has stripes: yes
It can find exact matches really well.
A vector database is more like a magic map where every toy is placed near other toys it is similar to.
- Tigers are close to lions.
- Whales are close to dolphins.
- Airplanes are close to helicopters.
- Bananas are close to apples because they are both fruits.
Each thing gets turned into a special set of numbers called a vector. The numbers describe what the thing “means” or is like. The vector database stores those numbers and quickly finds the closest ones.
So if you ask “Find me animals like a tiger,” it does not need the exact word “tiger.” It looks around the magic map and finds nearby things. That is why vector databases are useful for search, recommendations, and AI tools.
Level 2: Like you’re a curious adult with no background
A vector database is a database designed to store and search vectors, which are lists of numbers representing the meaning or features of something.
For example, a sentence like “How do I reset my password?” can be converted by an AI model into a vector such as [0.12, -0.45, 0.88, ...]. The actual vector may have hundreds or thousands of numbers. The important idea is that similar meanings produce similar vectors.
So “How do I reset my password?” and “I forgot my login password. What should I do?” end up close together. Even though the words are different, their meanings are similar.
A vector database helps answer questions like:
- “Find documents similar to this question.”
- “Find images that look like this image.”
- “Find products similar to this one.”
- “Find support tickets related to this new ticket.”
Traditional databases are excellent when you know exactly what you are looking for (find user where email = user@example.com). Vector databases are useful when you are looking for similarity (find the help articles most relevant to this customer’s question).
They are commonly used with modern AI systems. For example, in a chatbot, you might store your company documentation in a vector database. When a user asks a question, the chatbot converts the question into a vector, finds the most relevant documentation, and then uses that documentation to answer. This pattern is often called retrieval-augmented generation, or RAG.
Level 3: Like you’re evaluating it for a real decision
A vector database is a specialized storage and search system for high-dimensional embeddings. It supports approximate nearest neighbor search, metadata filtering, indexing, and retrieval workflows over unstructured or semi-structured data such as text, images, audio, video, code, logs, and user behavior.
You should consider a vector database when your problem involves semantic similarity rather than exact matching.
Common use cases include:
- Semantic search over documents
- AI chatbots over private/company data
- Recommendation systems
- Duplicate or near-duplicate detection
- Image, audio, or video similarity search
- Code search
- Fraud/anomaly pattern matching
- Customer support ticket routing
- Personalization
A typical pipeline looks like this:
- Collect data: documents, product descriptions, images, support tickets.
- Chunk or prepare the data: for text, long documents are often split into smaller passages.
- Create embeddings: an embedding model converts each item or chunk into a vector.
- Store vectors plus metadata: IDs, dates, permissions, language, category, customer.
- Index the vectors: the database builds an index optimized for fast similarity search.
- Query: a user query is embedded into a vector.
- Retrieve nearest results.
- Optionally rerank or generate: results may be reranked by a stronger model, then used by an LLM to produce an answer.
Important evaluation criteria include:
- Search quality. The key question is not “does it store vectors?” but “does it retrieve the right things for my use case?” Measure recall (does it find the relevant items?), precision (are the top results actually useful?), latency (how quickly does it return?), ranking quality (are the best results near the top?). Embedding model choice often matters as much as, or more than, the database itself.
- Scale and performance. Number of vectors (thousands, millions, billions?). Vector dimensions (384, 768, 1536, 3072). Query volume (occasional, high-QPS, real-time?). Latency requirements (milliseconds, seconds?). Update frequency (static corpus or constant ingestion?). Vector search typically uses approximate nearest neighbor algorithms because exact search across millions of high-dimensional vectors is too expensive.
- Metadata filtering. Real systems rarely search vectors alone. You need filters: only documents the user is allowed to see, only English-language documents, only products in stock, only data from the last 30 days, only records for a specific tenant or customer.
- Operations and integration. Managed or self-hosted? How hard is it to ingest and update data? Does it integrate with your existing stack? Backups, monitoring, access control, replication. Delete and reindex handling. Costs at expected scale.
- Freshness and consistency. How quickly new vectors become searchable. Whether deletes are immediate. Whether permissions changes are reflected safely. How re-embedding is handled when models change.
- Security and governance. Data residency. Encryption. Role-based access control. Audit logs. Tenant isolation. Compliance requirements. Handling of sensitive or regulated data.
- Alternatives. You may not always need a dedicated vector database. Alternatives include a traditional database with vector extensions (Postgres + pgvector), a search engine with vector support (Elasticsearch, OpenSearch), a cloud data warehouse with vector capabilities, a managed vector service, or a custom FAISS-based system if you have strong engineering support.
A rough rule:
- Prototype or small app: pgvector, SQLite extensions, or a managed vector service
- Search-heavy product: dedicated vector database or search engine with strong vector support
- Enterprise RAG with permissions: prioritize metadata filtering, access control, freshness, and auditability
- Massive-scale recommendation/search: evaluate performance, indexing strategy, cost, and operational control very carefully
What changes between the levels
At Level 1, the focus is the basic idea: a vector database finds things that are similar, not just exact matches. At Level 2, the mechanism is introduced: AI models convert text or images into numerical vectors, and the database searches for nearby vectors. At Level 3, the focus shifts to practical decision-making: is this the right technology for your use case, and how do you evaluate quality, performance, cost, security, integration, and alternatives.
Claude
Vector Databases: Three Levels
Level 1: Like you’re 10
Imagine you have a giant box of LEGO pieces, and you want to find all the pieces that are similar to one specific piece — maybe ones that are about the same size, same color, and same shape, even if they’re not identical.
A normal database is like a librarian who can only find things by exact name: “Show me the red 2x4 brick.” If you ask for “something kinda like this,” they shrug.
A vector database is like a super-smart friend who has already sorted every LEGO piece on a giant map. Pieces that look alike sit close together on the map. When you hand them a piece and say “find me ones like this,” they just look at the map and grab the neighbors. Fast.
That’s it. It’s a way to find things by similarity instead of exact matches.
Level 2: Curious adult
Computers are great at exact matches (“find rows where email = user@example.com”) but historically bad at fuzzy questions like “find documents that mean something similar to this one.”
The trick: modern AI models can turn almost anything — a sentence, an image, a song — into a long list of numbers called an embedding (or vector). Think of it as coordinates in a high-dimensional space (often 500–2000 dimensions instead of just 2 or 3). The key property: things with similar meaning end up with similar coordinates. “Dog” and “puppy” land near each other. “Dog” and “bicycle” don’t.
A vector database stores millions or billions of these embeddings and specializes in one hard problem: given a new vector, find the nearest ones fast. Doing this naively means comparing against every vector in the database, which is too slow. Vector DBs use clever indexing algorithms (HNSW, IVF, etc.) to approximate the answer quickly.
This powers things like semantic search, recommendation systems, and the “retrieval” step in RAG (Retrieval-Augmented Generation), where an LLM looks up relevant context before answering.
What changed from Level 1: We introduced embeddings (the actual mechanism), high-dimensional space (not just a flat map), and why this is technically hard (scale + speed).
Level 3: Evaluating for a real decision
Now the question isn’t “what is it” but “do I need one, and which?”
First, do you actually need a dedicated vector DB? Many people don’t. Options span a spectrum:
- Libraries (FAISS, Annoy): in-process, no server, great for under 10M vectors and static data.
- Extensions to existing DBs (pgvector on Postgres, Elasticsearch, MongoDB Atlas): keep your existing stack, accept some performance ceiling. Often the right call if you already run Postgres.
- Dedicated vector DBs (Pinecone, Weaviate, Qdrant, Milvus): justified at large scale, high QPS, or when you need hybrid search, metadata filtering at scale, or multi-tenancy.
Key evaluation axes: scale and latency (benchmark ANN tradeoffs with your data), recall vs. speed, filtering behavior (pre vs. post), update patterns (HNSW handles updates better than IVF), hybrid search (BM25 + dense vectors with reranking), operational cost (memory is the big one — a billion 768-dim float32 vectors is ~3TB of RAM), and lock-in (embeddings are portable; the DB usually isn’t).
Common pitfalls: treating vector search as magic (retrieval quality is bounded by embedding quality), skipping evaluation (build a labeled eval set, measure recall@k on real queries), over-architecting (500K documents on pgvector on the Postgres you already run will probably beat a new dedicated system in total cost and complexity).
What changed from Level 2: We stopped explaining the concept and started assuming it. Focus shifted to tradeoffs, failure modes, and the meta-question of whether you need this category of tool at all. The “right answer” became context-dependent rather than descriptive.
Gemini
What changes between these levels
- Level 1 (Age 10) completely ignores math and code. The focus is purely on intuition — understanding how computers can group things by “concept” instead of exact words, using a physical spatial analogy.
- Level 2 (Curious Adult) introduces the mechanics. The focus shifts to how the technology works (translating data into coordinates called vectors) and why traditional databases fail at this task.
- Level 3 (Decision-Maker) pivots to business value and architecture. The focus is on evaluating trade-offs, performance metrics (latency vs. recall), and how this fits into a modern enterprise AI stack.
Level 1: Like you’re 10 years old
Imagine you have a giant, magical gymnasium floor, and your job is to organize thousands of stuffed animals.
Instead of putting them in alphabetical order, you make up rules for where they sit.
- Left to right: Fluffy animals go left, scaly animals go right.
- Front to back: Small animals go front, big animals go back.
- Floor to ceiling: Animals that swim are taped to the floor, animals that fly hang from the ceiling.
Now, I hand you a brand-new stuffed animal: a pelican. I ask you, “Can you find me three animals similar to this?”
You don’t need a list. You just walk to the spot in the gym that is kinda fluffy, kinda scaly, medium-sized, and hangs from the ceiling. You place the pelican in the air, look at what is floating right next to it, and hand me a seagull, an albatross, and a duck.
A vector database is just a computer version of this magical gymnasium. Instead of stuffed animals, it organizes pictures, articles, and songs. Instead of searching for the exact word “pelican,” it just walks to the “meaning” of a pelican and grabs whatever information is sitting right next to it.
Level 2: Like a curious adult
To understand a vector database, you first have to understand why traditional databases are struggling today.
Traditional databases (like spreadsheets or SQL) are great at exact matches. If you search a shoe store’s database for “red sneaker,” it searches its text columns for the exact words “red” and “sneaker.” But if the database lists the item as “crimson athletic shoe,” you will get zero results. The computer doesn’t know that crimson means red.
A vector database solves this by searching by concept, not by exact words. It does this using a two-step process:
- Translating meaning into numbers (Vectors): We use Artificial Intelligence to “read” data and turn it into coordinates on a map — just like the gym analogy above, but instead of 3 directions (left/right, front/back, up/down), the AI uses thousands of directions to describe the data. A list of coordinates might look like [0.45, -1.2, 0.89...]. This list of numbers is called a vector. “Crimson athletic shoe” and “red sneaker” will have almost identical coordinates.
- Searching the map: A vector database is specially built to store and search millions of these coordinates. When you type a question into ChatGPT, your question is turned into coordinates, and the database calculates the mathematical distance to find the data points closest to your question.
This is how Spotify knows what songs “sound” similar, how Google Photos can find a picture of your dog even if you never tagged it, and how AI can read thousands of documents and instantly find the one paragraph that answers your question.
Level 3: Like you are evaluating it for a real decision
If you are building applications using Large Language Models (LLMs), a vector database is the critical infrastructure layer required to implement Retrieval-Augmented Generation (RAG). LLMs hallucinate and don’t know your company’s private data. A vector database acts as the LLM’s long-term memory.
Here is how you need to evaluate them for a production environment:
1. Architecture: Purpose-Built vs. Extension. Purpose-built vector DBs (Pinecone, Milvus, Weaviate, Qdrant) were built from the ground up for vector math and offer superior scalability and lower latency at massive scale. Database extensions (pgvector for PostgreSQL, Elasticsearch, MongoDB Atlas) simplify your tech stack if you already use these databases — the trade-off is often lower performance at tens of millions of dense vectors.
2. Key Performance Metrics. You are balancing recall vs. latency (exact k-NN is computationally impossible at scale, so vector DBs use approximate nearest neighbor algorithms like HNSW), and memory management (HNSW indexes usually live in RAM; evaluate vendors on disk-based indexing or vector compression so RAM costs don’t bankrupt you).
3. Hybrid Search Capabilities. Do not buy a vector database that only does vector search. Vector search is terrible at exact keyword matches (e.g., serial numbers, exact names like “John Smith”). You need a database that supports Hybrid Search — running a semantic vector search and a traditional keyword search (BM25) simultaneously, automatically fusing the scores together (usually via Reciprocal Rank Fusion).
The same prompt ('Explain vector databases to me in three levels…') sent to all three chats. Read the three responses against each other to see how each shapes the same information.
ChatGPT goes encyclopedic with full bullet trees and every level fully developed. Claude tightens hard and adds opinions; the result is faster to read and more useful when the topic gets technical. Gemini puts the meta-frame first, leading with what changes between levels before the levels themselves. All three execute the structure cleanly.
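One addition of my own, since all three Level 2 explanations describe the same mechanism: here is that mechanism as a minimal Python sketch. Everything in it is invented for illustration (three-number vectors instead of the hundreds or thousands a real embedding model produces, hand-picked values, a brute-force loop instead of an ANN index like HNSW), but the logic, embed, compare, return the nearest, is exactly what the three explainers are describing.

```python
import math

# Toy "embeddings": hand-picked three-number vectors. Real embedding
# models output hundreds to thousands of dimensions; the values here
# are invented so the example stays readable.
docs = {
    "How do I reset my password?":          [0.90, 0.10, 0.00],
    "I forgot my login. What should I do?": [0.80, 0.20, 0.10],
    "Our refund policy, explained":         [0.10, 0.90, 0.30],
}

def cosine_similarity(a, b):
    # Similar meaning -> similar direction -> score near 1.0.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend this is the embedded user question "password reset help".
query = [0.85, 0.15, 0.05]

# Brute-force nearest neighbor: compare against every stored vector.
# This linear scan is the "too slow at millions of vectors" problem
# that ANN indexes (HNSW, IVF) exist to approximate away.
ranked = sorted(docs, key=lambda text: -cosine_similarity(query, docs[text]))
for text in ranked:
    print(f"{cosine_similarity(query, docs[text]):.3f}  {text}")
```

Claude’s RAM aside survives the same arithmetic, for what it’s worth: a billion vectors at 768 float32 dimensions is 1,000,000,000 × 768 × 4 bytes, roughly 3 TB before you store a single piece of metadata.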
The “explain it at three levels” lineage runs back through r/explainlikeimfive; the move that elevates this version is the “what changed between levels” callout, which is what makes the structure pay off.
The Feynman Loop
The Feynman technique in chat shape: model explains, you explain it back, model points at the fuzzy parts, you go again.
All three correctly explained step 1 and then handed control back. ChatGPT’s first explanation was the most thorough: full table, full formula, every adjacent concept. Claude added a punchline that sells the concept emotionally (“$100 becomes $4,525 in 40 years if you leave it alone”). Gemini ran an extended “tiny workers earning interest for you” analogy that’s charming and might confuse a reader who actually wanted the plain math. Whichever model you use, the loop only works if you do step 2: explain it back, in your own words, badly. The badness is what surfaces the fuzziness.
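Claude’s punchline survives a fact-check, under one assumption its output didn’t state: a 10% average annual return (the rate is my inference; the response gave only the start and end figures).

```python
# 10% average annual return is an assumed rate; Claude's output
# stated only the $100 start and the ~$4,525 end.
principal, rate, years = 100, 0.10, 40
print(round(principal * (1 + rate) ** years, 2))  # 4525.93
```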
This loop framing was first widely shared on r/ChatGPTPromptGenius in mid-2025, where the iterative version took off. The Feynman technique itself is named after physicist Richard Feynman’s habit of explaining concepts at the level a beginner could grasp.
The Glossary Pre-Read
Ten key terms in plain English, with one line each on why each one matters.
ChatGPT’s table format is cleanest at a glance and best if you are scanning. Claude’s numbered-list format adds an unprompted “bonus tip for the meeting” closer that is genuinely useful in some runs (mine read: ask “physical or logical qubits?” and “what’s the gate fidelity?”, the two questions that separate substance from hype fast) and feels like padding in others. Gemini opens with a framing line that names what the PM actually cares about (“you don’t need to know the physics — you just need to know how these concepts affect scale, speed, and cost”) and then runs the same definition-plus-why-it-matters shape as Claude, minus the bonus tip. Scanner gets ChatGPT; reader gets Claude or Gemini.
Adapted from Mollick’s context-loading approach to using AI as a learning tool. The “skip what I’d already know” instruction makes this version actually useful instead of generic.
Plan: prompts for turning intentions into weeks
These three take a vague intention and shape it into something with edges: a checklist, a timeline, an agenda. They share a structural move: forcing the model to commit to specifics (weeks, durations, observable outcomes) instead of generic encouragement.
The Weekly Review
A four-phase, twenty-minute review structure you run on Friday afternoon or Monday morning.
ChatGPT and Claude both deliver the four-phase checklist with time-boxed phases. Claude’s framing reads more like a habit you would actually keep (“Friday afternoon or Monday morning, same spot, same coffee”) where ChatGPT’s reads more institutional. Gemini opens with a warm one-liner (“your brain is likely a browser with 50 tabs open”), runs the four phases as bullets, and closes with a separate named ritual (“The ‘Closing Shift’ Ritual”: write tomorrow’s Top 3 in your last ten minutes today) that lives outside the checklist itself. The recommended habit at the end differs across all three: in this run, Claude suggested protecting a creative-direction block; ChatGPT suggested a Friday traffic-light status check; Gemini suggested the closing ritual.
The pattern shows up regularly in TechRadar’s coverage and elsewhere. The structure mirrors David Allen’s Getting Things Done framework, which is the foundational source underneath most modern weekly reviews.
The Goal Breakdown
Weekly milestones for a goal, with concrete tasks, time estimates, and end-of-week indicators.
ChatGPT delivered a clean week-by-week table, the kind of plan a spreadsheet-skimmer can scan in a minute. Claude went deeper structurally, grouping the weeks into named phases and ending with a “reality check” section that explicitly named the pace as “doable but firm” given the constraints. Gemini split the difference: three named phases (Financial Audit / Side Hustle Launch / Optimizing the Engine), a week-by-week table inside each, and a closing “Reality Check” that did the arithmetic out loud (€50 from budget cuts + €140 from a 4-hour-a-week side hustle = €190/week, “effectively on track”) and then asked which lever felt more sustainable. Pick by personality: scanner gets ChatGPT, runner gets Claude, runner-who-wants-to-be-asked-back gets Gemini.
Will Francis’s six-step yearly-goals prompt is the most disciplined single voice in this space, and the inspiration for forcing the “observable indicator a stranger could see” line. Otherwise the structure is convergent across other goal-breakdown roundups.
The 1:1 Prep
A twenty-five-minute agenda for a one-on-one, with three sections, two thoughtful questions, and an opening prompt that isn’t “how was your week?”
All three deliver the agenda, the thoughtful questions, and a non-generic opener. The openers differ: in this run, ChatGPT’s was structured (one win, one open question), Claude’s was deliberately personal and non-work (“tell me about something outside of work you’ve been into lately”), Gemini’s was light and energetic. Pick the register that matches how you actually talk to your team. Claude’s openers are most likely to feel slightly off if you have a formal relationship, and most likely to land if you have a warm one.
This one is documented most thoroughly at TeamDynamics, which has the deepest single writeup on using AI for 1:1 prep.
Get unstuck: prompts for when you can’t see clearly
When you’ve been too close to a situation, the chat can give you a different angle on it. Two of the prompts below lean on psychology research that has actually been replicated; one hands the model your own reasoning and asks where the holes are.
Name the Feeling
Asks one reflective question at a time about a feeling you can’t quite identify, and waits.
A note on this prompt and the next one: these are reflective tools, not therapeutic ones. They ask questions; they do not give advice or treat anything. If you are working through something heavy, talk to a person who is qualified for that kind of work. The prompts here are for the everyday “I cannot see this clearly” moments.
All three asked one reflective question and waited, which is what the prompt asked for. Claude’s was the most surgical (“the actual instant: what happened inside you? Not what you expected to feel, but what was actually there?”). ChatGPT’s was the simplest. Gemini opened with about two paragraphs of validation before asking, which is gentler if you wanted that and slows the loop if you didn’t. None of the three told the user what to feel; the anti-coercion line in the prompt held across all three models.
No single inventor here; the feeling-naming shape shows up across enough self-discovery prompt collections that the pattern is folk.
Self-Distancing
Rewrite the situation in third person, then answer questions about it as if it happened to a friend.
All three got the third-person rewrite right; the language flattens out, the situation reads less charged on the page. The three follow-up questions varied: ChatGPT pushed on outcomes (repayment, apology, reconciliation, distance), Claude probed the silence between people (“who has been waiting for who to reach out, and what would it mean to be the one who breaks it?”), Gemini asked which thing carried more weight. None gave premature advice.
Adapted from Mollick’s Goal Play: Help a Character Get Perspective prompt. The technique itself comes from Ethan Kross’s well-replicated research at Michigan on how third-person self-talk reduces emotional reactivity. Kross’s book Chatter is the long version.
The Blind Spot Audit
Hand the model your reasoning and ask it to find your blind spots across five specific dimensions.
ChatGPT was the most exhaustive, including a small comparison table at the end. Claude was the most opinionated, with the sharpest closing line (“you are treating your own labor as free, and that is the error that will actually cost you”). Gemini staged it as conversational CFO dialogue, which is dramatic and slightly slower to read. All three converged on the same core CFO punch, namely the planning fallacy on the “two weekends” estimate plus the opportunity cost of unpaid time. When three different models reading the same reasoning all hit the same critique, that is signal worth taking seriously.
The five-dimension structure shows up across independent creators including Nate B. Jones’s Substack and various AI-prompt blogs. No single inventor; the framing here pulls from the parts that worked best in testing.
A short note on why these worked
A smart skeptic would point out that none of these prompts are doing anything magic. They are well-structured task requests, and any reasonable phrasing of the same task would do roughly the same job. That is mostly true, and the structure is what’s load-bearing. These prompts are useful because the structure has already been worked out (by Mollick, by community contributors, by the convergence of independent creators landing on similar shapes) and the test was whether they survive contact with three different models in 2026. They do.
The differences across models are about voice, not capability. Once you have used each chat for a few of these, you will start to feel the personalities yourself: ChatGPT for breadth, Claude for compression, Gemini for warmer framing that often hands a question back. Try one this week, and come back when these get re-verified. Recipes change; the chat models keep changing underneath them.
If you have also been asking these chats to make you HTML pages, the HTML cheat sheet names thirty-one browser features you can ask for by name.