
The Interview Method: How AskUserQuestionTool Changed My Claude Code Workflow


TL;DR

  • My favorite way to build large features with Claude Code: spec-driven development
  • Start with a minimal spec, then let Claude interview you using the AskUserQuestionTool
  • For big features, Claude asks 40+ questions—you end up with a detailed spec you actually own
  • Execute the spec in a new session with fresh context

The Problem

Traditional prompting is backwards. You write a prompt, Claude guesses your intent, you fix misunderstandings, repeat. You’re doing the heavy lifting of figuring out what context Claude needs.

There’s a better way.

The Solution: Let Claude Interview You

Flip the script. Instead of you prompting Claude, let Claude interview you.

The AskUserQuestionTool is a built-in Claude Code feature that presents structured multiple-choice questions. Claude asks, you click. Fast. Focused. No typing essays.

Here’s how I use it.

The Workflow

Step 1: Start with a Minimal Spec

Create a SPEC.md file with your rough idea. It can be messy. Five bullet points. A paragraph. Whatever you have.

# Feature: User Dashboard

- Show usage metrics
- Export to CSV
- Filter by date range
- Maybe charts?

That’s it. Don’t overthink it.
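If you like scripting your setup, the same minimal spec can be scaffolded from the shell — a plain heredoc, nothing more (the file name SPEC.md is just the convention this post uses):

```shell
# Create the minimal spec file that kicks off the interview.
cat > SPEC.md <<'EOF'
# Feature: User Dashboard

- Show usage metrics
- Export to CSV
- Filter by date range
- Maybe charts?
EOF

# Quick sanity check: the file exists and has content.
wc -l SPEC.md
```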

Step 2: Run the Interview Prompt

Open Claude Code and use this prompt:

read this @SPEC.md and interview me in detail using the
AskUserQuestionTool about literally anything: technical
implementation, UI & UX, concerns, tradeoffs, etc. but
make sure the questions are not obvious

be very in-depth and continue interviewing me continually
until it's complete, then write the spec to the file

The key parts:

  • “interview me in detail” — Claude will go deep
  • “using the AskUserQuestionTool” — structured Q&A, not freeform chat
  • “not obvious” — skip the basic questions, dig into the hard stuff
  • “continually until complete” — don’t stop after 5 questions

Step 3: Answer 20-40+ Questions

Claude will start asking. A lot. For a big feature, expect 40+ questions across:

  • Technical implementation details
  • UI/UX decisions
  • Edge cases you hadn’t considered
  • Tradeoffs and priorities
  • Integration concerns
  • Error handling
  • Performance considerations

Each question comes with multiple choice options. Click the one that fits. Add context if needed.

Step 4: Get Your Detailed Spec

When Claude is satisfied, it writes the final spec to your file. You now have a comprehensive document that captures every decision you made.

Step 5: Execute in a Fresh Session

Start a new Claude Code session. The old interview context is gone—that’s intentional. Your spec is now the single source of truth.

implement @SPEC.md

Clean context. Clear instructions. No accumulated confusion.
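From the shell, kicking off the execution session might look like this sketch. The claude CLI can take an initial prompt as an argument, but the actual invocation is left commented out here as an illustration — the interactive TUI works just as well:

```shell
# Fresh session: nothing carried over except the spec itself.
PROMPT='implement @SPEC.md'
echo "Starting fresh session with: $PROMPT"

# Hypothetical launch (run inside your project directory):
# claude "$PROMPT"
```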

Real Example Questions

Here’s what the AskUserQuestionTool looks like in practice. These are the kinds of questions Claude asked me during a recent dashboard feature:

Technical Implementation

How should the metrics data be fetched?

  • Real-time WebSocket updates
  • Polling every 30 seconds
  • On-demand when user refreshes
  • Other

UX Decision

What should happen when export takes longer than 5 seconds?

  • Show progress bar with percentage
  • Show spinner with “Generating…” text
  • Allow background export with notification when ready
  • Other

Architecture Tradeoff

Where should date filtering logic live?

  • Client-side (faster UX, more data transfer)
  • Server-side (less data, slower response)
  • Hybrid (server filters, client refines)
  • Other

Edge Case

How should the dashboard handle users with no data yet?

  • Empty state with onboarding prompt
  • Sample data preview
  • Redirect to setup wizard
  • Other

Priority Question

For the MVP, prioritize:

  • Feature completeness (all metrics, all exports)
  • Polish (fewer features, better UX)
  • Speed to ship (minimum viable, iterate fast)
  • Other

These aren’t yes/no questions. They force decisions.
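For a feel of the format, here is how one of these questions reads as plain numbered choices. This is an illustrative sketch of the structure only — not the tool's actual UI or schema:

```shell
# Render a structured multiple-choice question (layout is illustrative).
question="Where should date filtering logic live?"
options="Client-side (faster UX, more data transfer)
Server-side (less data, slower response)
Hybrid (server filters, client refines)
Other"

echo "$question"
i=1
echo "$options" | while IFS= read -r opt; do
  printf '  %d. %s\n' "$i" "$opt"
  i=$((i + 1))
done
```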

Why This Works

You think through edge cases. The questions surface scenarios you’d otherwise discover mid-implementation. Or worse, in production.

Decisions get documented. Every choice is captured. Six months later, you know why you chose WebSocket over polling.

You feel ownership. You made every decision. The spec isn’t Claude’s guess—it’s your product vision, extracted and organized.

Clean context separation. The interview session accumulates context. The execution session starts fresh with just the spec. No confusion between exploration and implementation.

It’s faster than you think. Clicking through 40 multiple-choice questions takes 10-15 minutes. Writing a spec that detailed from scratch? Hours.

The Data Behind It

This isn’t just my opinion. The research backs it up.

MIT found that prompts matter as much as models. A study from MIT Sloan showed that 50% of AI performance improvement came from users improving their prompts—not from upgrading to better models. How you communicate matters.

Structured approaches reduce security vulnerabilities. Research comparing “vibe coding” (ad-hoc prompting) to spec-driven development found 85% fewer security vulnerabilities and 300% better maintainability with the spec-first approach.

Unstructured sessions waste tokens. A PDCA framework study found that in unstructured coding sessions, 80% of tokens were spent after the AI had already declared the task complete, on troubleshooting and fixing misunderstandings. Planning upfront eliminates this waste.

Context degrades over long sessions. LLM generation becomes less reliable as sessions lengthen. Splitting planning and execution into separate sessions—like this workflow does—prevents context window degradation.

You’re Not Alone

Mo Bitar calls this “The Interrogation Method.” His approach: ask Claude one question per response, record the Q&A in a transcript file, continue until complete. A 45-minute notification system interview generated ~1,000 lines of transcript. The spec produced functional code on the first attempt.

His summary: “The whole game is specification. You live and die by it.”

Others call variations of this technique “Reverse Prompting” or “Socratic Prompting.” The core insight is the same: let the AI interview you instead of the other way around. It surfaces requirements you’d otherwise miss.

Try It Yourself

Next time you’re building something non-trivial:

  1. Create a minimal SPEC.md
  2. Run the interview prompt
  3. Answer everything Claude asks
  4. Execute the spec in a new session

You’ll never go back to traditional prompting for large features.


The AskUserQuestionTool is available in Claude Code. If you’re not using it yet, you’re leaving the best part of the workflow on the table.

Yannik Zuehlke

Consultant, Architect & Developer

Software architect and cloud engineer with 15+ years of experience. I write about what works in practice.