Rich Tabor

Design. Engineering. Product.

I built an agent that interviews WordPress users

I built an AI voice agent that interviews people about WordPress. Three minutes, their honest take, with structured insights delivered on the other side.

The way I see it, conversations are variables—context, intent, memory, tone. Set them right and agents handle structured research while you focus on judgment calls.

Humans are irreplaceable for empathy, judgment, shared experience. But for conversations that are structured and repeatable? Agents are in.

How it works

You land on the page, optionally drop your email, and start talking. It’s my voice—I’ve been experimenting a lot with voice and created my own studio-quality voice clone with ElevenLabs.

The prompt is designed with behavioral research principles—specifically, asking about past behavior instead of future intentions or abstract opinions.

When someone says what they “would do” or “think is important,” they rationalize. When they describe what actually happened last Tuesday, they reveal truth. This is straight from The Mom Test: talk about specifics, not hypotheticals.

The conversation flow is structured but adaptive. It opens with context (“tell me about yourself”), moves to the core story question (“think about the last time you were working on your site—walk me through what happened”), then probes the most interesting thread.

The agent tailors these questions based on role signals—developers get asked about building, bloggers about publishing, store owners about managing products, etc.
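The post doesn't show how that tailoring works under the hood, but it could be sketched roughly like this—the role names, keywords, and question wording below are all illustrative assumptions, not the agent's actual prompt logic:

```typescript
// Illustrative sketch only — roles, keywords, and questions are assumptions.
type Role = "developer" | "blogger" | "storeOwner" | "unknown";

const coreQuestions: Record<Role, string> = {
  developer:
    "Think about the last thing you built on a WordPress site — walk me through what happened.",
  blogger:
    "Think about the last post you published — walk me through what happened.",
  storeOwner:
    "Think about the last time you managed your products — walk me through what happened.",
  unknown:
    "Think about the last time you were working on your site — walk me through what happened.",
};

// Rough keyword matching on the opening "tell me about yourself" answer.
function detectRole(intro: string): Role {
  const text = intro.toLowerCase();
  if (["develop", "code", "plugin", "theme"].some((k) => text.includes(k))) return "developer";
  if (["blog", "writ", "publish"].some((k) => text.includes(k))) return "blogger";
  if (["store", "shop", "product", "woocommerce"].some((k) => text.includes(k))) return "storeOwner";
  return "unknown";
}

function coreQuestionFor(intro: string): string {
  return coreQuestions[detectRole(intro)];
}
```

The point isn't the keyword list—it's that the same core story question gets reframed in the user's own vocabulary before it's asked.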

When interviewing, neutrality is critical. The agent never leads, never suggests answers, never assumes emotion.

If you mention something was “frustrating,” it asks “how did that make you feel?” instead of “how frustrating was that?” The difference is subtle but it matters: one invites truth, the other validates an assumption.

Why voice?

The best user research feels like talking to someone genuinely curious about your experience—not an interrogation, and certainly not a form.

Even so, voice interfaces have unique constraints. We can only hold about three seconds of audio in working memory, so the agent speaks in short sentences with natural conversational markers. It handles interruptions (as gracefully as AI voice agents can today), uses strategic silence to invite elaboration, and knows when energy drops so it doesn’t overstay.

After the call ends, the real work starts. A post-call event sends the transcript to a webhook, which validates the signature, stores the call data in Upstash Redis, and triggers a Vercel Workflow.
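The signature check is the part worth getting right, since the webhook is a public endpoint. Here's a minimal sketch of HMAC-SHA256 verification over the raw request body—the header name and signature format vary by provider, so treat this as an assumption rather than ElevenLabs' exact scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch: verify an HMAC-SHA256 hex signature over the raw webhook body.
// The signing scheme here is an assumption — check your provider's docs
// for the actual header name and digest format.
export function verifySignature(
  rawBody: string,
  signature: string,
  secret: string
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Using `timingSafeEqual` instead of `===` avoids leaking information through comparison timing—cheap insurance on a public endpoint.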

The post-call workflow

Seven prompts run in parallel—sentiment scoring, user segmentation, pain point extraction with severity ratings, feature requests mapped to underlying needs, tool mentions with sentiment, quotable moments, and a summary combining everything.

Each returns structured JSON; the final step aggregates stats in Redis and surfaces them on a dashboard where I can see metrics, read transcripts, and pull quotes.
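The fan-out itself is straightforward: run every analyzer concurrently against the same transcript and collect the results by name. This is a sketch with `Promise.all`—the `Analyzer` signature and names are assumptions, not the actual Vercel Workflow steps (which have their own step primitives and retry semantics):

```typescript
// Sketch of the parallel analysis step. Each analyzer takes the transcript
// and resolves to its structured JSON result; signatures are assumptions.
type Analyzer = (transcript: string) => Promise<unknown>;

export async function analyzeTranscript(
  transcript: string,
  analyzers: Record<string, Analyzer>
): Promise<Record<string, unknown>> {
  const names = Object.keys(analyzers);
  // All prompts run concurrently; results come back in the same order.
  const results = await Promise.all(names.map((name) => analyzers[name](transcript)));
  return Object.fromEntries(names.map((name, i) => [name, results[i]]));
}
```

Running the seven prompts in parallel instead of sequentially means the whole analysis takes roughly as long as the slowest single prompt.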

Here’s one of the prompts for identifying tools that people may be using with WordPress:

src/workflows/analyze-tools.ts
const prompt = (transcript: string) => `Identify any WordPress tools, editors, or builders mentioned in this conversation.

<transcript>
  ${transcript}
</transcript>

Look for mentions of:
- Gutenberg / Block Editor
- Classic Editor
- Page builders (Elementor, Divi, Beaver Builder, Bricks, etc.)
- Themes
- Specific plugins
- WordPress.com vs self-hosted
- Competitors (Webflow, Squarespace, Wix, etc.)

Respond with JSON only:
{
  "tools": [
    {
      "tool": "<name>",
      "sentiment": "positive" | "negative" | "neutral",
      "reason": "<why they feel this way, if expressed>",
      "isCurrentlyUsing": <boolean>
    }
  ]
}`;</boolean></why></name>

Simple, direct, and it pulls out exactly what I need to see patterns across conversations.
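A "respond with JSON only" instruction still benefits from defensive parsing, since models occasionally wrap output in markdown fences. Here's a sketch of how the response could be validated against the prompt's schema—the field names come from the prompt above, but the parsing approach is my assumption, not the workflow's actual code:

```typescript
// Shape matches the prompt's schema; the parsing strategy is an assumption.
interface ToolMention {
  tool: string;
  sentiment: "positive" | "negative" | "neutral";
  reason: string;
  isCurrentlyUsing: boolean;
}

export function parseToolsResponse(raw: string): ToolMention[] {
  // Strip any markdown code-fence lines the model may have added.
  const fence = "`".repeat(3);
  const cleaned = raw
    .trim()
    .split("\n")
    .filter((line) => !line.trim().startsWith(fence))
    .join("\n");
  const data = JSON.parse(cleaned) as { tools?: ToolMention[] };
  return Array.isArray(data.tools) ? data.tools : [];
}
```

Returning an empty array on a missing `tools` key keeps one malformed response from breaking the aggregation step downstream.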

Why this matters

Traditional user research is time-intensive—you schedule calls, take notes, transcribe recordings, look for patterns. Or you send surveys that people don’t complete.

This runs on autopilot: people talk for three minutes when it’s convenient for them and I get structured insights without touching the transcript.

Adding your email is optional but useful. When someone shares something worth digging into, I want to talk to them. This makes it trivial to find the right people for those conversations instead of hoping you booked the right five people for moderated interviews.

Try it

The WordPress research agent is live at interviews.now. Talk to it. Tell it what you think. Takes a few minutes. I’ll be sharing insights from these conversations periodically on X.

Let me know what you think.