
AI Agents for Market Research: 11 Agents, 4 Waves

12 min read

One prompt is not research

You have probably tried this. You open ChatGPT or Claude, type “analyze the market for [your idea],” and get back 500 words of generic observations that could apply to any industry in any decade.

“The market is growing.” “There are several competitors.” “Customer needs are evolving.” Thanks. That is worth exactly nothing.

The problem is not the AI. The problem is the approach. A single prompt produces a single perspective, constrained by the context window, shaped by whatever the model remembers from training data. It is the AI equivalent of asking one person at a party what they think about your idea.

AI agents for market research work differently. Instead of one prompt, you deploy multiple specialized agents, each focused on a specific research angle, working in parallel waves. The result is not a summary. It is structured intelligence that actually drives decisions.

Why single-prompt research fails

Before getting into how multi-agent research works, let me explain why the obvious approach does not.

Context window limits depth. When you ask a single prompt to cover market size, trends, competitors, customer segments, and pain points, the model has to spread its attention across all of them. Each topic gets shallow treatment. A 2,000-word response covering 5 topics gives you 400 words per topic. That is a paragraph, not analysis.

No specialization. A single prompt treats every research question with the same approach. But market sizing requires different methods than competitive analysis. Customer pain point research requires different sources than trend analysis. A generalist prompt produces generalist output.

No iteration. A single prompt runs once and gives you a result. It cannot say “this finding from the market sizing changes my approach to the competitive analysis.” It cannot adjust its questions based on what it discovers. The research is static, not adaptive.

No cross-validation. When one agent says the market is $2B and another agent’s research implies it should be $500M, that discrepancy is valuable. It means someone’s methodology is wrong, or the market is being defined differently. A single prompt never creates this kind of internal tension because there is only one perspective.

The 4-wave research architecture

The solution is structured waves of specialized agents. Each wave has a specific focus, and each wave builds on the findings of the previous one. Here is how it works.

Wave 1: Market landscape

Wave 1 answers the fundamental question: how big is this opportunity and where is it going?

Agent 1: Market sizing. This agent focuses exclusively on calculating TAM, SAM, and SOM. It triangulates from multiple approaches: top-down (industry reports, analyst estimates), bottom-up (number of potential customers times average revenue per customer), and comparable (what did adjacent markets look like at a similar stage?). Three approaches, one reality check.
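The bottom-up leg of that triangulation is simple arithmetic once you name the assumptions. Here is a minimal sketch; the customer counts, serviceable fraction, obtainable share, and revenue figures are hypothetical placeholders, not numbers from the article.

```python
# Illustrative bottom-up market sizing with hypothetical inputs.
# TAM: every potential customer; SAM: the slice you can serve;
# SOM: the share you could realistically capture early on.

def bottom_up_sizing(total_customers: int,
                     serviceable_fraction: float,
                     obtainable_share: float,
                     avg_revenue_per_customer: float) -> dict:
    tam = total_customers * avg_revenue_per_customer
    sam = tam * serviceable_fraction
    som = sam * obtainable_share
    return {"TAM": tam, "SAM": sam, "SOM": som}

# Example: 50,000 potential customers at $1,200/year,
# 40% serviceable, 5% obtainable in the first few years.
sizing = bottom_up_sizing(50_000, 0.40, 0.05, 1_200)
for tier, value in sizing.items():
    print(f"{tier}: ${value:,.0f}")
```

The top-down and comparable methods produce their own numbers through different paths; the value of the agent is comparing all three, not trusting any one of them.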

Agent 2: Trend analysis. This agent maps market trends, technology shifts, regulatory changes, and macroeconomic factors that affect the opportunity. It is looking for tailwinds (forces that help you) and headwinds (forces that hurt you). A market that is growing because of a temporary trend is very different from one growing because of a structural shift.

Agent 3: Adjacent market scanner. This agent looks at markets that border yours. Adjacent markets reveal expansion opportunities, potential competitors who might enter your space, and business model patterns that have worked in similar contexts.

The output of Wave 1 is a market landscape report. It tells you how big the opportunity is, whether it is growing or shrinking, and what external forces are shaping it.

Wave 2: Competitive intelligence

Wave 2 takes the landscape from Wave 1 and maps who is already operating in it. This is where competitive analysis with AI goes deep.

Agent 4: Direct competitor profiling. This agent identifies and profiles companies that solve the same problem for the same customer. For each competitor, it extracts pricing, positioning, features, funding, team size, and growth signals. Not just “they exist” but “here is exactly what they offer and how they position themselves.”

Agent 5: Indirect competitor mapping. This agent finds companies that solve the same underlying problem in a different way. If you are building a project management tool, indirect competitors include spreadsheets, email, Slack channels, and in-person standup meetings. These are the alternatives your customers use TODAY, before they find you.

Agent 6: Customer sentiment analysis. This agent mines reviews, forums, social media, and support discussions about existing solutions. What do customers love? What do they hate? What features do they ask for that nobody builds? The gaps in customer satisfaction are your positioning opportunities.

The output of Wave 2 is a competitive landscape map with battle cards. Each competitor gets a card with their strengths, weaknesses, pricing, and the customer segments they serve well (and poorly).

Wave 3: Customer deep-dive

Wave 3 zooms in from the market level to the customer level. Who exactly are these people, and what do they actually need?

Agent 7: Customer segmentation. This agent identifies distinct customer segments within your target market. Not “small businesses” but “bootstrapped SaaS founders with less than $10K MRR who do their own marketing.” Specific enough that you could find these people and talk to them.

Agent 8: Pain point mapping. This agent catalogs the specific problems, frustrations, and unmet needs within each customer segment. It prioritizes by severity (mild annoyance vs. burning pain) and frequency (occasional vs. daily). The intersection of severe and frequent is where your product should live.

Agent 9: Willingness-to-pay analysis. This agent estimates what each segment would pay for a solution. It looks at current spending on alternatives, budget constraints, and price sensitivity indicators. This directly feeds into pricing decisions and helps you figure out which segment to target first.

The output of Wave 3 is a customer map: segments ranked by attractiveness (pain severity times willingness to pay times reachability).
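That ranking formula is easy to make concrete. A minimal sketch, with hypothetical segments and made-up 1-to-5 scores standing in for the agents' actual findings:

```python
# Hypothetical segment data; pain severity, willingness to pay,
# and reachability scored on a 1-5 scale.
segments = [
    {"name": "bootstrapped SaaS founders", "pain": 5, "wtp": 3, "reach": 4},
    {"name": "VC-backed seed startups",    "pain": 3, "wtp": 5, "reach": 3},
    {"name": "agencies managing clients",  "pain": 4, "wtp": 4, "reach": 2},
]

def attractiveness(seg: dict) -> int:
    # Pain severity times willingness to pay times reachability.
    return seg["pain"] * seg["wtp"] * seg["reach"]

ranked = sorted(segments, key=attractiveness, reverse=True)
for seg in ranked:
    print(seg["name"], attractiveness(seg))
```

A multiplicative score is deliberate: a segment that fails badly on any one dimension (unreachable, broke, or only mildly annoyed) sinks to the bottom even if the other two dimensions are strong.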

Wave 4: Synthesis and strategy

Wave 4 is where everything comes together. The previous 9 agents produced raw intelligence. Wave 4 agents synthesize it into actionable strategy.

Agent 10: Opportunity scoring. This agent takes all findings from Waves 1-3 and scores the opportunity across multiple dimensions: market attractiveness, competitive intensity, customer accessibility, and founder-market fit. It produces a single scorecard that tells you whether to proceed, pivot, or kill the idea.

Agent 11: Strategic recommendations. This agent generates specific strategic recommendations. Which customer segment to target first. How to position against competitors. What features to build in V1. What pricing model to start with. These are not generic platitudes. They are specific recommendations backed by the research from previous waves.

The output of Wave 4 is a strategic brief: your go/no-go decision plus a concrete plan for how to enter the market if you proceed.

Why waves matter more than parallel execution

You might wonder why the research is organized in sequential waves instead of running all 11 agents simultaneously. The answer is dependency.

Wave 2 needs Wave 1. You cannot do a meaningful competitive analysis without first understanding the market boundaries. Is this a $50M niche or a $5B market? The answer changes which competitors matter and how you evaluate them.

Wave 3 needs Wave 2. Customer segmentation is sharpened by understanding what existing solutions serve which segments well. If a competitor dominates enterprise clients, your customer research should focus on the segments they underserve.

Wave 4 needs everything. Synthesis without all the inputs is just speculation with extra steps.

Within each wave, agents CAN run in parallel. That is the performance advantage. Wave 1’s three agents all work simultaneously. Wave 2’s three agents all work simultaneously. But the waves themselves are sequential because later research is informed by earlier findings.
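The scheduling logic above fits in a few lines. This is a structural sketch, not the actual implementation: `run_agent` is a stand-in for a real LLM or research call, and the point is the shape, where each wave's agents run concurrently while waves themselves run in order, each receiving all earlier findings.

```python
import asyncio

async def run_agent(name: str, context: dict) -> dict:
    # Placeholder for an actual research/LLM call that can read
    # the findings accumulated by earlier waves.
    await asyncio.sleep(0)
    return {name: f"findings informed by {sorted(context)}"}

async def run_waves(waves: list[list[str]]) -> dict:
    findings: dict = {}
    for wave in waves:                       # waves are sequential
        results = await asyncio.gather(      # agents within a wave are parallel
            *(run_agent(agent, dict(findings)) for agent in wave)
        )
        for result in results:
            findings.update(result)
    return findings

waves = [
    ["market_sizing", "trend_analysis", "adjacent_markets"],      # Wave 1
    ["direct_competitors", "indirect_competitors", "sentiment"],  # Wave 2
    ["segmentation", "pain_points", "willingness_to_pay"],        # Wave 3
    ["opportunity_scoring", "strategic_recommendations"],         # Wave 4
]
findings = asyncio.run(run_waves(waves))
print(len(findings))  # one entry per agent
```

The `await asyncio.gather(...)` inside the loop is the whole trade-off in one line: you get parallelism where there are no dependencies and ordering where there are.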

What 11 agents produce that 1 prompt cannot

Let me give you a concrete comparison. I ran both approaches on the same startup idea: an AI-powered pricing optimization tool for indie SaaS founders.

Single-prompt result: A 1,500-word summary that mentioned “the SaaS market is large and growing,” listed 4 competitors I already knew about, and suggested “targeting small to mid-size SaaS companies.” Nothing I could not have written myself in 20 minutes.

11-agent result: A 12,000-word research package with specific market sizing ($340M addressable market for pricing tools, $45M serviceable for indie SaaS), 14 competitors mapped with pricing and positioning, 4 distinct customer segments with different pain points and willingness-to-pay, a gap analysis showing that no existing tool handles usage-based pricing well for sub-$50K MRR companies, and a recommendation to target that specific segment with a specific pricing model.

The difference is not marginal. It is the difference between “sounds interesting” and “here is exactly who to sell to, what to charge, and why they will switch from their spreadsheet.”

What each agent actually does under the hood

Each agent follows a specific research protocol. It is not just “search for X.” It is a structured process.

The market sizing agent, for example, runs three estimation methods independently and then compares them. If the top-down and bottom-up estimates differ by more than 3x, it flags the discrepancy and investigates why. Maybe the market definition is too broad. Maybe the assumed average revenue per customer is wrong. The investigation of discrepancies often produces the most valuable insights.
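The 3x discrepancy check is trivial to state in code. A sketch, with hypothetical estimates (the $2B vs $500M tension mentioned earlier in the article):

```python
def flag_discrepancy(top_down: float, bottom_up: float,
                     threshold: float = 3.0) -> bool:
    """Return True when the two estimates differ by more than `threshold`x."""
    hi, lo = max(top_down, bottom_up), min(top_down, bottom_up)
    return hi / lo > threshold

# Hypothetical estimates: top-down says $2B, bottom-up says $500M.
print(flag_discrepancy(2_000_000_000, 500_000_000))  # 4x apart -> True
```

When the flag fires, the agent does not average the two numbers; it investigates which assumption (market definition, revenue per customer) is producing the gap.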

The customer sentiment agent does not just pull review summaries. It categorizes complaints by theme, tracks sentiment trends over time (is a competitor getting worse?), and identifies feature requests that appear across multiple competitors (indicating unmet market demand, not just one company’s failing).

The opportunity scoring agent uses weighted criteria, not gut feeling. Market attractiveness might be weighted at 30%, competitive intensity at 25%, founder-market fit at 25%, and customer accessibility at 20%. The weights can be adjusted based on the founder’s priorities, but the point is that the scoring is systematic and transparent.
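Using the example weights from the paragraph above, the scorecard reduces to a weighted sum. The dimension scores here are hypothetical inputs; in practice they would come from the findings of Waves 1-3.

```python
# Weights from the text; scores (0-10) are made-up illustrative inputs.
weights = {
    "market_attractiveness":  0.30,
    "competitive_intensity":  0.25,  # scored so higher = less crowded
    "founder_market_fit":     0.25,
    "customer_accessibility": 0.20,
}

scores = {
    "market_attractiveness":  8,
    "competitive_intensity":  5,
    "founder_market_fit":     7,
    "customer_accessibility": 6,
}

total = sum(weights[d] * scores[d] for d in weights)
print(f"Opportunity score: {total:.1f} / 10")
```

Because the weights and per-dimension scores are explicit, a founder can see exactly which dimension dragged the score down, and argue with it, instead of receiving an unexplained verdict.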

Building this yourself vs. using a tool

You can replicate this architecture manually. Write 11 different prompts, run them in sequence, feed outputs from earlier waves into later prompts. It works. It takes a few hours instead of a few weeks.

Or you can use a tool that has the architecture already built. When I was testing 5 startup ideas in one week, the multi-agent research ran automatically for each idea. Same structure, same depth, different input. That consistency is what makes structured research valuable: the process does not degrade because you are tired or excited.

The market research wave is one phase of a larger AI startup strategy framework that covers the full journey from idea to validated plan. The research feeds into competitive analysis, business model decisions, financial projections, and go-to-market planning.

Common mistakes with AI market research

Even with a multi-agent approach, there are ways to get bad results.

Treating AI research as the final answer. AI market research is a starting point, not a conclusion. It gives you hypotheses to test, not facts to rely on. The customer segment analysis tells you who might be your best customer. Then you go talk to those people and find out if the AI was right.

Not questioning the numbers. AI-generated market sizing should be treated as an estimate with wide error bars. If the agent says the market is $500M, the real number could be anywhere from $200M to $1B. The directional signal matters. The exact number does not.

Skipping the synthesis wave. Running 9 research agents without the synthesis wave gives you a pile of data. Data without interpretation is noise. The synthesis agents force you to reconcile conflicting findings, prioritize segments, and make decisions.

Using research to justify a decision you already made. This is the most common mistake. You run the research hoping it will confirm your idea is great. When it surfaces problems, you dismiss them. That defeats the entire purpose. Let the research change your mind. That is what it is for.

The output that matters

After all 4 waves, the output that actually drives your next steps is not the full research package. It is the strategic brief from Wave 4:

  • Go/No-go recommendation with supporting evidence
  • Target segment ranked by attractiveness
  • Positioning statement based on competitive gaps
  • V1 feature priorities based on unmet customer needs
  • Pricing starting point based on willingness-to-pay analysis
  • Key risks with mitigation strategies

Everything else is supporting documentation. Important for reference, but the strategic brief is what you actually use to make decisions.

Try it on your idea

The full multi-agent research architecture described in this article is built into an open source AI skill. You point it at your startup idea, and it runs all 4 waves automatically. Same structure, same depth, every time.

If you have an idea you have been thinking about but never properly researched, this is the fastest way to get real data instead of opinions.


startup-skill is free and open source: github.com/ferdinandobons/startup-skill


Ferdinando Bons

Building tools for startup validation