Growth Strategy

How to Validate a SaaS Idea Before Writing a Single Line of Code

A pre-build validation playbook for indie devs: landing page tests, manual prototypes, pre-orders, customer conversations, and the signals that tell you to build or stop.

13 min read

Key Takeaways

  • The only meaningful validation signal is a payment attempt or a structured conversation that reveals the person has already tried to solve the problem and failed. Everything else is flattery.
  • A landing page with a real buy button and 300 visitors tells you more than six months of building. Measure email signup rate: 5% or above is a real signal, under 1% means the positioning needs rethinking before any code ships.
  • Do the thing your software would automate before you automate it. If nobody will pay you to do it manually, nobody will pay for software that does it automatically.
  • Reading 100 App Store or G2 reviews for your closest competitor is the fastest market validation research available. The top complaints are your feature brief. If those complaints don't match what you planned to build, you have a product/market mismatch before writing a single line.

I've built enough products to know the pattern. You have an idea that feels sharp. You can see the interface in your head. You start sketching database schemas. Six months later, you ship to nobody, because the people you thought wanted this either didn't have the problem as badly as you assumed, or had already solved it well enough with something else.

The antidote isn't longer planning. It's spending two to three weeks running the specific tests that cut through your own wishful thinking and tell you whether strangers will actually pay for this thing. Here's how I'd run that process.

Why do most SaaS ideas fail before they launch?

They fail because the founder conflated interest with intent to pay. "That's a great idea, I'd totally use that" is the most dangerous sentence in product development. It costs the listener nothing to say it, and it tells you almost nothing useful. What you need to know is whether someone has the problem badly enough to reach for their wallet.

Most validation efforts stop at the wrong checkpoint. A hundred Twitter replies to your concept post, a Hacker News thread that gets 40 upvotes, a survey with 80 responses all saying the idea sounds useful: none of these are validation. They're applause, and applause doesn't pay server bills.

The tests below are designed to manufacture the one signal that actually counts: a payment attempt, or the next closest thing to it.

The "would you pay right now" test

This is the first test I run, and it costs nothing except the discomfort of asking people directly.

Find ten people who have the problem your product solves. Not your developer friends, not people in startup communities. People who experience the pain your product addresses, in their daily work or life. Then describe what you're building in one sentence and ask: "I'm selling early access for $29 right now. Here's the link. Do you want it?"

You will learn more from that question in a week than from three months of discussing the idea abstractly. The gap between the people who say "yes definitely, send it over" and the people who actually click the payment link is your real conversion rate. In my experience, most ideas that sound universally appealing at the concept level convert at 5 to 10 percent when you introduce a real price. Ideas with genuine traction convert at 30 percent or higher among warm audiences who have the specific problem.

The goal isn't to collect $290 in pre-orders. The goal is to observe who hesitates and why. The people who say "not quite yet" or "maybe when it has X feature" are giving you the real objections your product will face. Listen to what they add after "but."

Landing page tests with paid traffic

The "would you pay" test works well when you have direct access to people who have the problem. A landing page test works when you don't.

Build a single-page site. Not a full product website, just one page: the problem you solve at the top, three to five specific features or outcomes in the middle, a clear price, and an email signup or buy button at the bottom. The page should take a weekend to build. Use Carrd, Framer, or a simple Next.js template.

Then drive 200 to 500 targeted visitors to it. The cheapest way to do this with reasonable targeting is Meta ads or Reddit ads, depending on your audience. Budget $50 to $100. Target by job title, interest category, or the subreddits where your potential customers discuss the problem. You're not trying to acquire customers yet. You're measuring whether your positioning resonates with people who have never heard of you.
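The arithmetic behind that budget is just visitors times cost per click. A quick sketch, with illustrative CPC figures only (real costs vary widely by platform, audience, and season):

```python
def budget_needed(target_visitors: int, cost_per_click: float) -> float:
    """Ad budget required to hit a visitor target at a given CPC."""
    return target_visitors * cost_per_click

# Hypothetical CPCs -- check your own ad platform estimates before committing.
for cpc in (0.20, 0.35, 0.50):
    print(f"${budget_needed(300, cpc):.0f} for 300 visitors at ${cpc:.2f}/click")
```

At $0.20 to $0.50 per click, 300 visitors costs roughly $60 to $150, which is why the $50 to $100 range only works with careful targeting.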

The number to watch is email signup rate, or buy-now click rate if you've put a real price on the page. Here's how I read those numbers:

  • Above 5%: Real signal. The positioning is landing with cold traffic, which means the pain is acute and the framing is clear. Worth continuing.
  • 1 to 5%: Ambiguous. The idea might be right but the page isn't converting well. Try a different headline or problem framing before drawing conclusions.
  • Under 1%: Stop or pivot. Either the positioning is wrong, the audience is wrong, or the problem isn't acute enough to generate action from people who don't know you. Any of those is a problem you need to solve before writing any application code.

One thing I want to be direct about: under 1% is not a failure of your marketing. It's a data point about the idea. Plenty of founders respond to a low-converting landing page by redesigning it endlessly rather than questioning the premise. If you've tested three or four different headlines and the conversion rate stays under 1%, the market is telling you something.
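The decision bands above can be written down as a small function. This is a sketch of the rules of thumb from this section, not universal constants; tune the thresholds and the minimum sample size for your own audience and traffic source:

```python
def read_signal(signups: int, visitors: int) -> str:
    """Classify a landing page test using the thresholds described above."""
    if visitors < 200:
        return "not enough traffic yet -- keep the test running"
    rate = signups / visitors
    if rate >= 0.05:
        return "real signal -- worth continuing"
    if rate >= 0.01:
        return "ambiguous -- test a different headline or framing"
    return "stop or pivot -- question the premise, not just the page"

print(read_signal(18, 300))  # 6% signup rate -> real signal
```

The minimum-traffic guard matters: 3 signups from 40 visitors looks like 7.5%, but the sample is too small to read anything into.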

Manual prototypes: do it with people before you do it with code

Airbnb's founders rented out air mattresses in their apartment before they built a platform to let other people do the same. Zappos's founder took photos of shoes in local stores and posted them online, then bought the shoes at full price and shipped them when orders came in, before building any inventory system. The pattern is deliberate: do the thing your software would automate, manually, before you build the automation.

This approach answers a question that landing page tests don't: is the core value exchange real? A page might convert because the headline is compelling. A manual prototype tests whether you can actually deliver what you're promising, and whether customers value it enough to pay when they've experienced it.

The way to apply this to a SaaS idea is to ask: what is the core output my product delivers? Then deliver that output manually to five real customers. If you're building a tool that generates personalized email sequences from a customer profile, do it by hand in a Google Doc. If you're building a competitive intelligence tool, pull together the research manually and present it. Charge them for it.

If you can find five customers who pay you to do the manual version, you have confirmed two things at once: the problem is real enough to pay to solve, and the output is valuable enough that people want it. If you can't find five paying customers for the manual version, building the software version will not fix that.

Structured customer conversations

Five real conversations are worth more than 500 survey responses. Survey responses tell you what people think you want to hear. Conversations, run correctly, tell you what the problem actually costs them.

The trap most founders fall into is asking predictive questions: "Would you use a tool that did X?" "How much would you pay for something like this?" These questions generate garbage data because people are bad at predicting their own future behavior. They'll tell you they'd pay $49/month and then balk at $19.

Run the conversations differently. Ask about the past, not the future:

  1. "Tell me about the last time you ran into this problem. What were you trying to do?"
  2. "What did you try to solve it? What happened?"
  3. "What did you do next?"
  4. "What does this problem cost you, in time, money, or frustration?"
  5. "What would it mean for you if this problem just went away?"

You are not pitching your solution during these conversations. You're mining for evidence that the problem exists, that it's painful enough to act on, and that existing solutions are failing. If the person you're talking to can't recall a specific recent instance of the problem, or if their current workaround is "fine, actually," those are signals worth taking seriously.

After five conversations, look for patterns. If three of the five people described the problem in nearly identical language, that's the language to use in your positioning. If four of the five mentioned the same broken workaround, that's your primary competitor. If none of them could describe a recent instance of the problem, that's a red flag about whether the market is real.

Competitor review mining

This is the validation method most people skip, and it's one of the most useful ones available. Reading 100 App Store or G2 reviews for the closest existing competitor to your idea tells you more about what the market actually wants than any survey you could run.

Open the competitor's listing. Sort reviews by lowest rating. Read every one-star and two-star review that has at least three sentences in it. Copy the ones that describe a specific unmet need or a consistent frustration into a doc. After 100 reviews across two or three competitors, tag them by theme. The themes that appear most frequently are your feature brief.
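Once the reviews are tagged, the tallying is mechanical. A minimal sketch with made-up review snippets and theme tags (in practice you build this list by hand while reading the one- and two-star reviews):

```python
from collections import Counter

# Hypothetical tagged reviews: (snippet, theme). Your tags come from reading.
tagged = [
    ("Can't use it on the subway", "no offline mode"),
    ("CSV export silently drops rows", "broken export"),
    ("No idea which plan I'm actually on", "confusing pricing"),
    ("Lost my work with no connection", "no offline mode"),
    ("Export to Excel mangles dates", "broken export"),
    ("Offline support, please", "no offline mode"),
]

themes = Counter(theme for _, theme in tagged)
for theme, count in themes.most_common():
    print(f"{count}x  {theme}")
```

The output, sorted by frequency, is the feature brief: the themes at the top are what the market is already asking for.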

What you're looking for is whether the complaints match what you planned to build. If the top complaints from competitor customers are "no offline mode," "export is broken," and "pricing is confusing," and your product solves all three: that's validation. The market has told you what it wants, in the words of real frustrated users, before you've written a line.

If the top complaints don't match your hypothesis at all, that's equally useful. It means you have a product/market mismatch based on what real customers are actually frustrated about, and you should either adjust your roadmap or reconsider whether this is the right gap to fill.

For a deeper breakdown of how to run this process systematically, the guide on doing competitor analysis with free tools covers the full method, including where to find reviews beyond the App Store and how to structure your findings.

GrowthMap

Ready to stop guessing and start growing?

Get a personalized growth playbook built on real competitor data, live SEO metrics, and actual outreach targets. 14 sections. $29 one-time.

Get My Playbook $29 one-time. 14-day money-back guarantee.
GrowthMap: Find your first 1,000 customers

The four signals that mean keep building

After running these tests, you'll have a mix of signals pointing in different directions. Here's how I decide what counts:

1. Someone tried to pay you. Not "I'll definitely buy this when it's ready," but an actual payment attempt or a completed pre-order. Even one real payment from a stranger carries more weight than a hundred enthusiastic responses from people who didn't open their wallet.

2. Someone described their problem in terms that exactly match your solution hypothesis. In a customer conversation, without you prompting it, someone used language that sounded like you'd written it yourself: the same pain point, the same broken workaround, the same outcome they're after. That means you've correctly identified the problem, which is harder than it sounds.

3. The top competitor complaints are exactly what you planned to fix. Your review mining surfaces three to five consistent complaints across competitor customers. Your product directly addresses all of them. This is the best pre-build signal available because it's grounded in data from real paying customers, not hypothetical interest.

4. You found a community where people actively ask for this solution. A subreddit, a Slack group, a forum where people are already asking "does anyone know a tool that does X?" and the answers are "nothing good exists yet." That community is your launch audience, and their question confirms the gap.

You don't need all four. Two or three, with at least one being a real payment attempt or a specific competitor complaint match, is enough to justify building. One enthusiastic conversation is not.
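That decision rule can be stated as code. This is my paraphrase of the threshold above, not a formula with any deeper justification: at least two signals, and at least one of them must be a strong one.

```python
def should_build(signals: set) -> bool:
    """Two or more of the four signals, at least one of them strong."""
    strong = {"payment attempt", "competitor complaint match"}
    return len(signals) >= 2 and bool(signals & strong)

print(should_build({"payment attempt", "community demand"}))    # True
print(should_build({"community demand", "matching language"}))  # False: no strong signal
print(should_build({"payment attempt"}))                        # False: only one signal
```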

The three signals that mean stop

These are harder to sit with, but they're the most useful things these tests can produce.

1. Nobody could recall a recent time they actually faced the problem. In your customer conversations, you asked "tell me about the last time you faced this problem" and people hedged. "Oh, I guess sometimes when..." or "I could see how this might be useful for..." are not problem descriptions. They're politeness. If five people can't recall a concrete recent instance of the problem you're solving, the problem may not be acute enough to drive purchasing behavior.

2. The only people excited are other developers. This is the most common false positive in indie SaaS validation. Other builders are an enthusiastic, accessible audience who love discussing ideas, give detailed feedback, and will encourage you to ship. They are almost never your actual customer. If the only people who got genuinely excited about your landing page or your pitch are developers in startup communities, you have not reached your market yet, and you may not have a market.

3. The competitors have already solved the core problem and users are satisfied. Review mining should reveal this. If the competitor's reviews are mostly four and five stars, with one-star reviews complaining about minor interface issues rather than fundamental gaps, the market is reasonably well-served. Building a me-too product into a satisfied market requires either significantly better execution or significantly lower pricing, and neither of those is an easy path as a solo founder.

How to de-risk the bet before you write a line

There's a fourth step I've added to my own pre-build process that compresses a lot of this research into a shorter timeframe. Before building a competitor to any existing product, I run a GrowthMap report on that product.

What comes back is the actual competitive review data at scale, the SEO landscape (whether there's real search demand in the category), the audience profile built from real data, keyword opportunities, and the feature gaps between the target and its competitors. That's the core of a market validation research pass, done in about 10 minutes.

The review mining piece is particularly useful at this stage. GrowthMap pulls real App Store and competitor reviews and surfaces the consistent complaint patterns. If the complaints match what you planned to build, you have a competitor-sourced feature brief confirmed before you've touched your code editor. If they don't match, you've saved yourself months.

Running that report on the adjacent product you're thinking of competing with is the fastest way I know to answer the question, "is there actually a gap here, and is it the gap I think it is?" before committing to six months of building.

The hardest part of pre-build validation isn't the tests themselves. It's staying honest about what the results are telling you. A landing page that converts at 0.4% is telling you something specific. Customer conversations where nobody describes a recent instance of the problem are telling you something specific. The founders who build things people actually pay for are the ones who treat those signals as data rather than obstacles to work around.

Run the tests. Read what comes back. If the signals are green, build fast. If they're not, be grateful you found out now.

Frequently Asked Questions

How do I know if my SaaS idea is worth building?

Someone tried to pay you, or described their problem in terms that exactly match your hypothesis, or the top competitor complaints map directly to what you planned to build. Two or three of the four 'keep building' signals, with at least one being a payment attempt or a competitor complaint match, is a reasonable threshold. One enthusiastic conversation is not.

What is the fastest way to validate a SaaS idea?

Ask ten people who have the problem to pay you $29 right now for early access. Not 'would you use this,' but an actual payment link. The gap between people who say they're interested and people who click the payment link tells you where you stand before you've written anything.

How much should I spend on paid traffic for a validation test?

Between $50 and $100 is enough to send 200 to 500 visitors to a landing page if you target carefully. That sample is sufficient to measure whether your core positioning resonates. Spending more before you understand why people are or aren't converting is just burning money.

How do I run a customer discovery conversation that actually produces useful data?

Never ask 'would you use this?' Instead, ask them to tell you about the last time they tried to solve this problem. What did they do? What broke down? What did they wish existed? Five conversations structured this way produce better signal than 500 survey responses asking about hypothetical feature preferences.

Can competitor reviews really validate my idea?

Yes, and they're underused for this purpose. One hundred App Store or G2 reviews for your closest competitor will show you the consistent complaints customers have. If those complaints match what you planned to build as a solution, that's a market gap confirmed by real frustrated users, not your assumptions.

saas validation, indie developer, idea validation, pre-launch, solo founder, startup validation
Jordan Kennedy

Founder, GrowthMap

Founder of GrowthMap. I build indie products (Balance Pro, Limelight, GrowthMap) and help solo founders find their first 1,000 customers using data instead of guesswork.

