There's a pattern showing up in the AI discourse: SaaS is in trouble. Coding agents — Cursor, Claude Code, Lovable, Bolt — have made it possible for small teams to build internal tools that would have required a vendor contract two years ago. Some people think this is the end of the SaaS model. Others think it's overblown. The debate has real stakes, and both sides have smart people on them.
I'm not going to argue the macro thesis. What I can do is share two things we built at our company recently that probably would have been SaaS products in a different era. The intent wasn't to avoid SaaS. The intent was to solve specific operational problems. The tools we built are almost a side effect of that — we reached for coding agents the way an earlier version of us might have reached for a vendor evaluation.
Make of that what you will. Here's what we built.
Tool 1: The Research Bot
We sell smart eyewear. Multiple product lines — sports, optical, safety, fashion. When a retailer, distributor, or partner asks us a question, the right answer depends on a lot of variables: which product line they're asking about, what market they're in, what their customer profile looks like. The answer for an industrial safety distributor is not the answer for a retail optician, even if they're asking roughly the same question.
The old way: a salesperson manually researches, searches internal docs, pulls specs, and writes a custom response. This takes time. The quality depends on who happens to be handling the request that day. And at any reasonable volume, it doesn't scale.
The obvious SaaS path was an AI-powered knowledge base. We looked at Guru, Notion AI, Glean — genuinely good products. The problem wasn't the products themselves. The problem was that our questions weren't generic knowledge base questions. They were product-specific, market-specific, customer-segment-specific. The routing logic that separates "this is a sports line question for a B2B buyer in industrial safety" from "this is an optical question for a direct-to-consumer retail partner" would have required significant custom configuration on top of any off-the-shelf platform. And then we'd own the ongoing maintenance of that configuration, plus a per-seat subscription for a narrow internal use case.
What we built instead: a purpose-built internal research bot, wired directly into our own product data. The architecture isn't complicated. The key decision was what to build against. We already had structured product data — specs, FAQs, pricing, positioning, market-specific materials. The bot is a routing and retrieval layer on top of data we were already maintaining. We didn't create a new knowledge management problem to feed a platform. We built something that uses the source of truth we already had.
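To make the "routing and retrieval layer" concrete, here is a minimal sketch of what that shape of logic looks like. Every name in it — the product lines, segments, and the toy knowledge table — is an illustrative assumption, not our actual data model; the real bot routes into live product data and hands the retrieved slice to a language model.

```python
from dataclasses import dataclass

# Hypothetical product lines; treat every name here as illustrative.
PRODUCT_LINES = {"sports", "optical", "safety", "fashion"}

@dataclass
class Inquiry:
    text: str
    product_line: str   # e.g. "safety"
    market: str         # e.g. "industrial"
    segment: str        # e.g. "b2b_distributor"

# Toy stand-in for the structured product data we already maintain,
# keyed by (product_line, market, segment). In reality this is a
# database, not a dict.
KNOWLEDGE = {
    ("safety", "industrial", "b2b_distributor"):
        "ANSI-rated frames, bulk pricing tiers, distributor spec sheets.",
    ("optical", "retail", "dtc_partner"):
        "Prescription-insert compatibility, retail display guidance.",
}

def route_and_retrieve(inquiry: Inquiry) -> str:
    """Route an inquiry to the matching slice of product data."""
    if inquiry.product_line not in PRODUCT_LINES:
        raise ValueError(f"unknown product line: {inquiry.product_line}")
    key = (inquiry.product_line, inquiry.market, inquiry.segment)
    context = KNOWLEDGE.get(key)
    if context is None:
        return "No segment-specific material; fall back to general docs."
    # In the real bot, `context` feeds an LLM prompt; here we just
    # return the retrieved slice to show the routing decision.
    return context
```

The point of the sketch is the shape: the hard part was never retrieval, it was deciding which slice of already-maintained data a given question should resolve against.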
The SaaS version of this would have been: pick a platform, upload documents, train it, maintain it, pay per seat, negotiate the enterprise tier when our usage grows. What we have instead: a tool that does exactly what we need, integrates with data we control, and costs us engineering time rather than a subscription.
The tradeoff is real. We spent time building it. We own the maintenance. There's no customer support number to call. But there's also no vendor dependency, no seat negotiation, and no mismatch between what the tool does and what we actually need it to do.
Tool 2: The Exchange Program
Returns and exchanges in consumer products are supposed to be solved problems. Returnly, Loop, AfterShip Returns — these are mature products with good UX and solid integrations. The standard flow works: customer requests a return, a portal opens, a label gets generated, a refund gets issued. Thousands of Shopify stores run on exactly this.
Smart eyewear with prescription lenses is not a standard consumer return.
The flow has branches that don't exist in typical e-commerce. Some customers have prescription inserts, which are custom-manufactured and not returnable the same way a frame is. Exchanges depend heavily on what part of the product is being exchanged — the frame, the electronics, or the lens system. Some requests that look like returns are actually upgrades: a customer wants a newer model, not a refund. Warranty claims have a completely different resolution path than preference returns. And the combinations multiply.
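The branching described above can be sketched as a small decision function. The branch names, their order, and the inputs are all illustrative assumptions, not our actual return policy — the sketch only shows why a standard returns portal, which effectively has one path, can't express this.

```python
from enum import Enum, auto

class Resolution(Enum):
    STANDARD_RETURN = auto()   # refund, restock
    PARTIAL_EXCHANGE = auto()  # swap one component, keep the rest
    UPGRADE_CREDIT = auto()    # credit toward a newer model
    WARRANTY_REPAIR = auto()   # defect path, no refund involved

def resolve(has_rx_insert: bool, component: str,
            wants_upgrade: bool, is_warranty: bool) -> Resolution:
    """Pick a resolution path. Branch order here is illustrative."""
    if is_warranty:
        # Warranty claims never enter the preference-return flow.
        return Resolution.WARRANTY_REPAIR
    if wants_upgrade:
        # Looks like a return, is actually an upgrade.
        return Resolution.UPGRADE_CREDIT
    if has_rx_insert and component == "lens_system":
        # Custom-manufactured inserts can't re-enter stock,
        # so only the non-custom components are exchangeable.
        return Resolution.PARTIAL_EXCHANGE
    return Resolution.STANDARD_RETURN
```

Each branch then needs its own communication flow and integrations, which is where the off-the-shelf tools ran out of road.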
Every major returns SaaS we evaluated assumed a standard e-commerce flow. That's not a criticism — it's the right call for most of their customers. But for us, handling the prescription-insert branch alone required logic that would have landed us in enterprise custom-tier territory. We'd be paying for a platform to handle 20% of our cases correctly, and then duct-taping workarounds for the rest.
So we built our own exchange program — custom logic for each resolution path, its own communication flows, its own integrations. The first version wasn't polished. It took longer to build than spinning up a SaaS tool would have. Those are real costs and I'm not going to pretend otherwise.
But it handles our actual cases. All of them. And when our flows change — new product lines, new policies, new edge cases — we change the tool. We don't open a support ticket with a vendor, wait for a feature request to get prioritized, or find out that what we need is locked behind a tier we're not on.
What This Actually Means
We're not anti-SaaS. We run Shopify, HubSpot, NetSuite, Klaviyo. These are the right tools for their domains, and I'd be an idiot to build any of them from scratch. The difference is those platforms solve general problems well — problems that are common enough that investing in a great off-the-shelf solution makes obvious sense.
The question we ask now before starting a vendor evaluation: how much of this tool's surface area do we actually need? If the answer is less than 30%, and we have coding capacity, building is often the faster and cheaper path over a 12-month horizon. That math didn't use to work. Building was slow, expensive, and required dedicated engineering resources.
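The 12-month comparison is simple enough to write down. All the numbers below are placeholders — seat counts, rates, and hours vary wildly — but the shape of the calculation is the point: a subscription scales with seats and months, while a build is mostly an upfront cost plus ongoing maintenance.

```python
def build_vs_buy(seats: int, price_per_seat_month: float,
                 build_hours: float, hourly_rate: float,
                 maint_hours_month: float, months: int = 12) -> str:
    """Compare the cost of subscribing vs. building over `months`.

    Inputs are placeholders; this is the shape of the comparison,
    not a claim about real prices.
    """
    buy_cost = seats * price_per_seat_month * months
    build_cost = (build_hours * hourly_rate
                  + maint_hours_month * hourly_rate * months)
    return "build" if build_cost < buy_cost else "buy"
```

With, say, 20 seats at $50/month against a 40-hour build plus 5 hours of monthly maintenance at $100/hour, the build wins over a year; drop to a couple of seats and the subscription wins. Coding agents move the answer by shrinking `build_hours`, which is the only term they touch.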
Coding agents changed the math. Not because they make building free — it still takes time and ongoing maintenance. But because the gap between "evaluate a vendor" and "have a working prototype" has compressed dramatically. What used to take a sprint or two now takes a few hours to a day. The bar for build vs. buy has moved.
The SaaS-is-dying thesis is probably too strong. General-purpose tools that solve common problems well still have enormous value. But the "always buy before you build" default? That one is dead. Or at least it should be retired and replaced with an actual decision framework.
Where We Go From Here
We're still figuring out where the line is. These two projects made sense to build. The next decision might go the other way — there are things we use SaaS for today that I have no intention of replacing, and there will be future problems where a vendor is clearly the right call.
The point isn't "build everything." The point is that the question is worth asking now in a way it wasn't two years ago. The tools exist to make building fast. The cost model has shifted. The decision deserves a real analysis instead of a default.
We asked. Twice, it came back: build. Here's what we built.