Infrastructure or Prompt: What AI App Builders Actually Deliver in 2026
A Note on Expertise
I'm not writing as an "expert" or claiming to have all the answers. I'm a builder sharing my journey on what worked, what didn't, and what I learned along the way. The tech landscape changes constantly, and with AI tools now available, the traditional notion of "expertise" is evolving. Take what resonates, verify what matters to you, and forge your own path. This is simply my experience, offered in the hope it helps fellow builders.
Ten years ago, when WordPress was the default and static HTML pages were still common, there was a class of seller who would walk into a small business and say: I can build you a website. You will be online. You will exist on the internet.
The owner would pay. The website would get built. It would be technically correct. It would have a domain, hosting, a homepage, a contact form. From the seller's side, the job was done.
Then nothing would happen.
Nobody would visit the site. The owner would not know how to update it. They would not know that you had to verify the site with Google Search Console so the search engine could find and index it properly. They would not know what Google Analytics was. They would not know that a website without distribution is just a URL printed on the back of a business card. They would think they had done the digital marketing thing, because they had paid for the digital marketing thing.
Ten years on, the slogan has changed but the structure of the lie has not. The new version is "you can build your own app, no developer needed." The new buyer pays the new tool. The new tool produces the new artifact. And the new buyer is left holding the same thing the WordPress buyer held in 2014: a product that exists, technically, and does almost nothing for them in practice.
I want to talk about why this keeps happening, what is actually getting sold, and what the chain from "built" to "ready" actually looks like once you have been on the inside of it.
What these tools have actually got better at
Before I criticise the slogan, I want to be fair about what these tools have achieved. The state of AI app builders in 2026 is genuinely better than it was eighteen months ago, and pretending otherwise would be dishonest.
Lovable will generate a full TypeScript and React application with Supabase wired up for authentication, database, and file storage, in a single prompt. Bolt gives you a real Node.js runtime in the browser with framework choice across React, Vue, Svelte, and more. v0 produces production-quality React components. Replit Agent will build, deploy, and now even patch security vulnerabilities in your dependencies automatically. Base44 will build full applications and let you export the frontend code on a paid plan.
The deployment story is largely solved. Most of the major tools handle hosting, domains, SSL, and basic CI out of the box. The source code story is improving. Lovable does two-way GitHub sync. Bolt exports cleanly. Even basic SEO scaffolding is now common, things like meta tags, sitemaps, and structured data templates.
These are real improvements. I use AI-generated code every day. The platforms I am building, including the one this blog runs on, would not exist on the timelines they exist on without these tools.
So the critique that follows is not "these tools are useless." It is "the slogan is selling a 100% solution, the tools generously cover around 40% of what the buyer actually needs to have a working product in the world, and that gap is not advertised anywhere."
Infrastructure or prompt: which one are you actually getting?
There is a distinction worth naming because it explains why the gap exists in the first place.
A piece of software can offer something as infrastructure or as a prompt-based feature. Infrastructure runs whether the user asked or not. The agent has no choice. It is part of the system. Prompt-based features only run when someone asks for them, in language specific enough for the agent to act on.
Almost everything outside of build and deploy on AI app builders is prompt-based. Tests, prompt-based. Sitemap submission, prompt-based. Analytics goals, prompt-based. Auth rules beyond the defaults, prompt-based. Replit's CVE patching is one of the few things that has been promoted from prompt-based to infrastructure, which is exactly why it stands out as a counter-example.
The non-technical buyer cannot prompt for things they do not know exist. They cannot ask the system to submit a sitemap if they have never heard of Google Search Console. They cannot ask the system to harden auth against a specific edge case if they have never thought about that edge case. The whole class of work that requires the buyer to know what to ask for is, by definition, the work the buyer cannot get out of these tools.
There is a subtler version of this problem too. Even when the tool does handle something for you, the moment you want to change it, you are usually back in expert territory. Take auth, which is the wall everyone hits. The first prompt creates an auth setup and it works. The second prompt, the one where you want the password reset flow to check a custom rule, or restrict signup to a specific email domain, or add a second factor for admin accounts, is where the user-friendly veneer cracks. The agent can do the change, in theory. It assumes the user can describe the change in language the agent will translate correctly. It assumes the user can verify the agent did not silently break something else. Both of those are technical skills.
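To make that concrete: the email-domain restriction above is only a few lines of logic once someone can state it precisely. Here is a minimal sketch, assuming a hypothetical `isSignupAllowed` check and a placeholder domain, rather than any specific platform's auth hook (the hook names vary by platform):

```typescript
// Hypothetical signup rule: only allow addresses from an approved domain.
// This is the logic, not a specific platform's API.
const ALLOWED_DOMAIN = "example.com"; // assumption: the buyer's company domain

function isSignupAllowed(email: string): boolean {
  const at = email.lastIndexOf("@");
  if (at < 1) return false; // malformed address, no local part or no @
  return email.slice(at + 1).toLowerCase() === ALLOWED_DOMAIN;
}
```

The hard part is not these ten lines. The hard part is knowing to ask for exactly this, and then verifying that the agent wired the rule into every signup path, not just the one in the demo.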
So the slogan is selling a system that handles a fixed set of things well, and treats every variation, edge case, and second-order need as a prompt away. For technical users, that is fine. For non-technical users, the moment their needs deviate from the default flow, they are stuck. They paid for "no developer needed" and now they need a developer to describe the change clearly enough that the agent can make it, and to verify the change after it is made.
This is not a knock on the agent. It is a knock on the marketing that pretended the agent's prompt interface was the same thing as a finished product.
Building is one step out of five
When I think about what it actually takes to have a working product live in the world, I count five distinct stages. They have nothing to do with whether you wrote the code yourself or had an AI generate it. They are the same chain regardless.
Build. Generate the code, the screens, the pages. AI builders do this well in 2026.
Test. Verify the thing actually works. Unit tests, integration tests, end-to-end tests, lint checks, accessibility audits. AI builders deliver almost none of this by default.
Deploy. Get it on real infrastructure. Domain, hosting, SSL, environment variables, secrets, CI pipeline, monitoring. Most major AI builders now handle this. Real progress.
Promote. Make sure the world can find it. Search engine indexing, sitemap submission, analytics setup, search console verification, ongoing content, distribution. AI builders generate scaffolding. They do not do the work that turns scaffolding into traffic.
Maintain. Keep it alive. Security patches, dependency updates, content refreshes, regression fixes when the platforms underneath you change, third-party API changes. Replit has started automating CVE patching, which is genuinely new. The rest of the maintenance story is still on the buyer.
Generously, the slogan covers around 40% of the chain well. The buyer assumes they have bought all five stages. The other 60% is still there, waiting to be done by someone, and "no developer needed" is doing a lot of work to hide that.
What you actually get for your money
A friend showed me what his partner had built using one of the AI app builder tools that has been heavily marketed lately. He sent me the result. It looked like a website. It had a layout, sections, what looked like buttons.
I looked at it for about thirty seconds.
None of the buttons did anything. Not because they were broken. Because there was nothing behind them. The output was static HTML and CSS. Visually it was a product. Functionally it was a screenshot pretending to be one.
That is the bad-day outcome. On a good day, the same class of tool will produce a working full-stack application with auth, a database, and a real backend. The variance is the problem. The same prompt, on the same platform, in the hands of two different users, can produce a working app or a brochure pretending to be one. The buyer has no good way to evaluate which they got before they pay.
This is structurally different from buying any other kind of software. When you buy a Stripe integration or a CRM, you know what you are getting. When you buy "an app built by AI from your prompt," the result is a probability distribution. Most of the time you get something. Sometimes you get something that works. Almost never do you get something that is, by professional standards, finished.
I am not naming the tool, because it does not matter. The pattern is the pattern. Different tool next month, same shape.
Why the testing story matters more than people realise
Of the stages the slogan handles poorly, testing is the one I want to spend time on, because it is the cleanest, most current critique. It is also the one I can speak to from inside the work.
I have been writing software for a long time. I am the technical co-founder, the developer, the QA, the tester, the deployer, the operator, all in one person. And I will tell you honestly: I do not always remember to ask for the tests.
The current generation of AI coding assistants, including the model I am writing this with, is extraordinarily capable. They will build you a feature, wire it into your stack, generate the migration, and ship it. They will do all of that without writing a single test, because you did not ask. The same is true of every major AI app builder. None of them ship test suites by default. The output is a working feature with no automated way to verify it stays working.
Here is the part most people do not realise. AI testing is a thriving product category in 2026, but it is a separate category. Diffblue, TestSprite, BaseRock, GitHub Copilot Testing, mabl. There are at least a dozen serious tools that will generate unit tests, end-to-end tests, integration tests, coverage reports, and CI integrations. They are good. They are also a separate purchase, separate setup, and separate workflow from the AI builder that wrote the original code.
The buyer who paid the no-developer slogan does not know any of this exists. They were told the tool would build their app. The tool built their app. They were not told there is a second category of tool they need to buy and set up to make sure the first tool's output keeps working. They will not know they needed it until something breaks.
If a senior engineer who has been at this for years has to actively remember to ask for tests, what does a non-technical buyer get when the tool ships them code without any of it?
They get code with no safety net. The product appears to work because the demo flow happens to be the only flow that has been run. The day a real user does something the demo did not anticipate, it breaks. There is no test that would have caught it because there is no test, period. The buyer has no way to even diagnose what happened, because diagnosis requires a level of system understanding the slogan promised they would not need.
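To make "no test, period" concrete: the missing safety net can be as small as a handful of assertions against logic the demo flow never exercised. A sketch, with a hypothetical discount function standing in for whatever the app actually computes:

```typescript
// Hypothetical business rule an AI builder might generate: a percentage
// discount applied to an order total, rounded to cents.
function applyDiscount(total: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error("discount must be between 0 and 100");
  }
  return Math.round(total * (1 - percent / 100) * 100) / 100;
}

// The demo flow only ever runs applyDiscount(100, 10). These are the
// cases a real user hits on day one, and no default output checks them.
console.assert(applyDiscount(100, 10) === 90);
console.assert(applyDiscount(19.99, 0) === 19.99);
console.assert(applyDiscount(0, 50) === 0);
```

Three assertions is not a test suite, but it is the difference between finding out from a check and finding out from a customer.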
Scaffolding is not promotion
A defender of these tools will tell you, fairly, that they now generate sitemaps, meta descriptions, structured data, sometimes even custom domain support. That is true. The scaffolding is there.
But scaffolding is not promotion, and this is where the gap between technical and non-technical buyers gets really wide.
A sitemap on your server is a file. It is only useful if it has been submitted to Google Search Console and the search engine has crawled and indexed your pages. A meta description is a string in your HTML. It is only useful if your page is being shown in search results, which only happens if the page has authority, which only happens through links, content, distribution, and time. Structured data is markup. It is only useful if Google's validator says it is well-formed and your content actually merits rich snippets in the first place.
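To underline how thin the scaffolding layer is: generating a sitemap is a few lines of string formatting. A sketch with placeholder URLs; everything that makes the file useful (Search Console verification, submission, crawling, indexing) happens outside this code entirely:

```typescript
// Generating the sitemap is the trivial part. The URLs here are
// placeholders; the real work is what happens to this file afterwards.
function buildSitemap(urls: string[]): string {
  const entries = urls
    .map((u) => `  <url><loc>${u}</loc></url>`)
    .join("\n");
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    entries,
    "</urlset>",
  ].join("\n");
}
```

Any of the builders can emit this. None of them can make a search engine care about it.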
The non-technical buyer who was told "the tool generates SEO" thinks SEO is done. The non-technical buyer who has never registered for Google Search Console, never submitted a sitemap, never set up Google Analytics, never run a Lighthouse audit, never written a meta description that an AI summariser would actually quote, has had none of the work done. The tool delivered a sitemap to a buyer who has no idea what a sitemap is for.
This is not a fault of the tools, exactly. It is a fault of the slogan. The slogan promises a complete product. The tool, generously, delivers infrastructure that a knowledgeable user could turn into a complete product. If the buyer were knowledgeable, they would not be the target of the slogan in the first place.
I genuinely do not know how the tools are supposed to fix this. Either they include a long, careful explanation of what the buyer still needs to do, which most non-technical buyers will skim, get bored by, or skip entirely. Or they do not, and the buyer assumes the work is done. There is no path that closes the gap inside the product. The gap lives in the buyer's head, and you cannot ship a software update to that.
The maintenance problem nobody warns you about
Maintenance is the stage with the longest tail, and the one where the tools have moved most recently.
Replit deserves real credit here. Their Security Agent, launched in 2026, automatically detects when a CVE is announced against any dependency in your project, prepares and tests a patch, and emails you a one-click apply link. That is genuine maintenance automation. None of the other major tools have caught up to this yet, and it is the first time I have seen one of these platforms take ownership of any meaningful slice of the post-deployment chain.
But Replit's Security Agent, even at its best, only covers one slice of maintenance. The slice it covers, dependency CVEs, is important. The slices it does not cover are at least as important.
When the third-party API your app depends on changes its contract, no AI builder will catch that for you. When a browser update breaks a CSS feature you depended on, no tool will tell you. When the platform you deployed to deprecates a runtime version, you have a window to migrate, and someone has to actually do the migration. When your content goes stale, no agent rewrites it. When the search engine algorithm changes and your traffic drops twenty percent, no platform notifies you, and no one is going to recover the traffic except a person who knows how.
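The third-party-contract problem is also the easiest of those slices to check mechanically, if someone knows to write the check. A hedged sketch: a validator over a response shape a hypothetical app depends on, which fails loudly the day the upstream API renames a field, instead of letting undefined leak into the UI. To my knowledge, no current builder ships anything like this by default.

```typescript
// Hypothetical shape the app was built against. If the upstream API
// renames `price` to `unitPrice`, this throws at the boundary instead
// of silently rendering broken data.
interface Product {
  id: string;
  price: number;
}

function assertProduct(raw: unknown): Product {
  const obj = raw as Record<string, unknown>;
  if (typeof obj?.id !== "string" || typeof obj?.price !== "number") {
    throw new Error(`upstream contract changed: got ${JSON.stringify(raw)}`);
  }
  return { id: obj.id, price: obj.price };
}
```

Writing this takes minutes. Knowing that you need it, and where to put it, is exactly the system knowledge the slogan says you can skip.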
A built-and-deployed product is not a finished product. It is the start of a maintenance commitment that runs for as long as the product exists. The buyer who paid the no-developer slogan price is, by definition, not the person who can take on that commitment. So one of two things happens. Either they pay someone every time something breaks, which is the model the slogan promised they would not need. Or they let the product rot.
If the current generation of these tools froze right here and never improved another inch, the long-term result would be the internet turning into a cemetery for half-built web apps and abandoned mobile products. Beautiful URLs returning 500 errors. Stale Let's Encrypt warnings. Apps in the store last updated when their creator still believed the slogan. I am not predicting this will happen. I am pointing out that nothing about the current setup prevents it. The tools are not contractually obligated to keep getting better. The buyers are not equipped to maintain the products on their own. If the tools stop where they are, the cemetery is what the math produces.
I already see the early version of this. Sites that were sold as products three years ago, sitting at the same domain, still online but unmaintained, slowly going stale. The owner does not even know. They paid for the product once. They thought it was done.
You may not own what you think you own
There is one more thing worth naming, because it is invisible at the moment of purchase and very visible later.
Source code access on these platforms is tiered, and the tiers do not always mean what the buyer assumes. Lovable does two-way GitHub sync, which is the gold standard. Bolt exports cleanly. Replit gives you the project files. Base44 lets you export the frontend code on a paid plan, but the backend stays on Base44's infrastructure. If you ever decide to leave, you are leaving with half the product.
The buyer reading the marketing copy does not catch this. The marketing says "you own your code" and that is technically true for some of the code, on some of the plans, with caveats. The buyer who never reads the small print thinks they bought a product. They bought a product that runs on the platform's infrastructure for as long as the platform decides to keep running it that way.
This is a long-tail risk. The tool may be excellent today. The tool may shut down in three years, change its pricing, get acquired by a company with different priorities. If the buyer does not have a complete export of the application, they have a beautiful URL that one day returns a redirect to a sales page for whatever the platform got rebranded into.
Why the slogan persists
It works because it sells a feeling. The feeling is "I have entered the digital economy without needing to learn anything." That feeling is genuinely valuable. People want it. They are willing to pay for it. The tools that promise to deliver it are tapping into a real desire.
The dishonest part is that delivering the feeling and delivering the product are two different things. The slogan delivers the feeling. The buyer leaves the transaction believing they have entered the digital economy. The tool builder leaves the transaction with the money. The actual entry into the digital economy, which requires testing and ongoing promotion and ongoing maintenance and a real understanding of how each of those works, has not happened, and the gap between feeling and reality might not become visible for months.
By then, the buyer has either accepted that the product does not work and quietly given up, or has gone looking for someone to fix it and found out, the slow way, that the slogan was selling 40% of the chain and pricing it like 100%.
What this looks like from the inside
I want to be clear that I am not against the tools themselves. As I said earlier, I use AI-generated code every single day, and the platforms I am building would not exist on their current timelines without it. I am the obvious counter-evidence to the claim that AI coding tools are useless.
But I am not the buyer the slogan is pitched at. I am a technical founder who knows the chain from build to test to deploy to promote to maintain, who knows which of those stages the tool is helping with and which it is not, and who is therefore in a position to use the tool well. The slogan is not pitched at me. It is pitched at the small business owner, the side hustler, the person who wants their idea to exist online without learning the chain.
For that person, the tool delivers the build stage well, the deployment stage reasonably, the promotion stage in name only, and the maintenance stage in tiny pieces. The buyer thinks all five are handled. They are not. And I do not think the right response is to tell that buyer to go learn it all. There is too much, and most of them do not want to be in this industry. They just want a product. The right response is to be honest about what the tools actually do and where the gaps are, so the buyer can make an informed decision about whether to pay someone for the rest of the chain or accept the limitations of what the tool gave them.
The work that comes after the slogan
There is a side effect to all of this that is worth naming, because it changes the shape of the next few years.
The work of fixing broken AI-built apps is going to be a real category. Every cohort of buyers who paid for the slogan and ended up with a half-finished product is a future client for someone who knows the chain end to end. A lot of those buyers will eventually need a person, and the person will probably look like a developer, or someone who specialises in the rescue work specifically.
I am not making a prediction about whether developer jobs go up or down in aggregate. I am making an observation about a specific kind of work that is going to exist, because the math of how many half-finished apps these tools are putting into the world demands it. The slogan does not eliminate the need for technical knowledge. It defers it, raises the cost of the deferral, and lets the buyer discover the bill in stages.
What I would tell someone considering one of these tools
Not gatekeeping. Just naming what is in the package.
Assume you are paying for around 40% of the chain. Build is real. Deployment is mostly real. The rest is partial or scaffolded. Factor in the cost of getting the rest done before you decide the tool is cheap.
Click every button before you trust it. Submit every form. Output quality varies wildly. The same tool that produces a working app on Monday can produce a brochure pretending to be one on Tuesday.
Ask whether the tool gives you the full source code, including the backend. Frontend export is now common. Backend export is rarer. If the platform owns the backend, they own the product. You are renting.
Ask whether the tool ships tests. As of 2026, the answer from every major AI builder is none. That means you will need to buy or operate a separate AI testing tool. If you do not, your product has no safety net.
Plan for ongoing work even if you do not plan to do it yourself. Maintenance is going to happen one way or another. Either you do it, or you pay someone to, or the product rots. Pick the path you want before the day the SSL certificate expires and you are working out who to call.
Treat scaffolding as scaffolding, not as the building. A sitemap on your server is not search engine optimisation. A meta description is not promotion. The tool delivered the parts. Someone still has to assemble them and submit them to the right place. If you do not know where the right place is, the parts are decoration.
Treat the slogan as marketing. It is selling a feeling. The feeling is real. The product behind it is partial. Both things can be true. Knowing they are both true is the difference between an informed buyer and an unhappy one.
The honest version of the slogan
If I could rewrite the marketing copy on these tools, it would say something like this.
"You can generate around 40% of an application without a developer, mostly in the build and deployment stages. The other 60%, which includes testing, real promotion, ongoing maintenance, and the system knowledge to run any of those, is its own job. Some of it can be automated with separate tools. Most of it will eventually be handled by a person, and that person will probably be a developer or someone who specialises in that kind of work."
It would not sell as well. That is exactly why the current slogan exists.
I will keep using these tools. I will keep recommending them to people who can use them well. I will also keep telling everyone else that the slogan is selling a feeling, the feeling is genuine, and the product they are getting is one chunk of the journey, sold as the whole thing.
Built is not ready. It never has been. The marketing just got better at pretending otherwise.
This is part of a series about building products as a solo founder. Earlier posts cover marketing in 2026, the other half of shipping, and the night my own login broke. More coming.
About the Author
Alireza Elahi is a solo founder building products that solve real problems. Currently working on Havnwright, Publishora, and the Founder Knowledge Graph.