Built something on Lovable or Bolt? Here's what to do next.

Rhys Williams
25/03/2026
vibe-coding · lovable · bolt · app development

You've got a working prototype from a no-code AI tool. Now what? How to get it production-ready without starting over.

You spent a few weeks with Lovable, Bolt, v0, or something similar. You described what you wanted. You iterated. You clicked around in a thing that looks like real software and does most of what you imagined. Maybe you've already shown it to potential customers. Maybe a few of them are already using it.

Now you want to know what it would take to actually launch it.

This is the question we're getting more often, and it's a good question. These tools are genuinely useful, and what you built is a real starting point. The question is what needs to change before the thing is in front of real users with real expectations.

What these tools are actually good at

Lovable, Bolt, v0 and similar tools produce good UI code. React components, Next.js pages, Tailwind layouts, forms, dashboards. This is the part of software development with the most training data and the most standardised patterns, and it shows in what these tools produce. The frontend of a well-prompted AI prototype is often genuinely usable with minimal changes.

They're also good at the happy path. The main user flow, the thing the app is supposed to do when everything goes right, is usually well-handled. Buttons connect to actions, forms submit data, screens navigate in the right order. If you were testing whether an idea works, this is exactly what you needed.

And they're fast. The ability to go from "here's my idea" to "here's a working interface" in days rather than months is real. That's not a small thing when you're validating an idea and don't want to spend $50,000 to find out it doesn't fly.

The problems are elsewhere, and they're mostly predictable.

What almost always needs fixing before launch

Authentication and data isolation

This is the one that causes the most serious problems in production. AI-generated auth typically implements the visible part correctly: users log in, sessions are created, routes are protected. What it usually gets wrong is the data layer.

In a multi-user app, every database query needs to be scoped to the user or organisation making the request. This means: when a user asks for their records, the query fetches records belonging to them, not all records of that type. In vibe-coded apps, this scoping is frequently missing. The route checks that you're logged in. The query returns everyone's data.

This isn't a minor issue. It's the kind of thing that results in one customer being able to see another customer's information, either through a bug or through deliberate probing of the API. Fixing it is usually not complicated, but it has to happen before the app is in front of anyone who might look for it.
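As a minimal sketch of the difference, here's the pattern using an in-memory array in place of a real database. The names are illustrative; in a real app the same ownership filter belongs in the SQL or ORM query itself (or in row-level security policies), not in application memory.

```typescript
// Illustrative only: `rows` stands in for a database table.
type Row = { id: number; ownerId: string; body: string };

const rows: Row[] = [
  { id: 1, ownerId: "alice", body: "alice's note" },
  { id: 2, ownerId: "bob", body: "bob's note" },
];

// The common vibe-coded shape: the route checks you're logged in,
// but the query itself returns every row of this type.
function listRowsUnscoped(): Row[] {
  return rows;
}

// The fix: every query is filtered by the authenticated user's id.
function listRowsForUser(userId: string): Row[] {
  return rows.filter((r) => r.ownerId === userId);
}
```

The unscoped version looks identical from the logged-in user's own screen, which is why the bug survives casual testing. It only shows up when someone requests data with another account's identifiers.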

Database structure and migration history

How your data is structured matters more than almost anything else about the application. The tables, the relationships between them, the fields on each table: these are the decisions that everything else is built on.

Lovable and Bolt make these decisions based on what you described in your prompts. They don't ask detailed questions about how your business works, what edge cases you need to handle, what you might need to report on in twelve months. They make reasonable guesses and build from them.

Those guesses are often good enough for a prototype. For production, "good enough" depends entirely on what the actual shape of your business is. If the data model misrepresents something important, every feature you build later has to work around that misrepresentation.

The other issue is migration history. Databases change as products evolve: tables get added, columns get renamed, indexes get created. The way to manage this safely is with a migration file for every change, run in order, so that any environment can be reproduced exactly. Many vibe-coded apps have no migration history: the schema exists in one place, production, and it got there through a series of changes nobody fully recorded. Adding migration tooling retroactively is doable, but it requires knowing what the current schema actually is and being careful about what you're codifying.
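A minimal sketch of what a migration history gives you. The table names and the runner here are illustrative; a real project would use a tool like Prisma Migrate, Drizzle Kit, or Flyway rather than hand-rolling this.

```typescript
// Illustrative: one entry per schema change, applied in order and
// recorded, so any environment can be rebuilt to the same schema.
type Migration = { id: string; sql: string };

const migrations: Migration[] = [
  { id: "001_create_users", sql: "CREATE TABLE users (id serial PRIMARY KEY, email text)" },
  { id: "002_create_orders", sql: "CREATE TABLE orders (id serial PRIMARY KEY, user_id int)" },
  { id: "003_add_org_id", sql: "ALTER TABLE users ADD COLUMN org_id int" },
];

// Given the ids already recorded in an environment, the runner knows
// exactly which changes still need to apply, and in what order.
function pending(applied: string[]): Migration[] {
  return migrations.filter((m) => !applied.includes(m.id));
}
```

This is the property a production database needs: the schema in any environment is the result of a known, ordered list of changes, not an accumulation nobody wrote down.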

Secrets in the wrong places

API keys, database credentials, Stripe keys, third-party service tokens. These need to live in a secrets manager or in environment variables that are never committed to the repository. In many AI-generated projects, at least some of these end up in the codebase.

Once a secret has been committed to a repository, you don't know who has it. Even if the repository is private, it may have been forked, cloned, or previously public. The safe assumption is that any committed secret is compromised. This isn't alarmist: it's the basis of good practice. Every committed credential gets rotated before anything else happens.
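The application side of the fix is small: read secrets from the environment at startup and fail fast when one is missing. A sketch, with illustrative variable names:

```typescript
// Illustrative: secrets come from the environment (or a secrets
// manager that populates it), never from the codebase. Failing fast
// at startup beats a cryptic error on the first payment attempt.
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At boot, e.g.: const stripeKey = requireEnv(process.env, "STRIPE_SECRET_KEY");
```

The other half is hygiene: `.env` files listed in `.gitignore`, and rotation of anything that has ever been committed.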

Error handling and observability

When something goes wrong in your production app, two things need to happen. The user should see something sensible instead of a crash or a blank screen. And you should know about it, with enough information to understand what happened and where.

AI tools tend to handle the happy path well and leave the error paths either empty or generic. "Something went wrong" is the typical fallback. In production, that's not enough. You need to know which request failed, what the error was, what the user was doing, and whether it's happening once or continuously.

Logging, error tracking (something like Sentry), and graceful error handling are the basics. None of them are complicated to add, but they're usually not there in the initial export.
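The shape of the fix can be sketched as an error boundary around each request handler. Everything here is illustrative; a real setup would forward the error to a tracker like Sentry where the `console.error` is.

```typescript
// Illustrative: the user gets a sensible generic response, while the
// log captures which request failed and with what error.
type HandlerResult = { status: number; body: string };

async function withErrorHandling(
  requestId: string,
  handler: () => Promise<HandlerResult>
): Promise<HandlerResult> {
  try {
    return await handler();
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    // Enough context to answer: which request, what error.
    console.error(JSON.stringify({ requestId, message }));
    return { status: 500, body: "Something went wrong. The team has been notified." };
  }
}
```

The point isn't the wrapper itself; it's that every failure produces two artefacts, a sensible screen for the user and a structured record for you, instead of neither.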

Hosting and infrastructure

Lovable and Bolt apps need to live somewhere in production. Often the AI tool has a built-in hosting option that works fine for a prototype, but comes with constraints around custom domains, environment configuration, scaling, and cost that make it the wrong choice for a real product.

Getting the app onto proper infrastructure (Vercel, Fly.io, AWS, or something else, depending on what the app is) involves setting up environment variables correctly, connecting the database properly, setting up a deployment pipeline so that changes can be shipped without manually uploading files, and making sure the infrastructure configuration is version-controlled and repeatable.

The conversation we have with founders who bring us an export

The first thing we want to do is run it locally. Not look at screenshots of it, actually check it out and get it running on a machine in our office. Whether we can do that without archaeology is immediately informative.

Then we look at the five things that tell us most of what we need to know.

Where are the secrets? Can it run locally? What does the database look like and does it have a migration history? Does auth scope data to the right user at the query level? And does the data model actually reflect how the business works?

The answers to those questions determine the path forward.

If the data model is sound, we're usually in good shape. We can address the security issues, add the infrastructure, fill in the error handling, and build from there. The cost is real but predictable.

If the data model is wrong in ways that matter, the conversation is harder. Not because there's no path, but because the path usually involves migrating the data to a better structure before anything else happens, and that's a significant piece of work. It's also the piece of work that pays for itself the most clearly: every feature you build on a bad data model costs more than it should, and that overhead compounds.

The frontend code usually survives. AI tools are good at this part, and it would be wasteful to throw away working UI if the underlying logic can be fixed. We'll keep whatever's genuinely usable, which is often more than founders expect.

What "production-ready" actually means

It's worth being specific about this because the term gets used loosely.

Production-ready means the app works correctly when more than one person is using it at the same time. It means user data is isolated from other users' data. It means credentials are properly managed and rotated on a schedule. It means when something breaks, you know about it before your customers call you. It means you can make a change and test it before it hits the live app. It means you can reproduce your environment if something goes wrong with your hosting.

It doesn't mean perfect. It doesn't mean all features are complete. It doesn't mean you've solved every scaling problem you might ever have. Production-ready is a floor, not a ceiling.

Most vibe-coded exports are not at that floor yet. Most of them can get there without being rebuilt from scratch. The question is what specific work is needed, and that requires looking at what's actually there.

What to bring to the conversation

If you're thinking about bringing your Lovable or Bolt export to a developer, here's what to have ready.

Access to the codebase, or the ability to export it. This means the actual code, not just the deployed URL. Most of these tools give you a way to export or connect to a GitHub repository.

An honest account of where the app currently lives, what it's connected to, and what credentials it uses. Not a polished version, just the truth. We're not going to judge the setup; we need to understand it.

A description of what the app does, who uses it, and what's missing from it. The more specific the better. "It needs to handle payments" is less useful than "users need to be able to subscribe to a monthly plan and cancel at any time."

And a sense of your timeline and priorities. There's usually a difference between "I need this to be secure before I go any further" and "I have fifty customers waiting and I need new features." Both are valid starting points and they lead to different conversations about what to do first.

Bring us your export

We work with founders who've built something real and want to take it seriously. Whether that means a full assessment of what needs to happen before launch, help with a specific part of the stack, or taking the whole thing forward as a development engagement, the starting point is the same: we have a look at what's there.

If you've built something with Lovable, Bolt, v0, or any other AI tool, and you want an honest read on what it would take to get it to production, bring it to us. We'll tell you what we see.

Book a free chat with Code Workshop. Bring the export, or a link to the repo. We'll go from there.