Your developer is vibe coding too. Here's why it's different.
AI writes a lot of my code. Probably more than you'd expect. Here's why that's different from doing it yourself.
On my most active project right now, I barely see the code. There's a lead agent, a designer agent, a frontend agent, a QA agent, a scrum master agent. I talk to the lead agent, it breaks the work into tasks, the others pick them up and execute. I review, I redirect, I make architectural calls, but I'm not the one writing most of the lines. By most definitions of the term, I am vibe coding.
When founders ask whether they really need a developer if AI can just build the thing for them, it depends on what you mean by "build the thing."
The AI doesn't have scars
Most of what I know about software development isn't knowledge I looked up. It's things I watched go wrong.
I know to scope every database query to the tenant who made the request. Not because I read it somewhere, but because I've seen what happens when you don't. One customer's data bleeds into another's and you spend a night trying to figure out how far back it goes.
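The lesson can be made mechanical instead of a habit: route every query through a helper that demands the tenant id, so the unscoped version of the query simply isn't reachable. A minimal sketch, with illustrative table and column names that aren't from any real project:

```python
import sqlite3

# Illustrative data: two tenants sharing one table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, "acme", 100), (2, "acme", 250), (3, "globex", 900)],
)

def invoices_for(tenant_id: str) -> list:
    # The tenant filter is baked into the helper, so callers can't
    # accidentally run the query without it.
    return conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(invoices_for("acme"))    # [(1, 100), (2, 250)]
print(invoices_for("globex"))  # [(3, 900)]
```

The point isn't the helper itself; it's that the scar turned into a structure, not a thing you remember to do.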
I know to require authentication at the data layer, not just the route. I know not to put secrets in the frontend bundle. I know not to leave a database port world-accessible, even temporarily, even in staging.
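"At the data layer, not just the route" means the permission check lives next to the data access, so a route that forgets its middleware still can't leak anything. A minimal sketch, assuming a toy user model and document store invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    org: str

# Stand-in for a real document store.
DOCUMENTS = {
    "doc-1": {"org": "acme", "body": "quarterly numbers"},
    "doc-2": {"org": "globex", "body": "roadmap"},
}

class Forbidden(Exception):
    pass

def get_document(user: User, doc_id: str) -> dict:
    # The check sits beside the data access. Even if a new route
    # skips its auth middleware, this function still refuses to answer.
    doc = DOCUMENTS[doc_id]
    if doc["org"] != user.org:
        raise Forbidden(f"{user.id} may not read {doc_id}")
    return doc

alice = User(id="alice", org="acme")
print(get_document(alice, "doc-1")["body"])  # quarterly numbers
```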
None of those are things I consciously apply like rules from a checklist. They're reflexes. The AI doesn't have those. It has training data, and training data contains descriptions of mistakes, not the experience of being there when one unfolds at 2am.
The AI has never been on a 2am incident call. That's the actual difference.
What it produces is plausible, well-structured code that misses the shape of the mistake. It looks right. It passes a casual review. It works in testing. And then, under some combination of conditions nobody anticipated, it doesn't.
What I actually let my agents do
I keep hard limits that I don't move on.
AI doesn't touch production environments. It doesn't touch the production database: no reads, no writes, no schema migrations. It doesn't touch AWS or any credentialed environment without me watching. It can use a browser via Playwright, but only under supervision.
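Limits like these hold better when they're enforced in code rather than remembered. One way to sketch it, with hypothetical hostnames: the only database entry point the agent tooling ever gets refuses anything that isn't an explicitly allowlisted dev host.

```python
# Hypothetical guardrail: agent tooling can only connect through this
# function, and the function only knows about dev hosts.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "db.dev.internal"}

def connect_for_agent(host: str) -> dict:
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"agents may not connect to {host}")
    # Stand-in for returning a real connection handle.
    return {"host": host, "role": "readwrite_dev"}

try:
    connect_for_agent("db.prod.internal")
except PermissionError as e:
    print(e)  # agents may not connect to db.prod.internal
```

A denylist would invert the logic and fail open on any host you forgot to list; an allowlist fails closed, which is the property you want when the fallback is a production database.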
These aren't rules I'm particularly proud of. They're what's necessary when you're the person who has to stand up on an incident call and explain what happened.
On some of my projects, legacy clients and older engagements, there are no agents at all. Just me using Claude in VS Code, case by case. I'm closer to the code there, but I'm still making the same calls: what do I let it touch, what do I review before it ships, what do I never hand off.
The limits are different for every project. The person setting them is always me.
Informed trust is not the same as blind trust
When I let an agent make a change, I'm trusting it within a system I designed, in a domain I understand. I know the architecture. I know the data model. I know where the sharp edges are. When something comes back and looks off, I know it looks off, because I have enough context to recognise the shape of a mistake even when I didn't write the code.
Honestly, I have no idea if every line my agents produce is any good. But I trust it within a system I can inspect, and I know what to look for when I do.
A founder who vibe-codes their product trusts it because it works. The app will probably work most of the time. It'll probably get most things right. But "probably" and "most of the time" aren't good enough when it's someone's payment data, or their health records, or the financial reports their accountant relies on. And you won't know which edge case it got wrong until something breaks.
AI probably writes code better than me in a lot of ways. More consistent, less fatigued, less likely to cut a corner at 6pm on a Friday. But I carry a conscious awareness of the things I can't audit as easily when I haven't written it myself, and I try to compensate for that.
The accountability gap
The real issue with a vibe-coded product isn't whether the code is good or bad. It's who's in the chain.
In my projects I'm still in the chain. I do code reviews. I set the policies. When something goes wrong, I'm the one who answers for it. The AI is a tool I'm wielding. If the tool slips, it's still my hand holding it.
In a vibe-coded startup with no developer in the loop, the person in the chain is nobody. Everything is fine until it isn't. And when it isn't, that responsibility lands somewhere: on the founder, on the customers, on whoever's data was in the system. It tends to land all at once.
Security breaches, data leaks, billing errors at scale. These are the incidents that come from systems that worked until they didn't, built by people who trusted the output because they had no way to evaluate it. That's not hypothetical. It happens.
Vibe coding isn't the problem. I'm doing it. Most developers are, whether they admit it or not.
It's more like working on your own car. You can absolutely do it: YouTube tutorials, forums, giving it a go. And a lot of the time it works fine. But when something goes wrong that you don't understand, you're stuck. You can't diagnose it. You don't know what you're looking at. You can ask the internet, but you don't know if the answer applies to your situation.
A mechanic with the same YouTube tutorial isn't just faster. They're reading the same instructions through twenty years of knowing what can go wrong, what the symptoms mean, what the shortcut is going to cost you in six months.
That's what a developer brings. Not the ability to write code without AI, nobody's really doing that anymore, but the ability to read what comes back, know when it's wrong, and be the person standing there when it isn't.
The question isn't whether AI wrote the code. It's whether anyone who knows what they're doing has looked at it.