I graduated in 2007. Computer science undergrad, then a master's, then a PhD in computer engineering. I've spent nearly two decades in this industry — as an engineer, as a manager, as someone who got away from the keyboard more than I wanted to during those management years, and as someone who's come back to it with a vengeance.
In that time, I've watched the "developers are going to be replaced" narrative cycle through generation after generation. Low-code platforms. Offshore outsourcing. No-code tools. Each time, smart people declared the end of the traditional software engineer. Each time, the job survived.
So when I say I think this time is genuinely different, I want you to understand I'm not saying it lightly.
How I Got Here
I started using GitHub Copilot right when it launched — around the same time ChatGPT was making noise. Before that, at IBM, we were already experimenting with Watson Code Assist, which was essentially an early internal cousin of what Copilot became. That was 2022. I was there for the awkward, clunky first version of all of this.
And yeah, it was awkward. The suggestions were often wrong. The context window was tiny. You'd get a completion that looked plausible and then completely fell apart two lines later. I remember thinking: this is interesting, but it's not transformative.
That changed with Sonnet 3.5. That was the first time I looked at what a model produced and thought — this is not just a tool helping me write code faster. This is something that genuinely understands what I'm trying to build. That was about a year ago. Things have moved fast since.
Call Me a Vibecoder
I've been called a vibecoder. Andrej Karpathy coined the term and I don't think he meant it as a slur, but let's be honest — it's become one in some circles. There's an implication that you're just winging it, prompting an LLM and hoping for the best, not really understanding what's happening under the hood.
I want to push back on that framing.
I consider myself a slightly above average programmer. I've worked with people who are genuinely exceptional — the kind of engineers who can hold an entire system architecture in their heads and reason about edge cases at 2am on a whiteboard. I'm not claiming to be that. But I've spent enough time building real systems, managing teams that ship real products, and thinking about how software connects to business problems that I know what "good" looks like.
And I'll tell you this: with Opus 4.6, we are dangerously close to having something that rivals the best programmers I've ever worked with. Not on every dimension. Not for pure algorithmic invention. But for the day-to-day work of building production software? It's remarkable.
The skill that's emerged — and I do think it's a skill — is knowing how to work with these agents. How to give them enough context, how to structure a problem so they can reason about it, how to recognize when they've gone off the rails, and how to pull them back. That's not nothing. That takes engineering judgment. Engineers who dismiss it as "not real programming" are going to find themselves behind.
Tests Are the New Source Code
Here's the insight that's been rattling around in my head lately.
Cloudflare recently cloned Vercel's flagship framework, Next.js. Vercel's CEO was publicly critical of it — cited security vulnerabilities, said it'd be hard to maintain. And look, he might be right about both those things. But here's what I think he won't admit publicly: Cloudflare was able to do it because Next.js is open source and the tests were available. The behavior was fully specified. A coding agent can reconstruct almost anything if the expected behavior is clearly defined.
This isn't hypothetical anymore. Claude with Opus 4.6 reproduced a C compiler that compiles Linux. Think about what that means. Not because it was trained on that exact codebase — but because the specifications, the expected behaviors, the interface contracts were all well understood and publicly available.
Now here's where it gets philosophically uncomfortable.
SQLite keeps its test suite private. On purpose. You can read every line of the SQLite source code, but you can't see the tests that define what correct behavior actually looks like. And I've been sitting with this question: is that still really open source? The tests are the spec. They're the ground truth. If you hide them, you make the project significantly harder to clone, fork, or improve with an AI agent. That might be the intent. But does hiding the tests undermine the open source principle, even if you're sharing the code?
I don't have an answer. But I think this is going to become a real debate in the next few years as more projects realize that their test suite is the crown jewel, not the implementation.
AI Slop and the DNA Argument
The other thing I keep coming back to is the dismissal of AI-generated code as "slop."
I get the concern. I've seen it. LLMs produce plausible-looking garbage. They repeat patterns. They miss nuance. They confidently produce solutions that don't actually work. It's real.
But here's the thing: the same criticism could technically be leveled at evolution.
Evolution is essentially an enormous amount of biological "slop" — random mutations, failed experiments, dead ends — running at scale over billions of years, filtered by a brutal test suite called survival. What came out the other end is genuinely complex, genuinely elegant, genuinely intelligent. Us.
I'm not implying that AI-generated code is going to spontaneously develop consciousness. But I am saying that the argument "it's messy and imperfect at the generation level, therefore it can't produce genuine complexity" is wrong. We are the counter-example. Nature's method is exactly that: generate, test, select, iterate.
If we let AI agents generate code, run it against rigorous test suites, select what works, and iterate — I think we're going to see emergent complexity that surprises us. The slop matters less than people think, if the filtering mechanism is strong enough.
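The generate-test-select-iterate loop can be sketched in a few lines. This is a deliberately toy setup (the target list and scoring are my invention, and random mutation stands in for an AI agent): generation is pure noise, yet a strict pass/fail test plus a simple selection rule is enough to converge on a correct artifact.

```python
import random

TARGET = [3, 1, 4, 1, 5]  # the behavior the "test suite" encodes

def passes_tests(candidate):
    # The test suite: an exact behavioral check, pass or fail.
    return candidate == TARGET

def mutate(candidate):
    # Generation step: a noisy, unguided edit -- the "slop".
    c = list(candidate)
    c[random.randrange(len(c))] = random.randint(0, 9)
    return c

def score(candidate):
    # Selection pressure: how many positions already match.
    return sum(a == b for a, b in zip(candidate, TARGET))

random.seed(0)
best = [0] * len(TARGET)
while not passes_tests(best):
    candidate = mutate(best)
    if score(candidate) >= score(best):  # select: keep what works
        best = candidate

print(best)  # converges to TARGET despite purely random generation
```

The generator never gets smarter; only the filter does the work. That's the point: with a rigorous enough test suite, the quality of any individual generation matters far less than the strength of the selection loop around it.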
Where This Actually Goes
Let me be direct about what I think happens to the software industry over the next decade.
The number of people who need to deeply understand code at the implementation level — the people who can read a kernel patch and immediately understand its implications — that group gets smaller. Not zero. Never zero. You will always need people who can operate at that level, especially when things go catastrophically wrong. But as a proportion of the overall industry, it shrinks.
What expands is software architects. People who understand systems well enough to direct agents, evaluate their outputs, recognize structural problems before they become production incidents, and make the hard calls about trade-offs. The skill set looks less like "can you implement a balanced binary search tree from scratch" and more like "can you design a system that's composable enough for agents to extend safely."
That's a real skill. It requires deep experience to do well. It's not easier than what came before — in some ways it's harder, because the surface area you're responsible for has grown dramatically.
I moved to Elastio partly because I saw this shift coming and I wanted to get my hands dirty again before the rules changed. I've spent 10 years managing teams. That taught me how to think about products and systems at a high level. I wanted to combine that with actual building, because I think the people who will do best in this next phase are the ones who can do both — who understand the code and the product simultaneously, who can work at the speed these tools enable without losing sight of what they're actually building.
One Last Thing
I wish Karpathy had called it something else. "Vibecoding" makes the practice sound casual and amateurish, even if that wasn't the intent. The actual practice — when it's done well — is thoughtful, iterative, and deeply informed by engineering experience. It just looks different from what we're used to.
Some people are going to adapt and build remarkable things with these tools. Secure things. Performant things. Things that ship and hold up in production. Those people are going to be terrifyingly productive.
The ones who refuse to engage because it doesn't look like "real" programming are going to find themselves increasingly on the outside of something important.
I've watched this industry for nearly two decades. I've never said "this time is different" before. I'm saying it now.