It's Not About the Ceiling, It's About the Floor
The New Baseline of Software Development Competence in the AI Era
If your engineering and product workflow looks basically the same as it did 18 months ago, you’re behind. Not falling behind. Already behind.
And if you’re moving faster than ever but haven’t stopped to ask whether you’re building the right thing for real people, you might be in worse shape than the team that’s slow.
There’s no shortage of signal about where things are going. Whether or not you believe every specific claim, the trajectory is clear, and the ceiling is rising fast. Boris Cherny, Head of Claude Code at Anthropic, shipped 22 PRs in a single day, every one of them 100% written by Claude. He hasn’t manually edited a line of code since November 2025. Thibault Sottiaux, who runs Codex at OpenAI, says his team is now drowning in code review because agents produce so much output so fast. Vercel’s v0 has 3 million users, and a huge chunk of them aren’t developers. They’re PMs and designers shipping production code through prompts. Cat Wu, Head of Product for Claude Code at Anthropic, argues the traditional PM playbook breaks entirely when model capabilities improve exponentially mid-project.
What these massive changes in workflow tell us is that the ceiling on how fast and effective product and software development can be is being raised exponentially right now. And if you’re paying close attention to what’s being published, you may be thinking you need to aim for that new ceiling: a new ideal for the whole development lifecycle in this new world.
But the ceiling isn’t your problem. The floor is. And the floor isn’t just about tools and speed. It’s about whether, in all this acceleration, you still know how to build things that matter to actual people.
The floor moved
There’s a new baseline for what it means to be competent as a PM or engineer. Not exceptional. Not bleeding-edge. Just competent. And a lot of people are still operating like it’s 2023.
We see this constantly. We meet with 5 to 10 prospective clients every week, and 85% of them are feeling the pain of this problem and looking for help. Teams where maybe one or two people have integrated AI into their actual workflow and the rest are kind of poking at it occasionally, or worse, treating it as someone else’s problem. The gap between “uses AI tools daily” and “tried ChatGPT once at a team offsite” is already massive. And strangely, it’s getting wider.
The thing is, nobody has yet written down what the new floor actually looks like. The ceiling gets all the blog posts. The new floor just quietly rises, the baseline shifts, and pretty soon you or your team are running last year’s processes with antiquated tools.
So let’s write it down.
For Engineers
The floor isn’t “writes code faster with AI.” It’s deeper than that.
AI is part of your daily workflow. Not sometimes. Every day. Boris Cherny describes a clear progression at Anthropic: first AI helps you write code, then it handles the tedious stuff entirely, then you’re orchestrating multiple agents in parallel. “I have never had this much joy day to day in my work,” he says, “because essentially all the tedious work, Claude does it, and I get to be creative.” If you’re still at step zero, writing every line by hand, you’re the developer equivalent of someone in 2010 who refused to use Stack Overflow on principle. Nobody was impressed by the purity then either.
You can plan and spec work for agents, not just for yourself. Cherny put it plainly: “Once there is a good plan, it will one-shot the implementation almost every time.” The bottleneck has shifted from writing code to deciding what to build. The skill that matters isn’t “good at prompting.” It’s the ability to decompose a problem clearly enough that an agent can execute it. Think of it as writing really good user stories, except the reader is tireless, literal, and has perfect recall of your codebase.
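To make that concrete, here’s a rough sketch of the kind of task description that tends to one-shot well. Every detail in it is hypothetical (the endpoint, the file paths, the stack); it’s only meant to show the shape:

Task: Add rate limiting to the public /api/export endpoint.
Context: Express app. Middleware lives in src/middleware/; config values belong in src/config/limits.ts. Reuse the existing Redis client.
Constraints: No new dependencies. Return HTTP 429 with a Retry-After header when the limit is hit.
Done when: Unit tests cover the limit window, an integration test covers the 429 path, and docs/api.md is updated.

Nothing clever. It’s the same discipline as a good user story: the goal, the context the agent can’t infer, the boundaries, and how you’ll both know it’s done.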
You review AI-generated code like it matters. Because it does. Thibault Sottiaux, who leads Codex at OpenAI, says his team’s biggest complaint right now is that there’s too much code to review. That’s not a humblebrag. It’s a real bottleneck. The developer who blindly ships agent output is worse than the developer who writes mediocre code by hand, because at least the second one understands what they shipped. The floor now includes the ability to critically evaluate code you didn’t write: catch the subtle bugs, notice architectural drift, know when the agent took a shortcut that’ll cost you two sprints next quarter.
You compound your work. Each cycle should make the next one easier. You document patterns. You build context that agents can reuse. Anthropic does this internally: Claude is improving Claude’s own scaffolding and toolchains. If you’re treating every task like a blank slate, you’re leaving the single biggest advantage on the table.
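What does “context that agents can reuse” look like in practice? One common pattern is a conventions file checked into the repo that the agent reads on every task. Claude Code, for example, automatically pulls a CLAUDE.md at the root of your project into context. The entries below are hypothetical, just to show the idea:

- Run tests with npm test; the suite must pass before opening a PR.
- API handlers live in src/api/ and use the error helper in src/lib/errors.ts.
- Anything user-visible goes behind a feature flag in src/flags.ts.
- Migration files are generated, never hand-edited.

Every convention you capture once is a correction you never have to make again. That’s what compounding means here.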
You know when to throw the AI’s work away. This might be the most underrated skill on the list. An agent can produce something fast, coherent, and completely wrong for the problem. The floor isn’t just knowing how to use AI. It’s knowing when the output doesn’t serve the person on the other end, and having the judgment to kill it and start over, or do the work yourself.
For Product Managers
The floor isn’t “uses AI to write PRDs.”
You prototype before you spec. Cat Wu makes this point well: write the spec, then hand it to an AI tool and see if it can build it. Guillermo Rauch, CEO of Vercel, is even more direct. v0 exists because the distance between “idea” and “working thing” should be measured in minutes, not sprints. The PM who shows up with a 15-page PRD and no prototype is now moving slower than the PM who shows up with a rough working demo and three questions. The floor is: you can get to a working thing, fast, and use it to test whether your idea holds up before you burn engineering cycles.
You plan in shorter cycles. Cat Wu nails this: “The traditional product management playbook is built on the assumption that what’s technologically possible at the start of a project is roughly what’s possible at the end.” That assumption is broken. Model capabilities shift mid-sprint. Features you scoped as “hard” become trivial when the next model drops. The floor-level PM reviews their roadmap against capability changes, not just customer feedback. If you’re not doing this, you’re making planning decisions with outdated information. (Which, to be fair, PMs have always done. But now the information goes stale in weeks, not months.)
You know the tools well enough to smell BS. You don’t need to be an engineer. But you need enough fluency to call BS when someone says “we’ll just use AI for that” with zero plan. And enough to push back when engineering says something will take six weeks that an agent could realistically do in a day. The floor is technical literacy, not expertise. Enough literacy to make good calls.
You’re experimenting. Regularly. Vercel didn’t build v0 for developers alone. They built it for anyone on a product team who has ideas and wants to test them. The practitioners pulling ahead aren’t following a playbook. They’re building one. The floor-level PM has an experimentation habit. They’ve tried multiple AI tools in their actual work, formed actual opinions, and can articulate what works and what’s hype.
You’re still talking to customers. This sounds obvious. It isn’t. When you can prototype in an afternoon and ship by the end of the week, the temptation is to just build and see what happens. But “see what happens” is not a product strategy or a legitimate way to get to product/market fit. The floor-level PM is moving faster and still validating with real people. Not A/B tests. Not analytics dashboards. Actual conversations with the messy, complicated humans who use what you build. Speed without signal is just expensive guessing.
What the floor is really about
Strip all the specifics away and it comes down to three things:
Speed of learning. The landscape is moving fast enough that the half-life of any specific workflow is maybe six months. The floor isn’t knowing the right tools. It’s the ability to pick up new ones quickly and fold them into how you work. The people falling behind aren’t the ones who picked the wrong tool. They’re the ones who stopped picking up tools altogether.
Comfort with imperfection. AI outputs aren’t perfect. Prototypes are rough. Agent-written code needs review. The old floor rewarded polish and certainty. The new floor rewards speed and iteration. If you’re waiting until something is perfect before you share it, you’re optimizing for a world that doesn’t exist anymore.
Taste. This one’s harder to teach, and it might be the most important. When everyone has access to the same AI tools, the differentiator is judgment. Knowing what to build, what to cut, what “good” looks like when you can generate ten options in an hour. Taste is the human skill that gets more valuable as AI gets better, not less.
The So What
If you’re a leader: audit your team against the floor, not the ceiling. How many of your engineers are using AI daily in their actual workflow? How many of your PMs have prototyped something with AI tools in the last month? How many of them talked to a customer this week? If the honest answer is “some” or “not sure,” the floor in your org is lower than the market floor. And that gap compounds fast.
If you’re an IC: be honest with yourself. Not about whether you’ve “tried AI” but about whether it’s actually changed how you work day-to-day. If your workflow looks basically the same as it did 18 months ago, you’re below the floor. Not because you’re bad at your job, but because the floor moved.
The good news: the floor is achievable. We’re not talking about becoming an AI researcher or rebuilding your entire skill set. It’s a handful of habits and a commitment to the experimentation loop. The people who’ve already made this shift will tell you it took weeks, not months.
The ceiling will keep rising. The companies building these tools will keep pushing what’s possible. That’s great. Someone needs to be doing that work.
It’s easier than ever to make stuff. It’s faster. And AI can be supremely confident while building exactly the wrong solution, burning time, talent, and tokens along the way. It doesn’t care if you’re right, just that you use more tokens.
It’s up to us, humans, to make sure we build the right things as well as we can.