From computing power as the bottleneck to human judgment as the constraint: this is how software engineering productivity has evolved over the years.
Thanks for the insightful and lucid analysis.
Glad you liked it! Thanks!
Glad it resonated Emanuele!
Humans are the bottleneck for productivity with AI. We need to adapt and be selective: choose to intervene only when we truly believe it necessary, and that's likely less often than you'd think. Love the thoughts here.
Glad you liked it!
I totally agree with your point. Humans should intervene only at the highest-leverage points. It's like being a manager: good managers don't micromanage; they touch just the right point to change someone's direction.
Right, setting up the proper guardrails and constraints is key imo. The better you have them implemented, the less manual reviewing you need to do.
This is putting into words what probably every senior engineer has been feeling!
Using tools like Claude Code or Cursor feels amazing but terrifying! And there is definitely a different cognitive load I've been feeling when AI generates a 500-line addition plus another 300 lines of changes to an existing file, and I have to review it all before I open the PR!
It's been exciting and exhausting!
With these tools the expectations from management have also changed: now they expect one engineer to work like 10 engineers, and as you said, we are now limited by human brain power!
Yes, I definitely feel that exhaustion too.
I think we're trying to apply the 10x multiplier to a system that just won't scale that much (our brains). It's like asking a horse to go 100+ miles per hour. It just can't.
I think the DevOps movement will help with that. Rather than trying to get humans to review all the code, if we apply AI to develop 10x as much code, we should apply AI to speed up the review and verification process by 10x too.
If we move review from mostly human to mostly testing and pipeline verification, we'll get the 10x multiplier without 100x the mental load :)
That makes for a whole lot of code nobody knows intimately! But even today, almost every company has legacy code written by some brilliant programmer many years ago who didn't bother to write any comments, and now every on-call engineer prays that nothing breaks in that part of the codebase :)
The "taste" point is where I've seen teams struggle most. Senior engineers have it because they've debugged systems at 2am, lived through bad architectural decisions, and felt the consequences.
Junior engineers using AI from day one are skipping exactly the experiences that build that taste.
The code ships faster. The judgment doesn't develop at the same pace. What I'm watching in teams is a widening gap - not in output, but in the ability to reason about what was built when something eventually breaks.
Human judgment has always been the bottleneck; AI has just now exposed it as such.
I don’t really agree with this, at least not completely. AI regularly generates bad code (as you’ve stated) and yet you also make the claim that it won’t make sense for us to review code in the future. These are contradictory statements and it sets a very dangerous precedent.
Further, LLMs have clear limitations. They are incredible productivity tools for lots of things, but handing off all work to them is a mistake. I believe that if organizations follow your advice, they will end up in a horrible position in a very short timeframe.
With AI, context switching is something I too have recognized as a common problem when cooperating with my agents. I'm hearing a lot of people say scaffolding is more important than compute. Also, the agile philosophy of aiming for the MVP and then iterating originated in software development, and it is still proving to be the most effective approach for project management as we pivot to AI-dominated projects. Just subscribed, thanks!