Top AI coding tools for engineering teams in 2025
An opinionated guide on which AI coding tools to use and the risks associated with them!
DevStats (sponsored)
Do you know what’s actually holding you back from shipping faster?
If you don’t have data-informed answers to questions about your delivery pace, you’re never going to improve. But when getting data is a slog and you lack clear metrics, it becomes hard to understand what’s going on.
DevStats highlights bottlenecks, burnout risks, and delivery delays in real-time - so you can make continuous process improvements.
Visualize your sprints and focus on mission-critical work
Deploy faster with a clear view of flow metrics
Spot bottlenecks and risks earlier
Improve developer experience
I’ve reviewed DevStats thoroughly and used it in one of my projects as well, so if you are looking for such a tool, I highly recommend checking them out.
Let’s get back to this week’s thought.
Intro
There are many different options for using AI, both personally and professionally. It feels like every single week, a new tool is available, an LLM update is released, or something new shifts how we use AI.
Choosing which tool to go for can feel daunting. To help with this, I am happy to bring in Jeff Morhous as a guest author for today’s newsletter article.
Jeff is a Senior Software Engineer and a big AI enthusiast. He also writes a newsletter, where he regularly shares his insights on different AI-related topics. Today, he is sharing his overview of the top AI coding tools he recommends for engineering teams, his opinion of them, the associated risks, and how they can potentially be mitigated.
Let’s get straight into it!
Jeff, over to you.
AI tools are no longer (just) toys. They're power tools
Used well, they can help your team ship faster. Used poorly (or not at all), they become a liability.
Rolling out AI at work isn’t as simple as telling your developers to “go use ChatGPT.” There are privacy implications, licensing questions, and a rapidly shifting landscape of tools that do everything from autocomplete to autonomous refactoring. Some tools are great for individuals, while others are built for teams.
This guide walks through the major AI tools your engineers are likely to reach for, shows you some of the enterprise-friendly options you can choose from, and covers the risks associated with them. Let’s get into it!
So what AI tools are worth using?
If you aren’t already using some AI tools in your day-to-day work, you probably have at least heard some names. Of course, everyone knows ChatGPT, and many are familiar with Copilot, but there are plenty more to get into. Let’s take a look at each one.
ChatGPT
ChatGPT is easily the world’s most popular AI product.
I like ChatGPT because of how great a product it is, even if the underlying model isn’t always the best choice. With a Plus subscription, you have access to GPT-4o, GPT-4.5, and more models that are on the bleeding edge.
You also get Deep Research, which in my opinion is the single most useful AI feature on the planet. If you don’t have time to wait for Deep Research, the Search function is still quite useful.
Combine this with near-infinite chat history, context windows, and custom instructions, and you have a recipe for an incredibly useful product. Using “projects” to organize chats, files, and instructions is another great way to get extra use out of the tool.
ChatGPT for work
The free version of ChatGPT gives you access to some basic models. It’s fast, decent for casual use, and works fine for things like writing boilerplate, summarizing documents, or basic coding help. But it’s also missing a lot:
No access to GPT-4 (or GPT-4o)
No file uploads, vision, or advanced tools
No data privacy guarantees
ChatGPT Plus gives you access to GPT-4o and other top-tier models, including the latest reasoning models. It also has much higher tiers for Deep Research. For $20/month per user, you unlock:
Better models
Tools like Deep Research
Custom instructions and longer context windows
Projects to organize chats, files, and settings per use case
It’s a huge step up from the free tier, and it’s what I use for personal stuff. It is still a consumer product. Your data isn’t used for training, but there are not a lot of guarantees or guardrails.
ChatGPT Teams is for small companies or departments who want more control. It includes everything in Plus and then some:
A shared workspace for managing usage and billing
Data privacy
Admin controls and some usage analytics
Higher usage limits
This tier starts at $25/user/month, and it’s a great middle ground if you’re not ready for Enterprise but want something with better data privacy than Plus.
Meanwhile, ChatGPT Enterprise is the full-blown “AI at work” option. You get:
Even longer context windows (128k+ tokens)
Full SOC 2 compliance, SSO, and audit logging
Dedicated support and deployment help
SLA-backed uptime and security guarantees
The big draw for businesses here is data privacy and compliance. Nothing your team sends into ChatGPT Enterprise leaves the walled garden. It’s isolated, encrypted, and never used for training.
If you haven’t been able to get approval for individually expensed subscriptions for your team, this is probably the next stop.
Claude
Claude is the other big player on the AI-chat block. It’s a web-based AI tool that has a familiar chat interface but is backed by some truly bleeding-edge models.
Anthropic has done an incredible job with the Claude models, but I’m a little less impressed with the product. It’s hard to explain exactly why, but I (and plenty of others) have a preference for ChatGPT. Still, Claude (specifically 3.7 Sonnet) has consistently performed better at coding than most of OpenAI’s models.
I can’t say whether ChatGPT or Claude is objectively better - I’d bet you find your team mostly split.
Claude for work
Claude Pro costs $20/month and gives you more usage, more models, and more features (like projects!). There’s even a $100/month tier with even higher caps.
Still, this is a consumer-grade product. Like ChatGPT Plus, it’s powerful, but not likely a great fit for enterprise work unless you're willing to accept some risk.
Claude Team is Anthropic’s business-tier product for smaller companies, and it’s the tier most companies should look at if they’re using Claude seriously. Its high token limit is especially valuable for working with codebases, logs, or long-form technical documents.
Claude Enterprise offers similar data protections and features to ChatGPT Enterprise, and is a better fit for bigger companies or those subject to heavy data regulations.
Copilot
GitHub Copilot pioneered the AI-in-the-editor tool category. When Copilot came out, I was genuinely shocked.
As a simple VS Code extension (or one for your editor of choice), Copilot started as just fancy autocomplete, but has evolved with more features such as Chat, Copilot Edits, and Agent, all of which help you get more work done in less time.
Though Copilot was the first to the space, competitors such as Cursor and Windsurf have mostly eaten its lunch. Most developers have a preference for one of those tools over Copilot.
Copilot for work
Copilot Business is $19/user/month and is aimed at companies that want to roll it out at scale but have control over usage and data. It includes:
Unlimited agent mode and chats with GPT-4o
Unlimited code completions
Access to code review, Claude 3.5/3.7 Sonnet, o1, and more
300 premium requests to use the latest models per user, with the option to buy more
User management and usage metrics
IP indemnity and data privacy
This plan addresses the biggest concern most companies have: sending proprietary code to Microsoft. With Copilot Business, code stays private, and Microsoft guarantees it won't be used to improve the model.
Copilot also has an enterprise tier here that comes with even higher limits.
It’s worth saying that I’ve used Copilot extensively and found it much less useful than Cursor or Windsurf, even with the recent rollout of agent mode.
Cursor
Cursor is my favorite AI tool right now. It’s an IDE rather than a code extension, and it’s everything GitHub Copilot should’ve been.
The Cursor team forked VS Code and baked AI into every piece. Here are some highlights:
AI-autocomplete as you’re typing
Highlight sections of code and ask questions or make alterations
An agent that will autonomously make changes across your entire codebase
AI-generated commit messages
It feels like VS Code, but superpowered. If you haven’t tried it, I recommend checking it out for at least an hour. Do yourself a favor and go straight to agent mode while trying to write as little actual code as possible.
Cursor for work
While Copilot tries to layer AI into your editor, Cursor is the editor. And for many engineers (myself included), it's the first tool that feels like a true pair programmer rather than just a code autocomplete engine.
Cursor also offers a Business product, which includes:
Org-wide enforcement of privacy mode
Centralized team billing
An admin dashboard with usage stats
SAML/OIDC SSO
This tier makes it easy to standardize Cursor across your engineering org while keeping control of how it’s used and where your data goes.
Windsurf
Windsurf is probably Cursor’s closest competition. It’s another standalone IDE, and the developers who love it say it’s because it’s easier to stay in a focused state of work, almost like pair programming with the editor.
Windsurf for work
Windsurf also has an Enterprise tier that offers features and privacy guarantees that make it easier to integrate.
There’s a huge emphasis on legal compliance and privacy that makes Windsurf stand out from the other IDE-based AI tools.
What are the risks associated with AI coding tools?
The tools mentioned above give every engineer superpowers, but those superpowers come with risks.
While tools like ChatGPT, Claude, Cursor, and Copilot can help you move faster and write better code, they also open the door to new kinds of mistakes, liabilities, and misunderstandings.
Not really a problem if you’re vibe coding at home, but businesses care a lot about these risks.
If you're giving your team access to these tools, it’s worth understanding the major risks so you can make smart, informed decisions.
Intellectual property and code ownership
One of the most common concerns with these tools is who owns the code AI helps you write.
If you prompt a model with “write me a Stripe integration,” and it spits out something that looks very similar to Stripe’s open-source SDK, is that your code now? What if it lifted that snippet from a repository with a non-permissive license?
There’s no real conclusion here yet.
AI models are trained on massive amounts of data, some open-source, some not. While the major players have legal teams arguing that the output is transformative and doesn’t constitute copying, it still (rightfully) makes many legal teams a little nervous.
Windsurf, for example, mitigates this with guarantees about where it gets its training data.
Intellectual property leakage
The flip side of ownership is leakage. If your engineers paste sensitive code into ChatGPT or Claude, you’ve just handed it to another company.
Personally, I’m not concerned about this sort of risk, because the same risk already applies to traditional search tools. Knowing not to leak proprietary data to Google carries over to LLM web tools.
It’s a harder problem to solve with editor-based tools like Copilot, Cursor, and Windsurf, which is why many companies are banning those tools entirely.
Sure, this works, but each tool has an enterprise tier that attempts to address some of these concerns, and you may decide the tradeoffs are worth it.
Engineers taking output at face value
The last big risk is the easiest to overlook and the hardest to fix: overconfidence.
AI-generated code looks confident, but it can still be wrong.
Less observant engineers may be prone to assume the output is “probably correct” just because the model sounds confident.
But these tools don’t know things the way humans do, and I’m still not convinced they can actually reason. Sometimes they guess right, sometimes not. You still need engineers who can read, understand, and take responsibility for the code.
The fix here is cultural, not technical. Make it clear that AI is a tool, not a teammate. Every line of code it writes is your code now.
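One practical way to back up that cultural rule is to insist on edge-case tests before any AI-suggested code is merged. Here is a hypothetical sketch (the `average` helper and the scenario are my own invention, not output from any specific tool) of the kind of subtle bug assistants often produce, and the reviewed version with the checks that catch it:

```python
def average(values):
    """Return the arithmetic mean of a list of numbers.

    A first AI draft might plausibly be just `sum(values) / len(values)`,
    which raises ZeroDivisionError on an empty list. That's exactly the
    kind of edge case that looks fine until a human actually reviews it.
    """
    if not values:
        # Fail loudly with a clear message instead of crashing obscurely.
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

# Edge-case checks a reviewer should insist on before accepting AI output:
assert average([1, 2, 3]) == 2.0   # typical case
assert average([2]) == 2.0         # single element
try:
    average([])                    # empty input: must raise, not divide by zero
except ValueError:
    pass
```

The point isn’t this particular function; it’s that a minute of deliberate edge-case testing turns “the model sounded confident” into “we verified it.”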
Taking responsibility for AI-augmented work
Computers can’t be held accountable. These tools are powerful, but as of now, they just predict the next best token. The responsibility for every line of code still falls squarely on human engineers, reviewers, and decision-makers.
There’s no way to fully eliminate the risks of using AI tools, but avoiding them altogether comes with its own risk: falling behind.
Teams that augment themselves with AI are moving faster, exploring more ideas, and automating the boring stuff.
Every business will have to decide where it draws the line. But if you're going to encourage your engineers to use AI at work, make it clear that the outputs are suggestions, not decisions.
They still need human eyes, human judgment, and human accountability.
Related articles for further reading
Here are some more AI-related articles that might be interesting for you:
Last words
Special thanks to Jeff for sharing his insights and opinion on this important topic with us!
Make sure to follow him on LinkedIn and X, and also check out his newsletter; you’ll find a lot more AI-related articles there. We’re not done yet!
Simplicity vs Complexity in Software Engineering: Which is Better?
Check out my latest video. I am sharing how costly complex solutions can be and why they shouldn’t be praised. As engineers, our purpose is to provide as much business value as possible. An amazing technical solution may not contribute to that.
New video every Sunday. Subscribe to not miss it here:
Senior Engineer to Lead: Grow and thrive in the role
The June cohort of the course Senior Engineer to Lead: Grow and thrive in the role is getting closer. We start on June 10!
In the course, we particularly focus on developing the much-needed people, communication, and leadership skills required to grow from engineer to leader!
Use code EARLYBIRD for a limited-time offer of 25% off, or use this link: Senior Engineer to Lead, where the code is already applied.
Looking forward to seeing some of you there.
Liked this article? Make sure to 💙 click the like button.
Feedback or addition? Make sure to 💬 comment.
Know someone who would find this helpful? Make sure to 🔁 share this post.
Whenever you are ready, here is how I can help you further
Join the Cohort course Senior Engineer to Lead: Grow and thrive in the role here.
Interested in sponsoring this newsletter? Check the sponsorship options here.
Take a look at the cool swag in the Engineering Leadership Store here.
Want to work with me? You can see all the options here.
Get in touch
You can find me on LinkedIn, X, YouTube, Bluesky, Instagram or Threads.
If you wish to request a particular topic you would like to read about, you can send me an email to info@gregorojstersek.com.
This newsletter is funded by paid subscriptions from readers like yourself.
If you aren’t already, consider becoming a paid subscriber to receive the full experience!
You are more than welcome to find whatever interests you here and try it out in your particular case. Let me know how it went! Topics are normally about all things engineering related, leadership, management, developing scalable products, building teams etc.