How to Evaluate AI Fluency in Technical Interviews
Why banning AI is outdated and how to redesign your process without lowering the bar.
This week’s newsletter is sponsored by Stigg.
Most engineering teams discover the gap between billing and usage enforcement the hard way. AI features ship. Adoption grows. Then one day an enterprise customer asks why a request got blocked while an identical one went through, and nobody has a clean answer.
The problem isn’t billing. Billing is working exactly as designed. The problem is that billing records what happened after execution. It was never built to decide what’s allowed while the system is running. And it’s just one layer in a monetization stack that most teams only realize is incomplete once AI hits production.
Shai Betito, VP of Engineering at Stigg, breaks down why this distinction matters more than most teams realize until it’s already painful:
Why limits stop being simple counters under concurrency and become a consistency problem at scale.
How a single request can touch multiple organizational dimensions simultaneously, and why resolving that correctly in milliseconds is a fundamentally different challenge than monthly reconciliation.
Why identity-based access control breaks under AI workloads. Automated pipelines don’t have seats, and the control model has to change accordingly.
A technical breakdown of a class of infrastructure problems that appears after AI reaches production, written by engineers who’ve had to solve it.
Find out what's missing from your AI product's infrastructure:
Thanks to Stigg for sponsoring this newsletter. Let’s get back to this week’s thought!
Intro
Recent reports show that AI adoption in companies keeps increasing. As AI becomes more common in the workplace, the hiring process must reflect its presence.
From my conversations with engineering leaders at companies like OpenAI, Meta, and others, I’ve learned that there is an increased expectation of AI fluency, especially for engineering and product roles.
Companies are starting to understand that AI is becoming an important part of tech professionals’ day-to-day work.
The question we want to answer today is: How can we actually check for AI fluency in technical interviews? To help us answer it, Hamid Moosavian, director of engineering at Xe and our guest author, will share practical insights on how to evaluate candidates.
Let’s introduce our guest author and get started.
Introducing Hamid Moosavian
Hamid Moosavian writes First 90 Days for Engineering Managers, a biweekly newsletter of practical management tips, and is the creator of the free First 90 Days Kit, a collection of copy-ready templates for new engineering managers. Hamid is also director of software engineering, Americas, for Xe.
Your engineers are already using AI in their work. Your engineering candidates probably are, too. But how do you know whether candidates are using AI well if you’re not interviewing for it? Hamid walks us through the importance of including AI in your interviews and how to do it.
Over to you, Hamid!
Your Tech Interview Process Shouldn’t Be AI-Resistant
When Copilot and similar AI pair programmers emerged, most teams made their hiring processes AI-resistant. The logic was simple: If candidates used AI during assessments, their real skills couldn’t be evaluated.
That approach is outdated.
By the end of 2025, over half of professional developers were using AI daily. Some surveys put the number at 97% when including those planning to adopt it. AI assistants have become standard tools for productivity.
At the same time, senior engineers and tech leads started sharing examples of poorly reviewed AI code. The pattern is clear: Some developers use AI as a thinking partner and ship better code, while others blindly accept suggestions and generate unmaintainable systems.
The tool isn’t the problem. How people use it is.
The comparison to Google and Stack Overflow is obvious. No one expects developers to avoid documentation or search engines. But we do expect them to use these resources well.
This situation changes what good hiring looks like. The goal is no longer to prevent AI use. It is to evaluate how candidates use AI and whether they produce clear, maintainable code even when the tool misleads them.
Companies like Meta and Canva have already made this shift. More are following.
Here’s what to understand about AI fluency and how to update your process.
What AI Fluency Actually Means
Let’s first be more specific about AI fluency. I suggest the following framework, though you can adjust categories to fit your context.
Level 0 AI Avoidant: Is skeptical of AI tools and refuses to use them. Relies on memorization or traditional resources. Sees AI as cheating and won’t trust AI-generated code. Struggles with productivity when peers use AI assistance.
Level 1 AI Dependent: Uses AI-generated code without understanding or verification. Accepts the first suggestion, can’t explain what the code does, and gets stuck when AI gives wrong answers. Can’t debug AI output.
Level 2 AI Competent: Uses AI for boilerplate, refactoring, unit tests, and repetitive tasks but reviews output critically. Catches obvious errors, tests generated code, and explains what AI suggested and why they accepted or rejected it.
Level 3 AI Advanced: Knows when not to use AI. Combines AI with deep technical knowledge to explore edge cases and verify assumptions. Treats AI as a thought partner, not a decision-maker. Works effectively with or without AI.
What do these levels look like in practice? Let’s break down the patterns that separate competent and advanced users from the rest.
Level 2 and 3 engineers share these habits:
Context-rich prompting: Provide relevant code snippets, file paths, and project constraints rather than vague or broad requests.
Incremental integration: Generate code in small chunks, test after each step, and commit frequently.
Critical review: Read all generated code line by line, check for edge cases and architectural fit, and validate against project standards.
Security awareness: Verify package names (up to 20% of AI suggestions reference nonexistent packages), check for hardcoded secrets, and validate input sanitization.
Appropriate skepticism: Treat AI as a pair programmer, not autopilot. Can proceed when AI is wrong or unavailable.
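To make the critical-review and security-awareness habits above concrete, here is a small illustrative sketch (all names and the "AI draft" are hypothetical) of the kind of fix a Level 2 or 3 engineer makes after reviewing an AI suggestion:

```typescript
// The AI's hypothetical first draft had two problems a careful reviewer catches:
//   1. const API_KEY = "sk-live-abc123"            // hardcoded secret
//   2. import { fetchJson } from "quick-http-utils" // package does not exist
// The corrected helper below reads the secret from the environment instead
// and fails loudly when it is missing.

function buildAuthHeader(apiKey: string | undefined): Record<string, string> {
  // Validate the input rather than trusting it blindly.
  if (!apiKey || apiKey.trim() === "") {
    throw new Error("API key missing: set MY_SERVICE_API_KEY in the environment");
  }
  return { Authorization: `Bearer ${apiKey}` };
}

// Usage: buildAuthHeader(process.env.MY_SERVICE_API_KEY)
```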
AI fluency shows up differently across engineering tasks. In debugging, it means using AI to generate test cases or explain error messages. In code review, it means catching issues you might miss manually. In architecture discussions, it means exploring design patterns without letting AI make final decisions.
Here’s what this looks like in practice:
A Level 1 engineer might prompt like this:
Create a POST endpoint for user registration.

A Level 2 or 3 engineer provides context and constraints:
Using Express.js with TypeScript, generate the boilerplate for a POST endpoint at /api/users that:
- Accepts JSON payload with email and name
- Includes Joi validation
- Uses async/await error handling
- Returns 201 on success, 400 on validation errors
I’ll add the business logic afterward.

You want engineers who can code without AI but also use AI fluently to improve their productivity.
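For illustration, here is a dependency-free sketch of the logic that prompt might produce. A real implementation would use Express and Joi as the prompt specifies; the handler and validation are written as plain functions here so the snippet stays self-contained, and all names are illustrative:

```typescript
// Sketch of the registration endpoint's core logic (no Express/Joi, so it
// can be read and tested without a running server).

interface RegisterPayload { email?: unknown; name?: unknown }
interface HttpResult { status: number; body: object }

// Returns a list of validation errors; empty means the payload is valid.
function validateRegistration(payload: RegisterPayload): string[] {
  const errors: string[] = [];
  if (typeof payload.email !== "string" ||
      !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(payload.email)) {
    errors.push("email must be a valid address");
  }
  if (typeof payload.name !== "string" || payload.name.trim() === "") {
    errors.push("name is required");
  }
  return errors;
}

async function handleRegister(payload: RegisterPayload): Promise<HttpResult> {
  const errors = validateRegistration(payload);
  if (errors.length > 0) {
    return { status: 400, body: { errors } }; // 400 on validation errors
  }
  // Business logic (persisting the user) would go here, as the prompt notes.
  return { status: 201, body: { email: payload.email, name: payload.name } };
}
```

Note how the prompt’s constraints (201 on success, 400 on validation errors) map directly onto the handler’s return values; that traceability is what a Level 2/3 reviewer checks for.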
Remember, though, AI is not a magic wand. It’s just a very powerful tool. Think of it like a power saw: It can cut through work 10 times faster than a hand saw, but without proper technique and safety measures, you can lose a finger. You still need to know carpentry; the tool just amplifies the skill.
Why Companies Are Making This Shift (and Why You Should Consider It Too)
The traditional approach to technical interviews is changing. Companies are realizing that if AI tools are standard in daily work, they should be part of the hiring process too.
Why? Because you want to see how candidates actually work, not how they perform under artificial constraints. The closer your test mirrors reality, the more reliable the results. Interviews should predict job performance, and job performance now includes AI.
Who’s already making the shift
Companies like Meta, Canva, and Intuit are testing a new approach. They’re not just allowing candidates to use AI tools like Copilot but also observing how they use them.
Goldman Sachs has given its engineers access to GitHub Copilot and Gemini Code Assist and is even holding competitions to promote creative AI use among developers.
At Meta, candidates work with an IDE and AI chat window for writing code, debugging, and creating unit tests. Evaluation focuses on how effectively candidates use AI as a tool, integrating it into their process for daily engineering work.
The evaluation format is moving away from pure data structures and algorithms (DSAs) toward project-based tasks that mimic real engineering challenges. Interviewers assess problem-solving, code quality, verification, and communication, specifically how candidates prompt, debug, and justify AI suggestions.
Candidates must demonstrate control. Doing so shows that they’re the primary engineer, not a passive AI user. They should be catching subtle errors in AI-generated code and iterating on solutions. They should be articulating their prompts, reasoning, and trade-offs. They should be asking targeted questions to get efficient, accurate results.
Many Fortune 500 companies still share Amazon’s stance against allowing AI in interviews. But a growing number are embracing this shift, wanting to hire engineers who leverage new tools to enhance their skills.
Addressing the skeptics
Companies that don’t allow AI in technical assessments have a valid concern. They argue that allowing AI means everyone gets the same AI-generated solution, candidates don’t show their true skills, and AI covers for their weaknesses. Allowing AI does require redesigning the technical evaluation. Without it, you won’t collect meaningful signals.
However, that’s exactly what you want to test: how candidates differentiate when everyone has access to the same tools. Judgment becomes more critical than ever.
The evaluation shifts from “Does it work?” to “How was it built?” Discussions focus on key decisions, major choices, and code structure. Qualities like thoroughness become crucial: writing detailed prompts, refining them when needed, reading responses attentively, and reviewing proposed solutions carefully.
These habits separate engineers who think about tech debt and maintainability from those who just want working code.
Why This Matters for Your Company
Properly evaluating AI fluency leads to better hires in several ways.
Cost implications: When you evaluate candidates using the tools they’ll use on the job, you get a clearer signal faster. Instead of watching them struggle with boilerplate syntax, you see how they think about architecture, edge cases, and trade-offs, things that actually matter in the role.
Quality of hire: Here’s an example drawn from real life. Let’s say you hire an intermediate front-end developer who writes good CSS and understands the front end well enough for the level. You give them Copilot access. Suddenly they move fast but create unmaintainable code, because they’re using Copilot for the first time or using it irresponsibly. The hire failed because you didn’t test what they’d actually do on the job.
Karat recently argued that interviews should mirror the job. Its blog states that preventing AI use hampers hiring engineers who can adapt to new tech. Its data suggests that watching how candidates interact with a large language model provides a stronger signal of seniority than traditional whiteboarding. If you’re not collecting this signal, you’re missing valuable input.
Competitive risk: Your competitors are already evaluating candidates’ AI fluency, and you’re losing AI-fluent candidates to them. You don’t need to pivot immediately, but start making your assessment more comprehensive.
Future-proofing: The skill gap is widening. For most product engineering roles, candidates who aren’t AI fluent today will struggle within six months. AI fluency signals that candidates are good learners who stay current and adapt to new tools. This is a strong indicator of future development potential.
These benefits stem from one principle: When interviews mirror how engineers actually work, you get a clearer signal and fewer post-hire surprises. Test the real workflow, and you’ll hire for the real job.
We’re at the beginning of this shift. My projection is that any company hiring product engineers at scale will eventually include an AI-fluency component, whether hands-on testing or interview questions. Since AI is becoming integral to developers’ work, it must be addressed during hiring. Otherwise, you risk unpleasant surprises.
How to Evaluate AI Fluency in Your Interviews
Understanding AI fluency is one thing. Evaluating it in a 60-minute interview is another. You don’t need to redesign everything, though. Just adjust what you look for and how you interpret what you see.
What changes in your interview setup
The mechanics are simple. Explicitly allow candidates to use AI tools like Copilot, Cursor, Claude, whatever they prefer. Provide access within the interview environment if they don’t have it. This levels the playing field.
Not all candidates have access to premium AI tools, so providing them ensures you’re testing fluency, not financial access. Make it clear that using AI is encouraged, not forbidden, and that you care about how they work with it.
If a candidate hasn’t used AI tools before, that’s a valuable signal. For senior roles where AI fluency is critical, inability or unwillingness to engage with AI is a red flag. For junior roles, you’re evaluating potential: Can they learn it quickly when given the chance? Watch how they respond when you offer to show them how the tools work.
The bigger shift is in what you ask. Move away from isolated algorithm problems to realistic scenarios. Instead of “reverse a binary tree,” try “build an authentication endpoint” or “debug this performance issue.” These problems work better with AI because they mirror real work, and they reveal how candidates integrate tools into their thinking.
The signal framework
Focus on process, not just output. Strong AI fluency looks like this:
Provides context before prompting AI (“I’m building a REST API for user authentication...”)
Iterates on AI suggestions (“That’s close, but let me modify the error handling.”)
Catches AI mistakes in real time (“Wait, this function signature is wrong.”)
Explains reasoning independent of AI (“I chose this approach because …; let me verify with AI.”)
While weak AI fluency looks like this:
Accepts first suggestion without reading it
Can’t proceed when AI gives a wrong answer
Doesn’t verify AI output against requirements
Treats AI as a source of truth that can’t be wrong
The difference between Level 1 and Level 3 shows up in these moments. A Level 1 engineer copies and hopes it works. A Level 3 engineer uses AI to accelerate their own thinking.
How AI fluency shows up across interview types
Evaluation looks different by format. Let’s look at some examples.
Live coding: Watch how they prompt, review output, and catch bugs. Do they read what AI generated or blindly accept it? When AI makes a mistake, can they debug it or do they get stuck?
Code review: Give them buggy code and let them use AI to find issues. Strong candidates validate AI’s findings against their own understanding. Weak candidates trust whatever AI suggests.
Take-home assignments: Assume AI was used extensively, which is fine since it mirrors real work. During the debrief, ask candidates to walk through their approach: “How did you use AI? Where did it help? Where did you override it?” You’re evaluating their process and judgment, not whether they wrote every line by hand.
Technical assessment platforms: Most screening tools, such as Codility, HackerRank, and LeetCode, now allow you to enable an AI assist feature for candidates. You can review their prompts along with their solution and assess how they used AI.
Architecture discussion: Ask candidates to use AI to explore design patterns or validate assumptions. Strong candidates treat AI as a thought partner, asking questions, challenging suggestions, and synthesizing insights. Weak candidates let AI make decisions for them.
System design: AI fluency shows in how candidates research patterns and verify ideas, not in how they memorize solutions. Can they use AI to stress-test their thinking? Do they know when AI’s generic suggestions don’t apply to their specific constraints?
Integrating AI Fluency into Your Rubric
AI fluency matters, but it’s not the only thing that matters. Weight it 10–40% based on seniority and how critical AI fluency is to the role. The more senior the role, the higher the weight: senior hires will be role models for junior developers’ AI usage.
In practice, add an AI fluency row to your scorecard:
1 = Avoids or misuses AI
2 = Writes basic prompts, accepts suggestions uncritically
3 = Uses AI effectively, with verification
4 = Uses AI strategically, knows when not to use AI
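If you want to keep scorecard math consistent across interviewers, the weighting can be sketched as a small helper. The competencies and weights below are illustrative, not a recommended split:

```typescript
// Combines per-competency scores (1-4) with role-specific weights,
// including the AI-fluency row described above.

interface ScoreRow { competency: string; score: number; weight: number }

function weightedScore(rows: ScoreRow[]): number {
  // Guard against misconfigured scorecards: weights must sum to 1.
  const totalWeight = rows.reduce((sum, r) => sum + r.weight, 0);
  if (Math.abs(totalWeight - 1) > 1e-9) {
    throw new Error("scorecard weights must sum to 1");
  }
  return rows.reduce((sum, r) => sum + r.score * r.weight, 0);
}

// Example: a senior role where AI fluency carries 20% of the total.
const senior = weightedScore([
  { competency: "system design", score: 3, weight: 0.3 },
  { competency: "communication", score: 4, weight: 0.2 },
  { competency: "debugging", score: 3, weight: 0.3 },
  { competency: "AI fluency", score: 2, weight: 0.2 },
]);
```

A 10–40% AI-fluency weight simply becomes a `weight` of 0.1–0.4 in the rows, with the remaining weight spread across the core competencies.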
Core competencies such as system design, communication, debugging, and architectural thinking still dominate. AI doesn’t replace these skills. Instead, it amplifies them. You’re looking for engineers who are strong without AI but even stronger when they use it well.
If you allow AI in interviews, adjust problem difficulty accordingly. With AI available, candidates should tackle more complex problems, handle more edge cases, or work under tighter time constraints. Don’t compare AI-enabled interviews directly to traditional ones; you’re testing different skills.
The key is treating AI fluency as one signal among many. You’re not replacing technical evaluation; you’re just making it more realistic.
Conclusion
The shift is already happening, and companies allowing AI in interviews are getting better signals about real job performance.
Start simple: Allow AI in one interview format, watch for the patterns outlined in “How to Evaluate AI Fluency in Your Interviews,” and learn and adjust quickly.
The goal is making technical evaluation mirror actual work. When you do that, you’ll hire engineers who are productive from day one.
AI fluency is here to stay. The question isn’t whether to evaluate it. It’s when you’ll start and how you’ll do it.
Last Words
Thanks very much to Hamid for helping us learn to interview better for AI fluency! Learn more about Hamid on LinkedIn, and be sure to grab your free copy of the First 90 Days Kit.
Liked this article? Make sure to 💙 click the like button.
Feedback or addition? Make sure to 💬 comment.
Know someone who would find this helpful? Make sure to 🔁 share this post.
Whenever you are ready, here is how I can help you further
Join the Cohort course Senior Engineer to Lead: Grow and thrive in the role here.
Interested in sponsoring this newsletter? Check the sponsorship options here.
Take a look at the cool swag in the Engineering Leadership Store here.
Want to work with me? You can see all the options here.
Get in touch
You can find me on LinkedIn, X, YouTube, Bluesky, Instagram or Threads.
If you wish to request a particular topic you would like to read about, you can send me an email at info@gregorojstersek.com.
This newsletter is funded by paid subscriptions from readers like yourself.
If you aren’t already, consider becoming a paid subscriber to receive the full experience!
You are more than welcome to find whatever interests you here and try it out in your particular case. Let me know how it went! Topics normally cover all things engineering: leadership, management, developing scalable products, building teams, and more.