Banning AI in interviews is already a losing move and it's getting worse fast. But the flip side is that 'uses AI well' is genuinely hard to evaluate in an interview format. What I've found separates fluent users from heavy users: fluent people can describe where AI breaks down, not just where it helps.
Someone who only knows the wins doesn't have real fluency yet. The best signal I've seen: ask what they've had to take back from the AI and redo themselves. That answer tells you more than any task demo.
Those are great insights, and yes, I agree. I like to connect "checking AI fluency" to "good judgment". Do they know whether the output is good for this case, what alternatives there are, and how it could be done better?
That's what we should really be looking for, not just the pure end result.