Gregor, thanks for this post and for recognizing that fear, FOMO, and vague hype create real damage. I would add that psychological safety isn’t just a cultural “plus” — it’s the precondition for any real AI adoption. If people don’t feel safe, they won’t explore, and no amount of productivity metrics will fix that.
I’ve written about this shift as moving from mandates to invitations, trusting people to return to what’s real, and letting AI meet them there. Thanks for naming this pattern clearly.
Right, well said Roi. When people don't feel safe to make mistakes, there is hardly any innovation happening. And if you enforce usage, people will either game the system or do only what is strictly necessary, and nothing more.
Indeed, but I think this fear now goes far beyond making mistakes.
Companies are asking engineers to use AI to accelerate their work while still measuring them by speed. At the same time, those engineers read that AI will replace them within two years. This is something deeper; for me, it is almost an identity issue.
I define myself as a father and a software engineer. I spent 30 years of my life sharpening my edge, my software craftsmanship, and now that might no longer be relevant.
Now, I am not saying we shouldn't adopt AI. I am saying we need to acknowledge this complexity and talk about it, especially leaders, who are expected to help their people carry that fear while going through the same storm themselves. This is the first step toward unlocking AI, IMO.
I wrote about this on my blog in "Leading Through What You’re Afraid Of".
Right, it's important to have conversations around this to ease the burden a bit. Psychological safety is a big deal and can't be neglected. Without it, it's really hard to do good work long term.
This is such a grounded and necessary take, Gregor. FOMO-fueled AI pressure is quietly eroding psychological safety, which is essential for high-performing teams. Your call for thoughtful, measured experimentation over hype-driven mandates is the kind of leadership the industry needs more of right now.
Thanks Colette, glad the article resonated! It's a topic that isn't spoken about very loudly, but a lot of people feel it. So it's important that we talk about it openly.
Really appreciate this article, Gregor, especially the clarity around how hype creates pressure on engineering leaders. One thing I’d add is that a lot of frustration comes from mismatched timelines. Leaders expect instant results from AI, but meaningful gains take time. We need smaller, focused rollouts with clear goals to bridge that gap.
I think that this topic doesn't get enough attention.