Gregor, thanks for this post and for recognizing that fear, FOMO, and vague hype create real damage. I would add that psychological safety isn’t just a cultural “plus.” It’s the precondition for any real AI adoption. If people don’t feel safe, they won’t explore, and no amount of productivity metrics will fix that.
I’ve written about this shift as moving from mandates to invitations, trusting people to return to what’s real, and letting AI meet them there. Thanks for naming this pattern clearly.
Right, well said, Roi. When people don't feel safe to "make mistakes," there is hardly any innovation happening. And if you enforce usage, people will either game the system or do only what's strictly necessary and nothing more.