Discussion about this post

Pawel Brodzinski:

Since Google's Project Aristotle was brought up as the context, it's worth noting that its most important finding was that the key aspect of high-performing teams is psychological safety.

There's some good advice here:

* Making communication transparent, especially everyone's intent.

* Addressing blockers.

* Reviewing context to seek alignment.

* Having a "context broker" (I really like the framing of this one).

Yet, there's nothing about how to make people feel psychologically safe. There's scarcely anything about improving alignment where there isn't enough of it.

Also, as a side note: having ChatGPT find patterns across the input just to make the practice "AI-enabled" looks like forcing AI in for the sake of AI.

Human beings are genetically programmed pattern-recognition machines. If you're doing the stuff mentioned in the post transparently (and people care), it will be clear when patterns emerge. But I get it. These days, everything has to be AI...

Coming back to the point, something that fits neatly with Project Aristotle is the research on Collective Intelligence (youtube.com/watch?v=Bz1dDiW2mvM). While the researchers used different language, the findings were similar.

Teams that perform better at solving complex problems are those that are socially perceptive (a different angle on the same root causes that enable psychological safety) and that participate evenly in conversations (which helps with the clarity and alignment cited in Google's research).

Also, incidentally, both Collective Intelligence research and Project Aristotle found no correlation between the level of tech skills and team performance.

So, I would say: let's have less AI and more genuine conversations where we actively listen to *everyone* on a team, so that everyone feels heard.

