The Future of HR — Finding a Third Way With AI Through the Noise, Part 2

!tags:: #lit✍/🎧podcast/highlights
!links::
!ref:: The Future of HR — Finding a Third Way With AI Through the Noise, Part 2
!author:: At Work with The Ready

=this.file.name


Reference

Notes

Quote

(highlight:: Capitalism Disincentivizes Candor In Orgs + Using AI To Replace Performance Management
Summary:
The current structure of organizations, where job security and benefits are tied to employment, creates a strong incentive for employees and managers to avoid candor.
This results in emotionally avoidant systems where genuine feedback is rare. The speaker believes that interacting with an unbiased, objective technology such as AI can help individuals process feedback without feeling threatened.
Utilizing AI for performance management can eliminate emotional biases, provide personalized feedback, enhance self-awareness, and improve psychological safety within organizations.
Transcript:
Speaker 1
I do just fundamentally think that the way that organizations work right now, the fact that our livelihood, our safety, our medical benefits, etc., are all tied up in our employment, at least in America, means that the incentive to belong and to not get kicked out is incredibly high, which means that the incentive to know for real what's going on and how you're doing, and if you're failing and what needs to change, is low. And it's not just low for the employees, it's low for the managers too. There is not a lot of upside to real candor in most places, and that is just the truth. As a result, we have complicated, emotionally avoidant systems. And I don't think that there is a human solution to that. I think the only solution that really starts to move the needle is me interacting with technology that does not have any power over my circumstances to tell me the truth, because I think that is the only place where I could theoretically process that without immediately being in existential threat.
Speaker 2
Yeah, I like that, the privacy of making sense of the feedback with a technological tool. That's really interesting, because even in the best of circumstances, with another human being that you potentially have a long history with, the first move is always the checking of the emotional valence, like, how do I feel about this? And that's immediately a distraction from the objective feedback that you're trying to get. So yeah, I wonder if talking to an AI and receiving that feedback will remove that emotional valence. Maybe that's the answer to that, because people seem to be having some pretty emotional reactions and conversations with AIs, which is very interesting as well.
Speaker 1
It's very interesting. And you know, my hope is that through exposure and practice and over time, we get there. But part of it is also that I've interacted with a couple of AI tools that help with sentiment analysis on communication that you're creating. Where that information and that analysis comes from, I don't know, and it's not germane to this conversation. But Sam's reading of my tone through Sam's lens and trauma, saying that I'm being too assertive or too aggressive or too direct or whatever, versus the AI's reading of my sentiment, is very different. It's a very different experience. But I think part of why feedback has been so confounding is that we will never get rater bias out of it, and part of it is that we are constantly trying to make sense of opposing data points. And I have found this to be true in some of this tooling that I use, where I think I'm just being polite or conversational, and the AI is like, this language is hedging, basically, say what you want to say, you know what I mean? And so I think in the feedback realm, there is just a quantity of data and a removal of a lot of our most human tendencies that might allow us to really increase our self-awareness in a really wonderful way that doesn't completely fuck up our psychological safety.
Speaker 2
Well, and not even just the removal of stuff, but I'm thinking about this massive potential around personalization, like the AI knows how I best respond to feedback and is able to give it to me in a way that I get maximum benefit from, so that I can internalize it and be a better human.)
- Time 0:02:57