When AI evaluates humans: between performance and social justice

A new era of measurement

Artificial intelligence has arrived where we least expect it: in the evaluation of people.
Video interviews scored by algorithms, automated behavioral analyses, performance scoring... These tools are multiplying in the name of efficiency. But one question remains troubling: can we really quantify the human without reducing it?

Philosopher Byung-Chul Han would say that we have entered the "transparency society" - an era where everything must be measured, visible, calculated. And AI, in its cold rationality, loves to measure.

When the promise of objectivity becomes a mirage

Proponents of HR AI defend a simple idea: an algorithm doesn't judge according to its emotions, so it should be fairer.
In reality, this promise of objectivity is often a statistical mirage.
AI reproduces, and often amplifies, the biases already present in human data.

The example of Amazon, forced in 2018 to abandon its automated recruitment system because it discriminated against women, remains emblematic. The model had learned... to mimic the historical preferences of male recruiters.
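
To make the mechanism concrete, here is a minimal sketch, assuming synthetic data and hypothetical variable names: a model trained on historically biased hiring decisions reproduces the recruiters' preference even when the protected attribute is only visible through a correlated proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Candidates: one genuine skill signal, one protected attribute (0 = group A, 1 = group B).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: recruiters hired on skill, but also penalized group B.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# The protected attribute is NOT a feature, but a correlated proxy
# (a hobby, a school, a gendered word on a CV) leaks it to the model.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different proxy values:
same_skill = np.array([[1.0, 0.0],   # reads as group A
                       [1.0, 1.0]])  # reads as group B
print(model.predict_proba(same_skill)[:, 1])
# The second probability is markedly lower: the "objective" model has
# learned the recruiters' historical preference, not merit.
```

The point is not the specific numbers: any feature correlated with the protected attribute lets the historical preference leak back into the scores.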

As Binns (2018) reminds us, algorithmic justice lies not in calculation, but in collective deliberation about what is deemed fair.
This is where CSR finds its full relevance: it reintroduces moral debate into the machine.

From HR to CSR: a systemic challenge

The link between AI and CSR is deeper than it seems. Unfair, opaque or discriminatory AI destroys internal trust, damages the employer brand and, ultimately, undermines overall performance.

A study published in Harvard Business Review (2024) shows that companies that set up an "AI ethics board" composed of diverse profiles (lawyers, data scientists, sociologists, employee representatives) reduce the risk of bias in their HR scoring models by 40%.

This is not just a matter of compliance. It's a governance lever. Responsible AI in HR means AI that upholds the dignity of employees rather than reducing them to data points.

Rethinking performance: the courage of uncertainty

Algorithmic evaluation rests on the illusion of a predictable world.
Human performance is anything but linear. It depends on context, trust and recognition: variables that no model can fully capture.

So, should we ban AI from recruitment or management? Not necessarily.
But we do need to change the starting point: ask the machine not to decide, but to support the human decision.
"Human in the loop" means nothing if the loop does not include doubt, criticism and listening.

As Kate Crawford (2021) points out in Atlas of AI:

"Every dataset is a frozen history of power."

In other words, every model carries the memory of the inequalities it claims to correct.

What if truly responsible AI were the kind that could say "I don't know"?

The day our evaluation systems recognize their own limitations - that they don't always have the right data, or the right criteria - we'll be one step closer to truly ethical AI.
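
A hedged sketch of that idea, assuming a simple confidence threshold (the value 0.75 and the function name are illustrative, not recommendations): below the threshold, the system abstains and routes the case to human review instead of forcing an answer.

```python
import numpy as np

def evaluate_with_abstention(probas: np.ndarray, threshold: float = 0.75):
    """probas: per-candidate class probabilities, shape (n_candidates, n_classes).
    Returns a verdict per candidate; abstains when the model is not confident."""
    results = []
    for p in probas:
        confidence = float(p.max())
        if confidence < threshold:  # the model's honest answer: "I don't know"
            results.append(("refer to human review", confidence))
        else:
            results.append((f"class {int(p.argmax())}", confidence))
    return results

# Two candidates: one clear-cut, one ambiguous.
scores = np.array([[0.95, 0.05],   # confident: the model answers
                   [0.55, 0.45]])  # uncertain: the model abstains
for verdict, confidence in evaluate_with_abstention(scores):
    print(f"{verdict} (confidence={confidence:.2f})")
```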

Companies that understand this before anyone else will have a lasting strategic advantage: trust.
And in a world saturated with "intelligent" tools, trust remains the rarest resource - and the most human.

Sources

  • Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. Proceedings of the 2018 Conference on Fairness, Accountability and Transparency (FAT*).

  • Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

  • Harvard Business Review (2024). Building Trustworthy AI for HR: Governance Lessons from the Field.

  • OECD (2023). AI, Governance and Trust Frameworks.

  • Jobin, A., Ienca, M., Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence.
