How do you make an AI truly responsible?
Responsible AI: an ideal or a discipline?
Talk of responsible AI has become a reflex. But how many companies really know what it means? Too often the concept is decorative; it should be read as a moral and methodological contract. It commits society, researchers, engineers, but also... the planet.
The word "responsible" doesn't just refer to compliance with a legal framework. It speaks of intent and consequences. Simply put: if AI improves profits but degrades resources or undermines trust, then it misses its time.
Indeed, the OECD Principles on AI (2019) emphasize this tension between innovation and the duty of sustainability: to develop systems that are "robust, safe and fair". Easy to say. Harder to do.
Stakeholders, the forgotten code
One of the first reflexes should be to map the stakeholders before writing the first line of code. Who will be affected? Who has the power to say "no"?
Too often, stakeholder mapping is limited to a few slides in a presentation. Yet including users, employees, regulators and even local communities in the design of a model changes everything.
Let's take a concrete example: an HR AI that ranks job applications. If it is trained on biased data, it perpetuates inequalities; but if the steering committee includes an employee representative or a diversity expert, the code evolves. And governance too.
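To make this concrete, here is a minimal sketch, in Python and with purely hypothetical data, of the kind of bias audit such a committee might ask for: comparing shortlisting rates across groups before the model is signed off. The group labels and the 80% threshold are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch of a bias audit for an HR screening model.
# The data, group labels and the 80% ("four-fifths") threshold are
# assumptions for illustration, not a legal or normative standard.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, is_selected in decisions:
        totals[group] += 1
        selected[group] += int(is_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (demographic group, shortlisted?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
ratio = disparate_impact(rates)
print(rates, ratio)
if ratio < 0.8:  # heuristic alert threshold, assumed for this sketch
    print("Selection rates diverge: escalate to the steering committee.")
```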
The work of Brundage et al. (2020) on verifiable governance reminds us of the obvious: an AI is not responsible by nature, it becomes so by design and debate.
The planet, the voiceless stakeholder
Rarely invited around the table, the planet remains the great absentee of AI dialogue. Yet every query, every model, every training session consumes colossal amounts of energy.
According to Lee et al. (2025, AI and Ethics), the carbon footprint of a large language model can be equivalent to several hundred transatlantic flights.
So, how can we reconcile progress and sobriety? Not by trying to "green" AI a posteriori, but by integrating the ecological constraint right from the design stage.
This means assessing the impact right from the prototyping phase: water consumption, recyclability of hardware, lifespan of servers. It's not glamorous, but it's where the credibility of digital sustainability is at stake.
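As an order-of-magnitude exercise, a back-of-envelope estimate of a training run's energy and carbon cost can already sit in the prototyping notebook. The sketch below is illustrative only: the cluster size, power draw, PUE and grid intensity are assumed placeholders, to be replaced with measured values and local grid data.

```python
# Back-of-envelope estimate of a training run's footprint, done at
# prototyping time. All numbers below are illustrative placeholders.

def training_footprint(gpu_count, gpu_power_kw, hours, pue, grid_kgco2_per_kwh):
    """Return (energy in kWh, emissions in kg CO2e) for one training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue  # PUE covers cooling overhead
    return energy_kwh, energy_kwh * grid_kgco2_per_kwh

energy, co2 = training_footprint(
    gpu_count=64,            # hypothetical cluster size
    gpu_power_kw=0.4,        # ~400 W per accelerator (assumed)
    hours=72,                # assumed duration of the run
    pue=1.4,                 # data-centre overhead (assumed)
    grid_kgco2_per_kwh=0.3,  # depends entirely on the local energy mix
)
print(f"{energy:.0f} kWh, ~{co2:.0f} kg CO2e")
```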
Governing technology rather than being subjected to it
Today, public and private institutions are experimenting with different governance models: ethics committees, labels, bias audits, ESG charters... but all of this often remains cosmetic.
Responsible AI is AI under living governance, not AI locked up in a PDF.
The European Union, through its AI Act (2024), is pushing to make these principles binding: transparency of models, explainability, right of appeal. This is certainly a turning point. But regulation will never replace culture.
Training engineers capable of saying "no" to an unjustified automated decision: that's the real challenge.
To conclude, without concluding
Responsible AI is not a destination. It's a discipline in motion, fueled by contradictions: innovate without damaging, automate without dehumanizing, optimize without exhausting.
As the IEEE nicely sums up in its report Prioritizing People and Planet (2023):
"An ethical AI is not an AI that never fails, but one that learns why it failed."
That's easier said than done - but that's precisely why this symposium exists.
Sources
OECD (2019). Principles on Artificial Intelligence. oecd.org
Brundage, M. et al. (2020). Toward Trustworthy AI Development: Mechanisms for Verifiable Claims. arXiv:2004.07213
IEEE (2023). Prioritizing People and Planet as the Metrics for Responsible AI.
Lee, S.U. et al. (2025). Integrating ESG and AI: A Responsible AI Assessment Framework. AI and Ethics. Springer.
Papagiannidis, E. et al. (2025). Responsible AI Governance: A Review. Technological Forecasting and Social Change.

