The 2025 Edition: Key Highlights

BetterAIForum Conference Recap: Rethinking AI, Work, and Academic Responsibility

This year’s BetterAIForum conference at ESCE offered a clear and structured look at how artificial intelligence is reshaping both the world of work and the mission of higher education. Three key moments stood out, each shedding light on a different dimension of this transition.

1. Mapping the Research Landscape: How AI Is Transforming Work and Management

The day opened with a bibliometric analysis revealing a sharp global acceleration in research on algorithmic management. China, the United States, and the United Kingdom lead the field, each contributing a rapidly expanding body of work.

Across studies, a few themes consistently reappear: performance, engagement, autonomy, trust, emotional strain, and resource management.

The takeaway is clear: researchers are no longer simply examining AI in the workplace. They are documenting the rise of new models of coordination, supervision, and motivation, fundamentally reshaped by algorithmic systems.

2. Toward Inclusive AI: Designing Systems Around Human Diversity

The second presentation shifted the focus to an equally critical issue: cognitive and linguistic inclusion.

Two strong messages emerged:

  • Neurodiversity (ADHD, autism spectrum conditions, dyslexia, etc.) represents a reservoir of creativity and alternative reasoning styles that should inform how AI systems are built.

  • Linguistic diversity is essential for fairness and ethical robustness. An inclusive AI cannot simply “speak multiple languages”; it must learn to think through them, instead of reinforcing English-centric patterns that erase non-Western argumentation styles.

The message was powerful: responsible AI is designed not to standardize people, but to make room for their languages, cultures, and cognitive styles.

3. The Academic Paradox: Embracing AI While Protecting Integrity

The final session, led by Professor Serge Besanger, highlighted a dilemma facing every institution today: on one side, pressure to adopt AI and modernize curricula; on the other, the duty to certify genuine academic skills and original student work.

To respond to this challenge, the school introduced three new institutional charters (for students, faculty, and researchers), along with a shift in instructional posture: faculty members are becoming research coaches, and evaluation is moving from the final product to the research process itself.

Practical tools were presented, including AI audit trails, interim oral defenses, raw-data verification, and reflective chapters critiquing AI output. All aim to safeguard academic integrity while enabling responsible innovation.

Key Insights

The conference made one thing clear: building responsible AI is not only a technical challenge, but a cultural and educational one.

AI can expand creativity, elevate academic rigor, and open new possibilities, provided it is used as a platform for collaborative intelligence, one attentive to human diversity rather than one that erases it.

BetterAIForum

AI that is innovative yet controlled, powerful yet transparent, technical yet deeply human.

https://www.betteraiforum.com