On April 10th, the Human Technology Foundation hosted an event on a highly topical subject:
"AI Governance: How to Reconcile Innovation, Regulation, and Ethics?"
As the European AI Act moves closer to implementation, this event provided an opportunity to explore the implications of the new regulation and to collectively reflect on the strategies needed to balance compliance, impact, and competitiveness.
1. A risk-based approach at the heart of the European framework
The AI Act introduces a classification of AI systems according to their level of risk. This means stricter requirements for so-called “high-risk” systems — a category that still requires clarification in practice.
2. Regulation and ethics: between complementarity and tensions
The discussions highlighted the necessary trade-offs between legal obligations and ethical values. These two pillars must not be seen as opposed, but rather as working in dialogue to build trustworthy AI.
3. Strategic roles front and center
Ethics, legal, business, and CSR departments play a central role in the practical implementation of responsible AI. Their coordination is essential to anticipate risks and meet societal expectations.
4. Towards a clear, flexible, and human-centered framework
Beyond legal compliance, participants stressed the importance of keeping humans at the center of automated processes. This requires a regulatory framework that is both structured and adaptable — one that can evolve alongside innovation.
We had the pleasure of welcoming four distinguished speakers:
On this occasion, Grimaud Valat presented our report published in January, which focuses on the implementation of the obligation to provide a detailed summary of the data used to train general-purpose AI models. 👉 Read the full report
🙏 Thank you to our speakers for the depth of their insights, and to all participants for joining this important discussion.