Interview with Thomas Hahn

“Transparency and additional research into the explainability of algorithms are essential to strengthen and maintain user confidence in AI systems.”

Thomas Hahn, Chief Expert Software at Siemens, advocates for a harmonized and risk-based regulatory approach to AI at the European level.

How would you define the main objectives of an AI regulatory framework?

A regulatory AI framework must ensure that AI systems on the EU market are safe and respect EU laws and values. It also creates legal certainty to facilitate investment and innovation in AI. We welcome a risk-based approach. For example, the use of AI in assembly lines in factories is commonly accepted because it brings efficiency gains. Conversely, the use of robotics in everyday life may fall into a higher risk class. It is also necessary to take existing regulations into account and avoid duplicating rules, which could be counterproductive.

How would you define the risks associated with AI?

The introduction of machine learning into regulated industries can change the behavior of a machine, which raises a number of questions about the impact of this transformation on regulation. The other major issue is the explainability of algorithms. The increasing performance of algorithms over the years has been accompanied by growing difficulty in explaining the rationale of their internal workings, which is a real issue at the legal level. More research on explainability is needed to strengthen and maintain users’ trust in AI systems.

Regulation is sometimes seen as a brake on innovation. What is your opinion?

It is difficult to give a clear-cut answer. A balance must be found between the need to build trust in artificial intelligence and the risk of destroying the innovative power of this technology. We are all involved in the European Commission’s related policy discussion and consultation processes (e.g. on the AI Act and, in particular, on the proposal to create regulatory AI sandboxes), which aim to test real-world projects with cohorts of companies in a controlled environment. This collaborative approach between government, industry, and other stakeholders is founded on the importance of establishing transparency and trust in AI.

At the moment, different strategies are emerging with regard to AI, both within companies and across countries. What do you think is the best approach?

The creation of a common and harmonized approach at the European (or even global) level would be highly desirable. There are already many initiatives in this direction. A few examples: under the umbrella of the Digital Trust Forum, we are currently working on a project for a “Trust AI” label that would guarantee the acceptability of AI in a clear and transparent way. On a personal level, I am also involved in the French program and the Data/Industrial AI initiatives with BDVA/Gaia-X, promoting the industrialization of so-called trusted AI in critical industrial products and services.

How can governments manage to regulate artificial intelligence, which is constantly changing?

It is in the interest of everyone to include AI experts (from government, academia, industry, users, providers, society, etc.) in the shaping of AI regulation. This is for one simple reason: all of these actors share a common interest in achieving safe and trusted AI.
