AI Management

Study "Management System Support for Trustworthy Artificial Intelligence"

Artificial intelligence offers great innovation potential for business and society. Particularly powerful AI systems intended to perform demanding tasks, for example in autonomous driving, are based on machine learning methods and thus on the processing of large volumes of data. This poses new challenges, above all for companies and developers: they must keep track of the associated risks, ensure safe use and, ideally, develop AI solutions that can be deployed worldwide.

New standards for the management of artificial intelligence

To meet these requirements, various organizations are currently working on regulatory guidelines and international standards that companies can use as a guide when developing and deploying new AI technologies. Among other things, this involves standards for so-called management systems, which have already proven successful in supporting companies in other sensitive areas, such as information security management.

Currently, a joint working group of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) is developing an international standard for AI management systems (AIMS).

An investigation into the role of management systems for AI

In over 60 pages, the Fraunhofer scientists investigate the contribution of AI management systems to Trustworthy AI.

But what role do management systems play in the context of Artificial Intelligence, and can they promote the development and use of Trustworthy AI? Researchers at Fraunhofer IAIS have investigated this question.

In the study "Management System Support for Trustworthy Artificial Intelligence", they reviewed the draft of this standard available to date and compared it with the requirements and recommendations for trustworthy artificial intelligence published so far by the European Commission, the High-Level Expert Group on AI (HLEG) it commissioned, and the German Federal Office for Information Security (BSI).

The Fraunhofer IAIS study shows that an AI management system can promote trustworthy AI in two ways: on the one hand, it helps companies define suitable strategies and processes for the trustworthy development and use of AI technologies; on the other hand, it is an important building block for strengthening the trust of users and other stakeholders in AI systems.

Trustworthy AI at Fraunhofer IAIS

Did you know? As one of Europe's leading research institutes in the field of artificial intelligence, Fraunhofer IAIS focuses on the trustworthiness and reliability of AI systems. Below you will find a selection of current projects and services.

Project

"KI-Absicherung"

How can we ensure and prove that AI modules in autonomous vehicles function safely? Partners from industry and research are working on this question in the "KI-Absicherung" project.


Project

"Zertifizierte KI"

The project promotes the development and standardization of testing criteria, methods, and tools for technically reliable AI systems to ensure responsible use.

 

Whitepaper

Trustworthy AI

The whitepaper explains the fields of action for trustworthy AI from philosophical, ethical, legal, and technological perspectives. It forms the basis for the further development of AI certification.

 

AI Trustworthiness Check

We analyze your AI application with regard to trustworthiness and usability, even before it is introduced in your specific use case.

 

Guide

KI-Prüfkatalog

The "KI-Prüfkatalog" is a structured guide for the design, evaluation, and quality assurance of trustworthy Artificial Intelligence for developers and examiners.

Certified training

"Certified Data Scientist Specialized in Trustworthy AI"

Ethics, law, transparency: trustworthy AI should be developed along specific fields of action. This training course teaches the relevant fundamentals.