Guideline for Trustworthy Artificial Intelligence
From voice assistants to the analysis of job application documents to autonomous driving: Artificial Intelligence (AI) is widely regarded as a key technology of the future. This makes it all the more important to design AI applications so that they operate securely and handle data transparently and reliably. This is a necessary prerequisite for AI to be used in sensitive areas and for users to place lasting trust in the technology.
Quality and trust as competitive advantages
To develop high-quality AI products and services, it is essential for companies and developers to ensure and demonstrate the trustworthiness of an AI system: either from the start of development (by design) or through objective assessment during operation.
In this way, AI applications not only comply with the relevant guidelines and foster trust and acceptance, but can also contribute to a company's brand and thus create competitive advantages.
A structured guideline for defining application-specific assessment criteria
In order to limit risks and secure society's fundamental trust in AI, the High-Level Expert Group on AI (HLEG) and the German government's Data Ethics Commission have drawn up general guidelines for the development of AI applications. However, these are often quite abstract and contain hardly any concrete requirements for companies and developers. In addition, the German AI standardization roadmap, including its recently published second version, makes it abundantly clear that there is a great need for precise quality regulations and standards for AI applications. Finally, the forthcoming AI Act will also make conformity assessments mandatory for high-risk AI systems.
The AI assessment catalog of Fraunhofer IAIS addresses precisely this issue: it offers a structured guideline for translating abstract quality standards into concrete, application-specific assessment criteria.