AI Assessment Catalog

Guideline for Trustworthy Artificial Intelligence

From voice assistants to the analysis of application documents and autonomous driving, Artificial Intelligence (AI) is widely regarded as a key technology of the future. This makes it all the more important to design AI applications so that they operate securely and handle data transparently and reliably. This is a necessary prerequisite for AI to be used in sensitive areas and for users to place lasting trust in the technology.

Quality and trust as competitive advantages

In order to develop high-quality AI products and services, it is essential for companies and developers to ensure and demonstrate the trustworthiness of an AI system: either from the start of development (by design) or through objective assessment during operation of the application.

In this way, AI applications not only comply with the relevant guidelines and foster trust and acceptance, but can also strengthen a company's brand and thus yield competitive advantages.

Structured guideline to define application-specific assessment criteria

In order to limit risks and ensure society's fundamental trust in AI, the High-Level Expert Group on AI (HLEG) and the German government's Data Ethics Commission have created general guidelines for the development of AI applications. However, these are often quite abstract and contain hardly any concrete requirements for companies and developers. In addition, the German AI standardization roadmap, including its recently published second version, makes it abundantly clear that there is a great need for precise quality regulations and standards for AI applications. Finally, the forthcoming AI Act will also mandate conformity assessments for high-risk AI systems.

The AI assessment catalog of Fraunhofer IAIS addresses precisely this issue and offers a structured guideline that can be used to concretize abstract quality standards into application-specific assessment criteria.

Scope and application areas of the AI assessment catalog

The AI assessment catalog offers you:

  1. A guideline for the structured identification of AI-specific risks with regard to the six dimensions of trustworthiness: fairness, autonomy and control, transparency, reliability, safety and security, and data protection.
  2. Guidance that can be used to formulate specific assessment criteria for an AI application. To this end, the AI assessment catalog lists established KPIs to quantify corresponding targets - where possible.
  3. Guidance on the structured documentation of technical and organizational measures along the lifecycle of an AI application that reflect the current state of the art and whose implementation can mitigate potential AI risks.
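To make point 2 concrete: one established fairness KPI of the kind such catalogs typically reference is the demographic parity difference. The sketch below is a hypothetical illustration only; the function name, values, and the 0.10 target are assumptions for this example and are not taken from the assessment catalog itself.

```python
# Hypothetical sketch: quantifying a fairness target with an established KPI
# (demographic parity difference). All names and thresholds are assumptions
# chosen for illustration, not prescribed by the AI assessment catalog.

def demographic_parity_difference(positive_rate_group_a: float,
                                  positive_rate_group_b: float) -> float:
    """Absolute gap between the favorable-outcome rates of two groups."""
    return abs(positive_rate_group_a - positive_rate_group_b)

# Example: a screening model selects 60% of group A and 45% of group B.
gap = demographic_parity_difference(0.60, 0.45)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.15

# An application-specific assessment criterion might then require,
# for instance, that this gap stay below an agreed threshold such as 0.10.
```

In practice, which KPI applies and which threshold is acceptable depend on the application context, which is exactly the concretization step the catalog guides.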

On the one hand, the AI assessment catalog provides developers with a guideline for systematically making AI applications trustworthy. On the other hand, it supports auditors and assessors in examining AI applications for trustworthiness in a structured manner.


Receive our AI assessment catalog free of charge

Work with us, benefit from our experience

Do you have questions about implementing an AI assessment? Do you need support and advice on how to make your AI application trustworthy? Then work with us!

Take advantage of the expertise of Fraunhofer IAIS - one of Europe's leading institutes at the interface of business and science in the field of Artificial Intelligence - and arrange a non-binding consultation appointment today.

Non-binding consultation appointment

Projects around trustworthy AI

Certified AI (Zertifizierte KI)

The Certified AI project promotes the development and standardization of assessment criteria, methods, and tools for AI systems in order to ensure technical reliability and responsible use of the technology.

AI Safeguarding (KI-Absicherung)

How can we ensure and prove that AI modules in autonomous vehicles will function reliably? Automobile manufacturers, suppliers, technology companies, and research institutes are working on this question in the "AI safeguarding" project.

Promotion and cooperation

The development of the AI assessment catalog was supported by the Ministry for Economic Affairs, Innovation, Digitalization and Energy of the State of North Rhine-Westphalia.

In cooperation with: