Offerings for AI Assurance & AI Assessments

Verifiably demonstrating the quality of AI systems builds trust among users and creates competitive advantages. For the productive deployment of AI in companies, it is also important that AI performs reliably and complies with legal requirements. The EU AI Act has created a complex regulatory environment around AI, and a large number of AI-specific standards now exist in addition.

We combine expertise in the latest developments in AI regulation and standardization with the scientific state of the art in trustworthy AI. We use this knowledge, together with our experience of what works in practice, to provide the best possible technical assurance for your AI systems and to mitigate AI-related risks.

Your advantage: With our support, you receive verifiably trustworthy AI solutions that meet current safety, security, and quality standards.

 

Contact us now

Analysis of Requirements

Analysis of Standards and Legal Requirements

We identify which standards are relevant for your AI system and how regulatory requirements can be translated into technical implementation.

Implementation

Assurance for your AI System

We provide assurance for your AI system through appropriate technical measures and safeguards.

Implementation

Implementation of the EU AI Act

Get support with implementing the EU AI Act.

Implementation

Implementation of TAIOps Frameworks

We extend your MLOps pipelines to ensure that regulatory requirements are addressed.

Implementation

IT Infrastructure for AI Testing Labs

We automate AI testing workflows and integrate AI evaluation tools into your operational processes.

Testing

Development of AI Testing Requirements

We develop tailored AI testing requirements adapted to your application domain.

Testing

Conducting AI Assessments

We test and assess your AI system and provide you with a scientific evaluation report.

 

Testing

Complimentary AI Assessment Catalog

AI Assessment in 4 Steps: Guidelines and Instructions

 

Professional Training

EU AI Act Professional Training Program

Lectures, briefings, training sessions, and compact crash courses tailored to the requirements of the EU AI Act.

 

Do you need a customized AI solution?

We would be happy to advise you without obligation and develop a solution specifically for your use case.

References

Success Stories

Here you will find a selection of our project partners.

How we have improved risk management

We compared AI risk assessment frameworks for a large international company. Based on this, the company optimized its global AI risk management.

This enables the globally active company to overcome the challenge of implementing differing regional normative requirements for its AI systems.


Voluntary AI quality label for companies as part of MISSION KI

For AI innovations to be trustworthy and successful on the market, companies need clear quality criteria. As a scientific partner in the MISSION KI initiative, we are working with PwC Germany, TÜV AI.Lab, VDE, AI Quality & Testing Hub, and CertifAI to develop a voluntary minimum quality standard for AI.

This standard is compatible with the EU AI Act, specifically addresses low-risk AI systems, and offers AI providers and operators a cross-industry, practical guide for the quality assurance of AI systems. Compliance with this standard enables companies to differentiate themselves from the competition through quality and trustworthiness.

Checking large language models for bias

Providers and operators of AI systems are legally required to identify bias against protected groups throughout the entire AI lifecycle and prevent associated discrimination and unfairness. This is particularly important for large language models (LLMs). To address this challenge, we are developing a text-specific data bias pipeline for an international technology corporation. This pipeline detects distortions in training data and reduces them using innovative approaches.

This enables the technology corporation to check large volumes of data for bias and to improve them, which increases the fairness of the LLMs trained on this data and protects end users in accordance with regulatory requirements.
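To illustrate the general idea behind such data bias checks, the following minimal sketch counts how often terms associated with different demographic groups appear in a text corpus and flags an imbalance. The term lists, group names, and the 30% threshold are hypothetical examples for illustration only; a production pipeline of the kind described above would use far more sophisticated methods.

```python
from collections import Counter

# Illustrative sketch only: a simple representation-bias check on training
# text. The term sets and the threshold below are hypothetical examples.
GROUP_TERMS = {
    "group_a": {"he", "him", "his"},
    "group_b": {"she", "her", "hers"},
}

def representation_shares(corpus, groups=GROUP_TERMS):
    """Count mentions of each group's terms and return per-group shares."""
    counts = Counter()
    for doc in corpus:
        tokens = doc.lower().split()
        for group, terms in groups.items():
            counts[group] += sum(1 for t in tokens if t in terms)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {group: counts[group] / total for group in groups}

docs = [
    "He said his results were ready.",
    "He gave him the report.",
    "She presented her findings.",
]
shares = representation_shares(docs)
# Flag the corpus if any group's share falls below a chosen threshold.
flagged = any(share < 0.30 for share in shares.values())
```

A check like this would run as one step of a larger pipeline; documents from under-represented groups could then be up-weighted or supplemented before training.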

Contact

 

Dr. Maximilian Poretschkin

Head of Department
AI Assurance and Assessments


“Our interdisciplinary team with expertise in computer science and law operationalizes the entire value chain for trustworthy AI.”

Publications

 

AI Assessment Catalog: Guideline for Trustworthy Artificial Intelligence

Download


Developing trustworthy AI applications with foundation models

Download whitepaper


Management System Support for Trustworthy AI

Download study