Research on Artificial Intelligence

AI excellence "made in Germany"

Artificial intelligence (AI) today offers companies a wide range of opportunities to make processes more efficient, reduce costs and develop innovative business models. AI solutions analyse large amounts of data from heterogeneous sources, enable more precise searches in any media format, and automate routine tasks such as content creation and compliance documentation. Our AI technologies are used in applications across the financial sector, retail, the media, public administration and healthcare.

Fraunhofer IAIS is one of Europe's leading research institutes for Artificial Intelligence. Working in excellent teams and at the centre of a strong network, we develop new AI applications, train young scientists and actively shape the scientific community. Our research focuses on the further development of fundamental AI technologies with concrete benefits for businesses and society, the responsible design of large language models, and the development of reliable and secure agentic AI systems. We always adhere to European AI standards for data protection and transparency and work closely with partners from industry, SMEs and the public sector to bring innovations into practice in a trustworthy and sustainable manner.

AI research priorities

 

Hybrid AI

What is hybrid AI and what are the advantages of combining different AI techniques? What research is being conducted at Fraunhofer IAIS in this area? Find out more.

 

Generative AI

How does generative AI work, and what are Fraunhofer IAIS's key interests in the development and specialisation of generative models? Find the answers here.

AI agents

What can AI agents do and how do they use generative AI? Read more about this topic here soon.

Trustworthy AI

What risks does AI entail, how can they be assessed, and what measures can be taken to test AI and make it more trustworthy? All important information will follow shortly.

Further research topics

Resilience & sustainability

Read more about this topic here soon.

Quantum computing

Quantum computing uses qubits and quantum physical effects to overcome the limitations of classical computers. Read here soon how quantum computers can massively accelerate AI processes.

AI from science for business

With us, companies become innovation leaders: at Fraunhofer IAIS, we conduct applied research with the aim of developing AI solutions that offer clear benefits for business and society. Our experienced scientists are sought-after partners for companies looking to use Artificial Intelligence in a trustworthy, explainable and application-oriented manner in a wide variety of industries. We work hand in hand with our customers and tailor our AI solutions to their individual requirements.

News from research

 

Keynote speech at Hannover Messe 2024

»Generative AI: Transformative potential for Germany and beyond«

Prof. Dr. Stefan Wrobel

Press releases & news

Read press releases and news about our latest research results, research projects, collaborations and more.

White papers and studies

In our studies and white papers, we present the latest findings on Artificial Intelligence, digitalisation and more – in a practical manner, with specific application examples and guidelines for companies.

Scientific publications

Here you will find the latest research findings from our scientists.

Definitions: Artificial Intelligence

From AI agents to trustworthy AI: the most important terms related to Artificial Intelligence.

  • Agentic AI refers to AI systems that think, learn and act independently in order to solve complex, multi-stage problems as autonomously as possible. The systems can react independently in dynamic environments and adapt their workflows or plans to new situations. Agentic AI uses the tools of generative AI. Agentic AI systems range from individual AI agents to multi-agent systems and powerful foundation models.

    Agent-to-agent (A2A) is a standard for communication and collaboration between autonomous AI agents. There are already public catalogues where an agent can dynamically find suitable external agents for a specific subtask. A new ecosystem for AI agents is expected to emerge on the internet, the "Agentic Web" or Web 4.0.

  • AI hallucinations are false or misleading results generated by large language models. The models generate seemingly plausible, grammatically correct and coherent texts or other content that is, in fact, false, invented or without any basis in reality. An important countermeasure is grounding, i.e. checking the output against reliable data sources.

  • Artificial General Intelligence (AGI) is a hypothetical type of AI that possesses human-level cognitive abilities. Unlike weak AI, which is limited to specific tasks, AGI could also solve previously unknown problems flexibly and creatively. Generative AI models are an important intermediate step on the path to AGI.

  • Artificial Intelligence (AI) is a branch of computer science that deals with how computers can mimic intelligent human behaviour. Neither the meaning of "intelligent" nor the technology used is defined. Knowledge-based technologies, machine learning, deep learning, and generative AI are different AI technologies that are combined in hybrid AI to compensate for their respective weaknesses. A major breakthrough in AI research would be the development of artificial general intelligence.

  • In computer science, an algorithm is a precise set of instructions for solving a problem. A learning algorithm (or self-learning algorithm) is an algorithm that receives sample data (learning data or training data) and calculates a model for the data seen, which can then be applied to new, previously unseen data. A minimal example of a learning algorithm at work is sketched after this list.

  • Bias in Artificial Intelligence refers to systematic distortions or prejudices that occur when AI models are based on flawed, unbalanced or unrepresentative training data or contain algorithmic assumptions. An AI system with bias can produce discriminatory or stereotypical results.

  • Data science is an interdisciplinary field of science that deals with methods, processes and algorithms for extracting insights from structured and unstructured data. The profession of data scientist requires knowledge of mathematics, business administration, computer science and statistics. Data scientists identify and analyse available data resources, determine requirements and develop concepts for using the data profitably.

    Fraunhofer IAIS offers data scientist training courses as part of the Fraunhofer Big Data and Artificial Intelligence Alliance.

  • Deep learning encompasses learning algorithms that generate artificial neural networks with many layers of artificial neurons as models. Deep learning is responsible for many successes in speech and text processing as well as image and video processing. Deep networks are referred to as black boxes because the features relevant for learning are formed automatically by the network itself and are encoded as numbers in the weights between the nodes. These weights are called parameters.

  • Domain-specific AI models are tailored precisely to the needs of a specific field, with the aim of working particularly accurately, efficiently and cost-effectively in that field. They are usually developed from foundation models by retraining them with relevant data. A process called knowledge distillation is used to obtain particularly compact and therefore efficient models.

  • Embedded AI refers to the integration of Artificial Intelligence directly into hardware devices and systems, enabling them to perform AI functions such as data processing, decision-making and pattern recognition locally and in real time without relying on external servers or the cloud. If the AI in the device interprets and controls sensors and actuators, this is also referred to as physical AI.

  • The EU AI Act is a European Union law that sets out rules and requirements for the use of AI systems. Its aim is to ensure the development and use of trustworthy AI in Europe by assessing risks, requiring transparency and regulating or prohibiting certain applications.

  • Foundation models are deep artificial neural networks with billions to trillions of parameters. These models are trained on high-performance computers using huge data sets; the training relies on self-supervised learning, which does not require annotations.

    Foundation models trained on text are called large language models (LLMs). They can solve a wide variety of tasks, such as answering questions or generating, revising or translating texts, without having been specifically trained to do so. These are referred to as emergent capabilities. To reinforce the desired behaviour, the models are retrained with additional learning methods and data. Hallucinations, bias and lack of transparency are among the risks of generative AI models, which are addressed in the context of trustworthy AI.

  • Generative Artificial Intelligence (Generative AI or GenAI) makes it possible to generate content such as text, images, audio, code or structured data simply by entering prompts.

    Generative AI is based on large language models, reasoning models or multimodal models. In modern AI systems such as interactive AI assistants or autonomous AI agents, these models are equipped with external knowledge, individual memory or external tools, for example via retrieval-augmented generation. The Model Context Protocol (MCP) is an important standard for connecting models and external tools.

    Learn more about generative AI at Fraunhofer IAIS.

  • Hybrid AI combines different types of Artificial Intelligence technologies, such as generative AI, deep learning, classic machine learning algorithms, and reasoning based on structured expert knowledge (knowledge graphs). The aim is to increase the flexibility and efficiency, but above all the precision, transparency, and robustness of an AI system.

    Learn more about hybrid AI at Fraunhofer IAIS.

  • Machine learning (ML) aims to generate "knowledge" from "experience" by using learning algorithms to develop a complex model from examples. The model, and thus the automatically acquired knowledge representation, can then be applied to new, potentially unknown data of the same type. Whenever processes are too complicated to describe analytically, but sufficient sample data – such as sensor data, images or texts – is available, machine learning is the ideal solution. The learned models can be used to make predictions or generate recommendations and decisions without any predefined rules or calculation formulas.

    Learn more about machine learning at Fraunhofer IAIS.

  • Multimodal foundation models can process data in various modalities, such as text, speech, images, videos, audio and other sensory inputs. It is important to distinguish between the modalities that a model can understand and those that it can produce. Some models process the various modalities end-to-end, i.e. within a single model rather than by combining several specialised models.

  • Prompt engineering is the discipline of designing instructions or inputs (prompts) for generative AI models in such a way that they deliver accurate, relevant and consistent results. In few-shot prompting, one or more examples are given for the task. In chain-of-thought prompting, the model is asked to first break down the task into subtasks and then work through them step by step. In context engineering, the aim is to optimise not only the prompt but the entire input, for example by supplying relevant data via RAG. Both prompting styles are illustrated in a short sketch after this list.

  • Quantum computers base their elementary calculation steps not on classical bits, but on quantum mechanical states, known as qubits, and on special properties such as superposition and entanglement, in order to perform certain calculations in a massively parallel way. Quantum machine learning (QML) explores various quantum algorithms for machine learning, including algorithms for quantum neural networks (QNNs). A small simulation of superposition and entanglement is sketched after this list.

  • Reasoning models are large language models that do not respond immediately, but rather break down their task into several steps, deal with failures and document this process. They are recommended for complex, multi-stage questions that require precise and comprehensible problem solving, such as in AI agents.

  • Reinforcement learning is an approach to machine learning in which the model is gradually trained through feedback on its results. Reinforcement learning can be used to retrain foundation models or in AI agents.

  • Retrieval-Augmented Generation (RAG) is a method of supplying a large language model with relevant data from external, current and specific knowledge sources. RAG allows the model's responses to be made more context-relevant, accurate and up-to-date without having to retrain the actual model. RAG is particularly useful for reducing hallucinations. The retrieval step is illustrated in a simplified sketch after this list.

  • Supervised and unsupervised learning are two approaches to machine learning. For supervised learning, the correct result (annotation, label) must be provided for each data example. The learning algorithm optimises the model by comparing its predictions with the correct results. In self-supervised learning, the learning algorithm generates the annotations independently, for example by creating gaps in the data that the model is then supposed to predict. This is a major advantage, as data with annotations is usually in short supply. The self-supervised learning of foundation models is essential for the success of generative AI.

  • Tokens are the smallest linguistic units in large language models; they are typically common word components. The total number of tokens influences the size of the model. The number of tokens in an input is often relevant for billing when using the model via a programming interface (API). A tokenizer breaks texts down into tokens; a toy tokenizer is sketched after this list.

  • Trustworthy AI applications function reliably, operate transparently and comprehensibly, respect data protection, act fairly and without discrimination, and are subject to human supervision. They should minimise risks, respect fundamental rights, and be used responsibly.
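
Illustrative code sketches

The following minimal Python sketches are meant to make a few of the definitions above more concrete. They use invented toy data, vocabularies and names; they are simplified illustrations of the general ideas, not Fraunhofer IAIS tools or production code.

A learning algorithm and supervised learning: the sketch fits a simple linear model to annotated sample data (each input x comes with its correct result y) and then applies the learned model to new, unseen inputs. The data and the linear relationship are purely illustrative assumptions.

    import numpy as np

    # Toy training data: inputs x with their correct results y (annotations),
    # here roughly y = 2x + 1 plus noise.
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 10, size=50)
    y_train = 2.0 * x_train + 1.0 + rng.normal(scale=0.5, size=50)

    # The "learning algorithm": ordinary least squares calculates a linear model for the data seen.
    features = np.column_stack([x_train, np.ones_like(x_train)])
    slope, intercept = np.linalg.lstsq(features, y_train, rcond=None)[0]

    # The learned model can now be applied to new sample data.
    x_new = np.array([3.0, 7.5])
    print(f"learned model: y = {slope:.2f} * x + {intercept:.2f}")
    print("predictions for new data:", slope * x_new + intercept)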
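
Few-shot and chain-of-thought prompting: both techniques amount to carefully structured input text. The sketch below only assembles such prompts as strings; the example tasks are invented, and no particular model or API is assumed.

    # Few-shot prompting: the prompt contains worked examples before the actual task.
    few_shot_prompt = (
        "Classify the sentiment of each review as positive or negative.\n\n"
        "Review: The staff were friendly and helpful.\nSentiment: positive\n\n"
        "Review: The delivery took far too long.\nSentiment: negative\n\n"
        "Review: The product broke after two days.\nSentiment:"
    )

    # Chain-of-thought prompting: the model is asked to break the task into subtasks
    # and work through them step by step before giving the final answer.
    chain_of_thought_prompt = (
        "A train travels 150 km in 1.5 hours and then 100 km in 1 hour. "
        "What is its average speed over the whole journey? "
        "First break the problem into subtasks, solve them step by step, "
        "and only then state the final answer."
    )

    # Either string would be passed to a generative model as its input (prompt).
    print(few_shot_prompt)
    print(chain_of_thought_prompt)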
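
Qubits, superposition and entanglement: the sketch simulates with ordinary linear algebra how a Hadamard gate puts one qubit into an equal superposition and how a CNOT gate then entangles it with a second qubit. A real quantum computer performs these operations physically; the simulation only illustrates the definition.

    import numpy as np

    # A qubit state is a 2-component complex vector; |0> corresponds to [1, 0].
    zero = np.array([1.0, 0.0], dtype=complex)

    # The Hadamard gate puts the qubit into an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = H @ zero

    # Measurement probabilities are the squared amplitudes: 50 % each.
    print(np.abs(psi) ** 2)                 # [0.5 0.5]

    # Applying a CNOT gate to H|0> combined with a second qubit |0> yields an
    # entangled Bell state: both qubits are always measured with the same value.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    bell = CNOT @ np.kron(psi, zero)
    print(np.round(np.abs(bell) ** 2, 3))   # [0.5 0.  0.  0.5]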
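
Retrieval-augmented generation: the sketch shows the retrieval half of RAG in its simplest form. A question is compared against a small document collection by word overlap and the best-matching passage is placed into the prompt as grounding context. The documents and the scoring are toy assumptions; real systems use vector databases and learned embeddings, and the resulting prompt would then be sent to a large language model.

    # Toy knowledge source: in practice these would be text chunks from a document store.
    documents = [
        "The museum is open Tuesday to Sunday from 10:00 to 18:00.",
        "Admission is free for children under six years of age.",
        "The annual report 2024 was published in March 2025.",
    ]

    def retrieve(question: str, docs: list[str]) -> str:
        """Return the document sharing the most words with the question (toy retrieval)."""
        q_words = set(question.lower().split())
        return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

    question = "When is the museum open?"
    context = retrieve(question, documents)

    # The retrieved passage is added to the prompt so the model can ground its answer.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )
    print(prompt)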
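
Tokens and tokenizers: the sketch implements a deliberately tiny greedy tokenizer over an invented subword vocabulary. Real tokenizers work on the same principle but learn vocabularies of tens of thousands of word components from data.

    # Invented toy vocabulary of word components; real vocabularies are learned and far larger.
    VOCAB = sorted(
        ["token", "tok", "izer", "er", "ing", "s", " ", "a", "e", "i", "n", "o", "r", "t", "z"],
        key=len, reverse=True,
    )

    def tokenize(text: str) -> list[str]:
        """Greedily match the longest known word component at each position."""
        tokens, pos = [], 0
        while pos < len(text):
            piece = next((p for p in VOCAB if text.startswith(p, pos)), text[pos])
            tokens.append(piece)
            pos += len(piece)
        return tokens

    print(tokenize("tokenizer tokens"))   # ['token', 'izer', ' ', 'token', 's']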