eKI4DS – Explainable AI for Dynamic Stability
- Contact:
- Project Group:
- Funding:
BMWK, grant agreement 03EI1092C
- Startdate:
2025-01-01
- Enddate:
2027-12-31
Collaborative project: 01273890/1 – eKI4DS – Explainable AI for Dynamic Stability; Subproject: Artificial Intelligence and Explainability Approaches
The aim of the project is to develop, validate and demonstrate methods of explainable artificial intelligence (AI) for predicting the dynamic security of the power grid and for evaluating possible stabilizing measures in a practical context. AI here refers generally to methods that learn correlations from empirical or synthetic data; with these learned correlations, an AI can make predictions for new situations that are not directly represented in the data. Explainability means that we are interested not only in the AI's predictions but also in the correlations it learns: we want to know why the AI arrives at the prediction it makes.

The focus on explainable AI is intended to make clear to grid operators and grid planners why the AI predicts possible stability problems and identifies countermeasures. This step is essential for the use of AI in a safety-critical environment. The quick, transparent analysis of the grid's security margins that such AI makes possible enables more efficient and safer operation, as well as the early consideration of stability and security aspects in planning. This makes the system more secure, makes better use of the existing infrastructure and avoids unnecessary grid expansion.

The project involves both the development of new explainable AI methods and the creation of extensive open data sets that map possible stability problems. Such high-quality open data sets are a prerequisite for the research and development of data-intensive methods for the future energy system, and they create a lasting basis for the rapid transfer of research in AI/big data into practice.
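As a toy illustration of what "knowing why the AI arrives at a prediction" can mean, the sketch below shows permutation feature importance, one common model-agnostic explainability technique: shuffle one input feature and measure how much the model's error grows. All data, feature names and the surrogate "model" here are invented for illustration and are not from the project itself.

```python
# Minimal sketch of permutation feature importance. A feature the model
# actually relies on (here "load") should show a larger error increase
# when shuffled than an irrelevant one (here "noise").
import random

random.seed(0)

# Synthetic "grid snapshots": the stability margin depends on load only.
load = [random.uniform(0.0, 1.0) for _ in range(200)]
noise = [random.uniform(0.0, 1.0) for _ in range(200)]
margin = [1.0 - 0.8 * l + 0.05 * random.gauss(0, 1) for l in load]


def predict(l, n):
    # Hand-fitted surrogate standing in for a trained AI model.
    return 1.0 - 0.8 * l + 0.0 * n


def mse(loads, noises, targets):
    errs = [(predict(l, n) - y) ** 2 for l, n, y in zip(loads, noises, targets)]
    return sum(errs) / len(errs)


baseline = mse(load, noise, margin)


def importance(feature):
    # Shuffle one feature column; the error increase is its importance.
    shuffled = feature[:]
    random.shuffle(shuffled)
    if feature is load:
        return mse(shuffled, noise, margin) - baseline
    return mse(load, shuffled, margin) - baseline


print(f"importance(load)  = {importance(load):.4f}")
print(f"importance(noise) = {importance(noise):.4f}")
```

Shuffling `load` destroys the correlation the model exploits, so its importance comes out large, while shuffling `noise` leaves the predictions untouched. In the project context, far richer explanation methods are in scope; this only illustrates the basic idea of attributing a prediction to its inputs.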