FAIR-AI
FAIR-AI addresses the research gap between the requirements of the upcoming European AI Act and the obstacles to implementing them in the daily development and management of AI-based projects and their AI Act-compliant operation. These obstacles are multilayered: technical (e.g., inherent risks of current machine learning such as data shift in non-stationary environments), engineering and managerial (e.g., the need for highly skilled labor, high initial costs, and project risks at the project management level), and socio-technical (e.g., the need for risk awareness when applying AI, including human factors such as cognitive bias in AI-assisted decision making). In this context, we regard the detection, monitoring, and, where possible, anticipation of risks at all levels of system engineering and application as a key factor. Rather than claiming a general solution to this problem, our approach follows a bottom-up strategy: we select typical pitfalls in specific engineering and application contexts and compile them into a repository of instructive, self-contained “mini”-projects. Going beyond the state of the art, we explore approaches to risk anticipation and their integration into a recommender system that provides active support and guidance.
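To make the notion of "data shift in non-stationary environments" concrete, the following is a minimal sketch of how such a shift could be monitored in practice, here using a two-sample Kolmogorov-Smirnov test on a single feature. It is an illustrative assumption, not part of FAIR-AI's tooling, and the names (`detect_feature_drift`, `reference_batch`, `alpha`) are hypothetical.

```python
# Illustrative sketch of data-shift monitoring (not FAIR-AI tooling):
# compare an incoming feature batch against a reference batch with a
# two-sample Kolmogorov-Smirnov test and flag a possible distribution shift.
import numpy as np
from scipy.stats import ks_2samp


def detect_feature_drift(reference_batch: np.ndarray,
                         incoming_batch: np.ndarray,
                         alpha: float = 0.01) -> bool:
    """Return True if the incoming batch likely differs in distribution."""
    statistic, p_value = ks_2samp(reference_batch, incoming_batch)
    return p_value < alpha


# Example: reference data from training time vs. a drifted production batch.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=2000)
production = rng.normal(loc=0.5, scale=1.0, size=2000)  # mean has shifted
print(detect_feature_drift(reference, production))  # -> True (shift flagged)
```

In a deployed system, a check of this kind would typically run per feature and per time window, feeding the kind of risk monitoring and anticipation described above.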