Process for securing Trustworthy AI - solutions
Background and goals
Arcada's Trustworthy AI Lab teaches and prepares concrete assessment processes for
developers of AI solutions, which is essential for building trust.
This assessment is combined with an understanding of the ethical practices for the use of the technology from a human perspective. To establish methods for assessing the development of AI solutions, we need to develop, together with end-users, businesses, communities and others, a
common practical view on how complex technologies should be allowed to interact with society. This is particularly important in sensitive environments such as healthcare. We use the Z-Inspection process (Zicari et
al., 2021) to assess Trustworthy AI and to create AI solutions that are safe, reliable and sustainable. This is done by combining a holistic, analytical qualitative process for
assessing the safety of AI solutions with research on the practices of AI and other solutions.
Objectives and benefits
The application of AI requires a deep understanding of safe use and ethical considerations. The differences between AI-based solutions and traditional IT software call for a refined approach to integrating solutions that are not deterministic. We study such differences to gain insights into how processes need to be created and adapted to suit the introduction and facilitation of AI. This is not just a technical task; it stretches the boundary of applied ethics into engineering practice when dealing with models that can be difficult to interpret. Our activity also aims to critically examine the ethical and subjective consequences that may arise from the use of such technologies, by initiating long-term inductive data collection in the field.
Results
Qualitative processes and methods to assess the trustworthiness of AI solutions.
Societal impact
This is a project intended to establish a lasting activity. Everything we learn during the
course of the project, and the new networks we create, will be important for our ability to
assess the trustworthiness of AI solutions in the future as well. Arcada's goal is to be a forerunner in Finland; we
already have the only Trustworthy AI lab that uses this qualitative process.