News from Arcada's Laboratory for Trustworthy AI
Arcada's Laboratory for Trustworthy AI presents the latest news from the research community, listed by date.
22 May 2024: Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment
The Trustworthy AI Laboratory at Arcada participated in the Pilot Project for “Responsible use of AI” in collaboration with the Province of Fryslân, Rijks ICT Gilde in the Netherlands and the Z-Inspection® Initiative, and a report has now been released.
The pilot project ran from May 2022 to January 2023. During the pilot, the practical application of a deep learning algorithm from the Dutch province of Fryslân was assessed. The AI maps heathland grassland from satellite images to monitor nature reserves.
“This report is made public. The results of this pilot are of great importance for the Dutch government, serving as a best practice with which public administrators can get started, and incorporate ethical and human rights values when considering the use of an AI system and/or algorithms. It also sends a strong message to encourage public administrators to make the results of AI assessments like this one, transparent and available to the public.” (quoting from the report).
The report "Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment" can be read via this link External link.
8 February 2024: New projects to further strengthen Arcada's Laboratory for Trustworthy AI
The three new projects are presented below:
DeployAI – Development and Deployment of the European AI-on-demand Platform
Time frame: January 1, 2024 - December 31, 2027
Funding organisation: EU Digital
Arcada lead: Magnus Westerlund
The primary goal of DeployAI is to build, deploy, and launch a fully operational AI-on-demand platform (AIoDP) promoting trustworthy, ethical, and transparent European AI solutions for use in industry, mainly by SMEs, and in the public sector. The development of the AIoDP will be based on the requirements of the Pre-PAI and the ongoing AI4Europe projects. DeployAI will provide a comprehensive Trustworthy AI (TAI) resource catalogue and marketplace offering responsible AI resources and tools, ensuring easy access for end-users (SMEs, public sector) and asset developers, and meeting industrial standard requirements. The AIoDP will allow the rapid prototyping of TAI applications and their deployment to a variety of cloud/edge/HPC infrastructures. To lower the entry barrier to using AI and to offer advanced AI capabilities, responsible European LLMs will be integrated in the AIoDP to enable services for downstream tasks, fine-tuning and other complex GPAI workflows.
The AIoDP will be embedded in the European AI ecosystem, with connections especially to EDIHs, TEFs, Dataspaces, SIMPL, and HPC/cloud/edge infrastructure. Interfaces to European initiatives and industrial AI-capable cloud platforms will be implemented, including an open API, to enable interoperability. A significant number of TAI resources will be made available on the AIoDP and will be qualified and labelled through an established process.
Further, DeployAI will establish a viable AIoDP engagement strategy for AI resource providers and AI users and stimulate the European AI innovation landscape with its FSTP programme. Active stakeholder engagement will be ensured through matchmaking services and an interactive landscape tool. Finally, the project will provide a sustainable business model and a viable long-term strategy for the AIoDP. Governance structures responsible for the ongoing operations of the AIoDP will be put in place, and a permanent legal entity to own and operate the future AIoDP will be established.
MANOLO: Trustworthy Efficient AI for Cloud-Edge Computing
Time frame: January 1, 2024 - December 31, 2026
Funding organisation: EU Horizon
Arcada lead: Magnus Westerlund
MANOLO will deliver a complete stack of trustworthy algorithms and tools that help AI systems reach better efficiency and seamless optimisation of the operations, resources and data required to train, deploy and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments. It will push the state of the art in a collection of complementary algorithms for training, understanding, compressing and optimising machine learning models by advancing research in model compression, meta-learning (few-shot learning), domain adaptation, frugal neural network search and growth, and neuromorphic models. Novel dynamic algorithms for data- and energy-efficient, policy-compliant allocation of AI tasks to assets and resources in the cloud-edge continuum will be designed, allowing for trustworthy widespread deployment.
To support these activities, a data management framework for distributed tracking of assets and their provenance (data, models, algorithms) and a benchmark system to monitor, evaluate and compare new AI algorithms and model deployments will be developed. Trustworthiness evaluation mechanisms for explainability, robustness and security of models will be embedded at its core, using the Z-Inspection® methodology for Trustworthy AI assessment and helping AI systems conform to the new AI Act regulation.
MANOLO will be deployed as a toolset and tested in lab environments via use cases with different distributed AI paradigms in cloud-edge continuum settings. It will be validated in verticals such as health, manufacturing and telecommunications, aligned with ADRA-identified market opportunities, and on a granular set of embedded devices covering robotics, smartphones and IoT, as well as neuromorphic chips. MANOLO will integrate with ongoing EU-level projects developing the next operating system for the cloud-edge continuum, while promoting its sustainability via the AI-on-demand platform and EU portals.
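As a concrete illustration of one of the research areas listed above, the following is a minimal sketch of model compression via post-training dynamic quantization in PyTorch. It is not the MANOLO toolset itself; the stand-in model, layer sizes and dtype choice are assumptions made purely for illustration.

```python
# Minimal sketch of one model-compression technique (post-training dynamic
# quantization). Not the MANOLO implementation; model and sizes are assumed.
import torch
import torch.nn as nn

# A small stand-in model (e.g. a classifier head).
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Quantize the Linear layers' weights to int8; activations are quantized
# dynamically at inference time, shrinking the model and speeding up CPU runs.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x).shape, compressed(x).shape)  # same interface, lighter model
```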
Applied Ethical AI on Nordic Patient Records
Time frame: 2023 - 2025
Funding organisation: Nordic Innovation
Arcada lead: Magnus Westerlund
The goal of the project is to develop and demonstrate an ethical algorithm capable of reading both digital and analogue patient records across medical health record systems, Nordic borders and Nordic languages.
The project will develop and demonstrate an ethical AI-based solution for reading Danish and Norwegian patient records stored in various medical health record systems. To achieve this, the project will utilize cutting-edge deep learning methods and ethical AI algorithms, specifically transformer-based neural network architectures capable of interpreting unstructured health data, such as clinical notes and reports, from different medical record systems.
Multiple teams will collaborate on a single use case, each utilizing its own dataset with variations in features. The primary goal is to handle the data ethically, responsibly and securely within the legal framework, while providing evidence for a new approach to training AI models using data from different Nordic repositories.
The project seeks to overcome privacy and security concerns, enabling the utilization of federated learning solutions in commercial projects and facilitating future Nordic collaborations without the need to share sensitive data across borders. Ultimately, the project aims to enhance healthcare practices and promote the ethical and trustworthy application of AI technologies in the medical domain.
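To illustrate the federated learning idea mentioned above, here is a minimal sketch of federated averaging (FedAvg), in which each site trains on its own records and only model parameters are shared. This is not the project's implementation; the site names, toy model and update rule are assumptions for illustration only.

```python
# Minimal FedAvg-style sketch: sites train locally, only parameters are
# aggregated, raw patient data never leaves a site. Hypothetical sites and
# a toy "training" step; not the project's actual method.
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    """Stand-in for one site's local training step on its own records."""
    gradient = local_data.mean(axis=0) - weights  # toy gradient
    return weights + 0.1 * gradient

def federated_average(site_weights, site_sizes):
    """Aggregate site models weighted by dataset size."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Two hypothetical hospitals holding private data of different sizes.
global_model = np.zeros(8)
sites = {
    "hospital_dk": np.random.randn(1000, 8),
    "hospital_no": np.random.randn(400, 8),
}

for _ in range(5):  # a few federated rounds
    updates = [local_update(global_model, data) for data in sites.values()]
    global_model = federated_average(updates, [len(d) for d in sites.values()])
```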
28 November 2023: New Pilot Project: Assessing Trustworthiness of the Use of Generative AI for Higher Education
The Laboratory for Trustworthy AI is participating in this pilot project of the Z-Inspection® initiative, which aims to assess the use of Generative AI in higher education based on specific use cases.
In this pilot project, we will assess the ethical, technical, domain-specific (i.e. education) and legal implications of using a Generative AI product/service within the university context.
We follow the UNESCO guidance for policymakers on AI and education, in particular policy recommendation 6: Pilot testing, monitoring and evaluation, and building an evidence base.
Approach
An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process.
More information about the pilot project is available here.
31 May 2023: First World Z-Inspection® Conference
The interdisciplinary meeting on March 10-11, 2023, at the Ateneo Veneto in Venice, Italy, welcomed over 60 international scientists and experts from AI, ethics, human rights and domains such as healthcare, ecology, business and law.
At the conference, the practical use of the Z-Inspection® process to assess trustworthy AI in real use cases was presented. Among the cases were:
- The Pilot Project “Assessment for Responsible Artificial Intelligence” together with Rijks ICT Gilde – part of the Ministry of the Interior and Kingdom Relations (BZK) – and the Province of Fryslân (The Netherlands).
- The assessment of the use of AI in times of COVID-19 at the Brescia Public Hospital (“ASST Spedali Civili di Brescia“).
Two panel discussions on “Human Rights and Trustworthy AI” and “How do we trust AI?“ provided an interdisciplinary view on the relevance of data and AI ethics in the human rights and business context.
The main message of the conference was the need for a Mindful Use of AI (#MUAI). This first World Z-Inspection® Conference was held in cooperation with the Global Campus of Human Rights and Venice Urban Lab and was supported by Arcada University of Applied Sciences, Merck, Roche and Zurich Insurance Company.
Download the conference reader here.
A video of conference impressions is available on LinkedIn.
13 June 2022: Publication collaboration
The lab collaborated on an IAIL 2022 publication, “Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI”, by Dennis Vetter, Jesmin Jahan Tithi, Magnus Westerlund, Roberto V. Zicari, and Gemma Roig.
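To give a flavour of the technique in the title, here is a minimal sketch of using sentence embeddings and cosine similarity to surface expert statements that point at the same issue. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 model, and the similarity threshold is an arbitrary choice; the paper's actual pipeline may differ.

```python
# Illustrative sketch only: embed experts' written concerns and use cosine
# similarity to flag pairs that likely express the same issue (emerging
# consensus). Model choice and threshold are assumptions, not the paper's.
from sentence_transformers import SentenceTransformer

statements = [
    "The training data may under-represent rural patients.",
    "Rural populations are likely missing from the dataset.",
    "The model's energy use during inference is not reported.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(statements, normalize_embeddings=True)  # unit-length vectors

similarity = emb @ emb.T  # cosine similarity, since vectors are normalised
for i in range(len(statements)):
    for j in range(i + 1, len(statements)):
        if similarity[i, j] > 0.6:  # assumed threshold
            print(f"Possible consensus: {statements[i]!r} ~ {statements[j]!r}")
```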
9 June 2022: Workshop on trustworthy AI and Z-Inspection®
The Trustworthy AI Laboratory conducted a workshop on trustworthy AI and Z-Inspection® at the Finnish Tax Administration, Helsinki.
7 June 2022: Teaching certificate received
Dr. Magnus Westerlund, director of the Lab, received a teaching certificate for the Z-Inspection® process. Z-Inspection® is a process for assessing Trustworthy AI.
3 June 2022: Member accepted in doctoral programme in human rights
Lab member MSc. Elina Sagne-Ollikainen was accepted into the human rights doctoral programme at Åbo Akademi University.
16 May 2022: Kick-off held
The kick-off of the pilot ‘Assessment for Trustworthy AI’ was held with the Province of Fryslân, the Z-Inspection® initiative, and the UBR Rijks ICT Gilde, which jointly investigate the reliability of AI applications and their responsible use. Lab personnel, including Magnus Westerlund, participate as technical experts in the assessment. More information on this pilot is available on the Z-Inspection® homepage.