Project on trustworthy and efficient AI for cloud-edge computing
Published: 06.06.2024
Humanity and trust are the prisms through which we see ourselves, others, and the world. They are the foundation upon which our civil society and democracy itself are built. Simply put, trust is, and has always been, the bedrock of our reality. In recent years, the rapid development of AI has challenged this very reality, because with immense possibilities and advantages come equally immense challenges and risks.
To maximise the benefits and minimise the risks of AI, the European Commission established a High-Level Expert Group on Artificial Intelligence (AI HLEG) in 2019 to chart the course of Trustworthy AI, and it is channelling substantial funds into the research and development of trustworthy AI.
At Arcada, we consider it our responsibility to be proponents of trust between technology, science, and our fellow human beings. We are pleased to announce our participation in the MANOLO project, funded by the European Union under the Horizon Europe Research and Innovation programme. The project began in January 2024 and will continue until the end of 2026. The consortium includes 18 partners from academia, research institutes, and industry, with a budget of 8.6 million euros. MANOLO aims to contribute to the future of Trustworthy and Efficient AI.
The aim of the project
The ultimate aim of the MANOLO project is to deliver a complete stack of trustworthy algorithms and tools that help AI systems achieve greater efficiency and seamless optimisation of the operations, resources, and data required to train, deploy, and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments.
The project will push the state of the art in the development of a collection of complementary algorithms for training, understanding, compressing, and optimising machine learning models by advancing research in model compression, meta-learning (few-shot learning), domain adaptation, frugal neural network search and growth, and neuromorphic models. Novel dynamic algorithms will also be designed for the data- and energy-efficient, policy-compliant allocation of AI tasks to assets and resources in the cloud-edge continuum, allowing for trustworthy deployment.
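To give a flavour of what model compression involves in practice, the sketch below shows one of the simplest techniques in this family: magnitude pruning, where the smallest weights in a layer are zeroed out so that a lighter, sparser model remains. This is a minimal illustration in plain NumPy under assumed values; the layer shape and the 70 % sparsity level are arbitrary, and the snippet does not represent the algorithms being developed in MANOLO.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    (a fraction between 0 and 1) of the weights become zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# Hypothetical dense layer; the shape and values are illustrative only.
rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 128))

pruned = magnitude_prune(layer, sparsity=0.7)
kept = np.count_nonzero(pruned) / pruned.size
print(f"non-zero weights kept: {kept:.0%}")  # roughly 30 % of the original
```

In a real pipeline, pruning would typically be followed by fine-tuning and combined with complementary techniques such as quantisation or distillation; this trade-off between model size, energy use, and accuracy is exactly what lighter models in the cloud-edge continuum are about.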
To support these activities, the project will develop a data management framework for distributed tracking of assets and their provenance (data, models, algorithms), together with a benchmark system to monitor, evaluate, and compare new AI algorithms and model deployments. Trustworthiness evaluation mechanisms for the explainability, robustness, and security of models will be embedded at the core of this framework, while the Z-Inspection process for Trustworthy AI self-assessment will help partners conform to the new AI Act regulation. MANOLO will be deployed as a toolset and tested in lab environments via use cases covering different distributed AI paradigms and the cloud-edge continuum setting. The deliverables will further be validated in verticals such as health, manufacturing, and telecommunications, aligned with market opportunities identified by ADRA, and on a granular set of embedded devices covering robotics, smartphones, and IoT, as well as neuromorphic chips.
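As a rough illustration of what such a benchmark system does, the following sketch compares candidate model deployments along a few of the dimensions mentioned above: accuracy, model size, and latency. The deployment names, measurements, and the composite scoring rule are hypothetical assumptions made for the example, not MANOLO's actual benchmark framework.

```python
from dataclasses import dataclass

@dataclass
class DeploymentReport:
    name: str          # e.g. "baseline-cloud", "pruned-edge" (hypothetical labels)
    accuracy: float    # task accuracy in [0, 1]
    size_mb: float     # model footprint in megabytes
    latency_ms: float  # average inference latency in milliseconds

def score(report: DeploymentReport) -> float:
    """Toy composite score: reward accuracy, penalise size and latency.
    The weights are illustrative assumptions, not project-defined values."""
    return report.accuracy - 0.001 * report.size_mb - 0.002 * report.latency_ms

# Hypothetical measurements for two candidate deployments.
candidates = [
    DeploymentReport("baseline-cloud", accuracy=0.91, size_mb=420.0, latency_ms=180.0),
    DeploymentReport("pruned-edge",    accuracy=0.89, size_mb=60.0,  latency_ms=35.0),
]

for candidate in sorted(candidates, key=score, reverse=True):
    print(f"{candidate.name}: score={score(candidate):.3f}")
```

A production benchmark system would of course track many more dimensions, including the explainability, robustness, and security properties mentioned above, and would record provenance so that results remain traceable to the data and models that produced them.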
MANOLO will integrate with ongoing EU-level projects developing the next operating system for the cloud-edge continuum, while promoting its sustainability via the AI-on-demand platform and EU portals.
Arcada's role
The Laboratory for Trustworthy AI at Arcada is a transdisciplinary and international research community dedicated to advancing the responsible use of artificial intelligence. Our mission is to train organisations and key stakeholders to critically assess AI applications, ensuring they align with ethical standards and societal values. Connecting academia and civil society, our lab brings together a diverse group of AI developers, students, end-users, researchers, and stakeholders. We work for a human-centric approach to AI and towards closing the gap between ethically sound AI development and technical and methodological practice. Through our work, we support organisations in mapping out socio-technical scenarios, enabling them to evaluate potential risks and implement AI technologies that benefit society as a whole.
Building on this expertise, Arcada's role in the MANOLO project is to ensure that trustworthiness is embedded throughout the entire pipeline, from the cloud to the end user. This will be achieved by using Z-Inspection as a co-design process for the project, from ideation to validation. Our mission is to ingrain transparency and reliability at every stage, so that it is humanity that shapes the future of AI, and not AI that shapes the future of humanity.
We are proud to embark on this mission with the MANOLO consortium.