Drive innovation in Artificial Intelligence applications
- December 2024
Health & Life science
Artificial Intelligence (AI) is fast becoming a key driver of economic development and will play a major role in shaping global competitiveness and productivity in the coming years.
Hand in hand with the ever-expanding range of AI applications, such as intelligent personal assistants, video/audio surveillance systems, smart-city applications, self-driving vehicles and Industry 4.0, comes a growing need to optimise the computing resources used in data collection, processing and online analytics, while safeguarding data privacy and increasing data security.
AI-SPRINT “Artificial Intelligence in Secure Privacy-preserving computing continuum” is poised to create a novel framework to develop and operate AI applications and their associated data across computing continuum environments. Key outputs include new tools for developing AI applications, executing them securely and deploying them easily, as well as tools for runtime management and optimisation. These tools will help to overcome the current technological difficulties in exploiting resources in the edge-to-cloud continuum, in terms of flexibility, scalability of analytics, interoperability, energy efficiency, security and privacy.
AI-SPRINT will thus meet the digitalisation needs of businesses and the public sector, proving its competitive edge and replicability in three real-world use cases: Farming 4.0, maintenance and inspection, and personalised healthcare, in line with the EU’s priority areas for AI investment.
But it does not stop here. The initiative will be backed by the AI-SPRINT Alliance, which is being set up to engage IT operators, small software houses and cloud providers, and will support the AI and edge-computing ecosystem. Alliance members will reap tangible benefits from AI-SPRINT by joining its planned Acceleration Club (A3C). They will be able to use the AI-SPRINT design tools and runtime environment to provide advanced design solutions hosted on their platforms, deploy applications, and manage resources from their internal cloud-to-edge servers and AI-enabled sensors or end-user devices. This will enable them to optimise the seamless execution of applications across the computing continuum and minimise their energy consumption.
The role of the Foundation
Fondazione Politecnico di Milano is working with Politecnico di Milano to coordinate the initiative and manage its communication activities.
The focus of AI-SPRINT is to create a novel framework to develop and operate AI applications that make efficient use of computing resources in the edge-to-cloud continuum.
AI-SPRINT will offer novel tools for AI development, a secure execution environment, easy deployment and optimised runtime management. Specifically, AI-SPRINT will be able to trade off application performance (in terms of end-to-end latency and throughput), energy efficiency and AI model accuracy, while providing assurances of security and privacy. The AI-SPRINT framework will support data protection for AI applications, architecture improvement, agile delivery, runtime optimisation and continuous adaptation.
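As a rough illustration of this kind of trade-off, the sketch below picks the most accurate deployment configuration that still meets given latency and energy budgets. All names and figures are hypothetical assumptions for illustration, not AI-SPRINT APIs or results.

```python
# Hypothetical sketch: trading off latency, energy and model accuracy when
# choosing where and how to deploy an AI model. Figures are illustrative only.
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    latency_ms: float   # end-to-end latency per inference
    energy_mj: float    # energy per inference
    accuracy: float     # model accuracy on a validation set

def best_config(configs, max_latency_ms, max_energy_mj):
    """Return the most accurate configuration meeting both constraints, or None."""
    feasible = [c for c in configs
                if c.latency_ms <= max_latency_ms and c.energy_mj <= max_energy_mj]
    return max(feasible, key=lambda c: c.accuracy) if feasible else None

configs = [
    Config("edge-quantised", latency_ms=12, energy_mj=5,  accuracy=0.91),
    Config("edge-full",      latency_ms=35, energy_mj=18, accuracy=0.95),
    Config("cloud-offload",  latency_ms=80, energy_mj=2,  accuracy=0.97),
]

print(best_config(configs, max_latency_ms=50, max_energy_mj=20).name)  # → edge-full
```

Tightening the latency budget below 12 ms would leave no feasible configuration, which is exactly the situation where a runtime manager must adapt the deployment.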
Building on popular or de facto standards and trusted execution environments (TEEs), AI-SPRINT will enable the following technological innovations, with compelling market differentiators:
- Create novel tools for developing AI applications (including machine learning and deep learning applications, and large-scale analytics) that exploit resources across the computing continuum.
- Provide advanced strategies to design and optimally partition AI models, considering model accuracy, application performance, security and privacy constraints.
- Deliver solutions to support the agile delivery, secure automatic deployment and secure execution of AI applications and models across the cloud-edge continuum, protecting the privacy of users’ data.
- Implement a runtime environment that will:
- monitor the execution of applications;
- coordinate the deployment of applications and the dynamic allocation of their components on edge and cloud resources, including through the Function-as-a-Service (FaaS) model;
- provide resilient execution of applications;
- implement advanced management strategies to optimise and adapt the computing continuum, coping with load variations in sensor data streams or with component failures.
- Support continuous model training and architecture extension by adding fresh field data to AI applications and exploiting new AI-enabled edge sensors.
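To make the model-partitioning idea above concrete, here is a minimal, hypothetical sketch of choosing an edge/cloud split point for a layered model by minimising estimated end-to-end latency. The cost model, function names and all figures are illustrative assumptions, not AI-SPRINT components.

```python
# Hypothetical sketch: partitioning a layered AI model between edge and cloud.
# Each layer is (compute_cost, output_size); all figures are illustrative.

def split_latency(layers, split, edge_speed, cloud_speed, bandwidth, input_size):
    """Estimated latency when layers[:split] run on the edge, the rest in the cloud.

    The tensor crossing the split (or the raw input, if everything runs in the
    cloud) must be transferred over the network at the given bandwidth.
    """
    edge = sum(cost for cost, _ in layers[:split]) / edge_speed
    cloud = sum(cost for cost, _ in layers[split:]) / cloud_speed
    transferred = layers[split - 1][1] if split > 0 else input_size
    return edge + transferred / bandwidth + cloud

def best_split(layers, edge_speed, cloud_speed, bandwidth, input_size):
    """Split index minimising latency (0 = all in the cloud)."""
    return min(range(len(layers) + 1),
               key=lambda s: split_latency(layers, s, edge_speed,
                                           cloud_speed, bandwidth, input_size))

layers = [(10, 8), (20, 2), (30, 6)]  # (compute cost, output size) per layer
print(best_split(layers, edge_speed=5, cloud_speed=10,
                 bandwidth=1, input_size=16))  # → 2: first two layers on the edge
```

In this toy setting the second layer has a small output, so cutting there minimises network transfer: a simple instance of the accuracy/performance-aware partitioning the list above describes, before security and privacy constraints are layered on top.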