
Privacy-preserving and Trusted Machine Learning:

Edge AI and Federated Learning for collectively training models

State of the art: Edge AI (or “edge intelligence”), the intersection of edge computing and AI, has attracted significant interest in recent years, leading to the creation of foundations such as tinyML. This interest is driven by several factors: advances in hardware (especially mobile and IoT devices) that enable deep-learning applications to run on edge devices; advances in AI that allow large models to be distilled into smaller, parameter-efficient neural networks without significant loss of accuracy, widening applicability in domains with limited computational resources such as edge devices; and efficiency, in the form of low latency and low bandwidth requirements. At the same time, the main motivation behind edge AI in several application domains is privacy preservation and security, since collected data is stored where the analysis actually happens and never leaves the device; both are often important enablers of trustworthiness. A further advantage of edge AI is that it is a natural enabler for federated learning and swarm intelligence, whether through reinforcement learning or online/continuous learning.


Challenge: Edge AI presents important benefits and opportunities that TITAN aims to capitalise on. At the same time, most of the tools TITAN will integrate into its ecosystem rely on traditional machine learning approaches and will require transformation to support Edge AI. In addition, TITAN addresses citizens, which makes privacy preservation and security major requirements; TITAN also wants to exploit implicit and explicit user feedback to improve its solution. Federated (decentralised) learning, although challenging, provides an opportunity for collectively training models without the data ever leaving the edge device.
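To make the federated idea concrete, the following is a minimal sketch of one common scheme, Federated Averaging (FedAvg): each client takes a local gradient step on its own data, and a server aggregates the resulting weights, weighted by local dataset size, so raw data never leaves the client. The toy linear model, synthetic data, and hyperparameters are illustrative assumptions, not TITAN code.

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of linear least-squares on a client's local data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w_global, clients, lr=0.1):
    """One FedAvg round: average locally updated weights, weighted by data size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_sgd_step(w_global.copy(), X, y, lr) for X, y in clients]
    shares = sizes / sizes.sum()
    return sum(s * w for s, w in zip(shares, local_ws))

# Three simulated clients holding private shards of the same linear task.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
# w converges toward w_true without any client sharing its (X, y) data.
```

Only model weights cross the network in this scheme; in practice they would additionally be protected with techniques such as secure aggregation or differential privacy.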


Going beyond: TITAN will explore several technologies for transforming a selected set of tools into edge AI tools. Learning parameter-efficient neural networks such as MobileNets and SqueezeNet, pruning and truncation, and distillation (training smaller networks using larger networks as “teachers”) are all viable approaches for model transformation, along with the facilities provided by TensorFlow Lite for converting a TensorFlow model for on-device inference. Regarding federated learning, TITAN will study approaches that involve parameter servers and, to a lesser degree, asynchronous SGD.
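The teacher/student distillation mentioned above is typically implemented by training the small student on the teacher's temperature-softened output distribution blended with the true labels. The sketch below shows that loss in plain NumPy; the logits, temperature, and mixing weight are illustrative assumptions, not values from any TITAN model.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, y_true, T=4.0, alpha=0.5):
    """Blend of soft-target cross-entropy (scaled by T^2, as is conventional)
    and ordinary hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student_soft = softmax(student_logits, T)
    soft_ce = -np.sum(p_teacher * np.log(p_student_soft), axis=-1)
    p_student = softmax(student_logits)
    hard_ce = -np.log(p_student[np.arange(len(y_true)), y_true])
    return np.mean(alpha * (T ** 2) * soft_ce + (1 - alpha) * hard_ce)

teacher = np.array([[8.0, 2.0, 1.0]])  # confident large-model logits
student = np.array([[5.0, 2.5, 1.5]])  # smaller student's logits
loss = distillation_loss(student, teacher, y_true=np.array([0]))
```

Minimising this loss during student training pushes the compact network to reproduce the teacher's full output distribution, not just its top label; the resulting model can then be converted for on-device inference, e.g. with the TensorFlow Lite converter mentioned above.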



Disclaimer

TITAN has received funding from the EU's Horizon 2020 research and innovation programme under grant agreement No 101070658, and from UK Research and Innovation under the UK government's Horizon funding guarantee, grant numbers 10040483 and 10055990.

 

This website represents only the views of the TITAN project.

© 2023 by TITAN
