
Privacy-preserving and Trusted Machine Learning:

Edge AI and Federated Learning for collectively training models

State of the art: Edge AI (or “edge intelligence”), the intersection of edge computing and AI, has attracted significant interest in recent years, leading to the creation of foundations such as tinyML. This interest is driven by several factors: advances in hardware, especially in mobile and IoT devices, that enable deep-learning applications to run on edge devices; advances in AI that allow large models to be distilled into smaller, parameter-efficient neural networks without significant loss of accuracy, broadening applicability in domains with limited computational resources; and, of course, efficiency, in the form of low latency and reduced bandwidth requirements. At the same time, the main motivation behind edge AI in several application domains is privacy preservation and security: collected data are stored where the actual analysis happens and do not leave the device, often an important factor enabling trustworthiness. A further advantage of edge AI is that it is a natural enabler of federated learning and swarm intelligence, whether through reinforcement learning or online/continuous learning.
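As an illustration of the distillation idea mentioned above (a minimal sketch, not TITAN code), a student model can be trained to match the teacher's temperature-softened output distribution; the function names and the temperature value are illustrative assumptions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between the teacher's softened outputs and the student's.

    The temperature exposes the teacher's relative probabilities for the
    non-target classes, which the smaller student learns to mimic.
    """
    p = softmax(teacher_logits, temperature)  # teacher target distribution
    q = softmax(student_logits, temperature)  # student prediction
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

In practice this KL term is combined with the usual cross-entropy on the true labels; the sketch shows only the teacher-matching component.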


Challenge: Edge AI presents important benefits and opportunities that TITAN aims to capitalise upon. At the same time, most of the tools TITAN will integrate into its ecosystem rely on traditional machine learning approaches and will require transformation to support Edge AI. In addition, TITAN addresses citizens, which makes privacy preservation and security major factors; TITAN also wants to exploit implicit and explicit user feedback to improve its solution. Federated (decentralised) learning, although challenging, provides an opportunity to train models collectively without the data ever leaving the edge device.
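The federated training loop can be sketched as follows (an illustrative federated-averaging scheme, not TITAN's implementation; the function names and the toy per-client gradients are assumptions): each client performs a few local SGD steps on its private data, and only the resulting weights, never the data, are sent to the server for aggregation.

```python
import numpy as np

def local_update(weights, gradient_fn, lr=0.1, steps=5):
    """One client's local training: a few SGD steps on its private data.

    Only the updated weights leave the device; the raw data never do.
    """
    w = weights.copy()
    for _ in range(steps):
        w -= lr * gradient_fn(w)
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

For example, two clients whose (hypothetical) local losses pull the weight toward 1.0 and 3.0 respectively would yield an aggregated weight near 2.0 after one round of equal-weight averaging.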


Going beyond: TITAN will explore several technologies for transforming a selected set of tools into edge AI tools. Parameter-efficient neural network architectures such as MobileNets and SqueezeNet, pruning and truncation, and distillation (training smaller networks using larger networks as “teachers”) are all viable approaches for model transformation, along with the facilities TensorFlow Lite provides for converting a TensorFlow model for on-device inference. Regarding federated learning, TITAN will study approaches that involve parameter servers and, to a lesser degree, asynchronous SGD.
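Of the compression techniques listed above, magnitude pruning is the simplest to sketch (an illustrative example, not TITAN code; the function name and sparsity level are assumptions): the smallest-magnitude weights are zeroed, and the network is typically fine-tuned afterwards to recover accuracy.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out roughly the `sparsity` fraction of smallest-magnitude weights.

    The surviving weights are unchanged; the zeroed ones can be stored
    sparsely, shrinking the model for edge deployment. Ties at the
    threshold may prune slightly more than the requested fraction.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

The same thresholding idea extends layer-wise to real networks; frameworks such as the TensorFlow Model Optimization Toolkit provide production versions of this, including pruning applied gradually during training.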






TITAN has received funding from the EU Horizon 2020 research and innovation programme under grant agreement No. 101070658, and from UK Research and Innovation under the UK government's Horizon funding guarantee, grant numbers 10040483 and 10055990.

This website represents the views of the TITAN project only.

© 2023 by TITAN
