Security of decentralized deep learning -- integrity and confidentiality of embedded deep neural network models

As part of our research on the security of artificial intelligence, we propose the following PhD subject: Security of decentralized deep learning: integrity and confidentiality of embedded deep neural network models.

Context

  • Large-scale deployment of AI
  • New decentralized training processes
  • Model security: a complex attack surface
  • Very active scientific community: Adversarial Machine Learning / Privacy-Preserving Machine Learning (main actors: Google, Meta, MIT, Stanford, Toronto, Tübingen, ETH, …)

Objective

Definition of the different threat models for embedded and decentralized training (focus on Federated Learning)

  • Characterization of attacks targeting the integrity, availability, and confidentiality of models and data
  • Improvement of existing protection schemes and proposal of new ones
  • Proposal of evaluation methods for model robustness
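To make the Federated Learning setting concrete, the following is a minimal sketch of one Federated Averaging (FedAvg) round: each client trains locally and sends only parameter updates, which the server aggregates weighted by local dataset size. All names and values here are illustrative, not part of the PhD subject; the attack surface mentioned above arises precisely because the server must trust these client-supplied updates.

```python
# Minimal sketch of one Federated Averaging (FedAvg) round.
# Illustrative only: parameters are plain Python lists, and the
# "clients" are just hard-coded vectors standing in for local training.

def fedavg_round(client_updates, client_sizes):
    """Aggregate client parameter vectors, weighted by local dataset size.

    client_updates: list of parameter vectors (one per client)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_updates[0])
    aggregated = [0.0] * dim
    for update, size in zip(client_updates, client_sizes):
        weight = size / total  # larger local datasets weigh more
        for i, value in enumerate(update):
            aggregated[i] += weight * value
    return aggregated

# Three clients report locally trained parameters; the server averages them.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_model = fedavg_round(updates, sizes)
print(global_model)  # [3.5, 4.5]
```

Because raw data never leaves the clients, confidentiality is improved by design, but a single malicious client can still bias the aggregated model through poisoned updates, which is one of the integrity threats the thesis would characterize.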

Contact: pierre-alain.moellic@cea.fr