The Private AI Collaborative Research Institute

The Private AI Collaborative Research Institute was originally established by Intel’s University Research & Collaboration Office, which then invited Avast, a global leader in digital security and privacy products, and VMware, a technology company and provider of software solutions for cloud computing and virtualization of data center infrastructure, to collaborate on the institute.

Need for Decentralized Analytics at the Edge

Industry is trending toward intelligent edge systems. Algorithms such as neural networks and distributed ledgers are gaining traction at the edge, on the device level, without reliance on cloud infrastructure. To be effective, these approaches require huge amounts of data that are often sensed at the edge, in applications such as vehicle routing, industrial monitoring, security threat monitoring, and search term prediction.

However, training of AI models today is centralized, with large amounts of data pooled in the data center of a trusted provider. To perform classification, the resulting model is then distributed to the edge. In many cases, this centralized approach limits performance. For example, health data is siloed and cannot be used for centralized training due to privacy and regulatory constraints. Autonomous cars generate terabytes of traffic data, but bandwidth constraints prevent centralized training. Personal computers and phones in billions of homes generate vast amounts of data daily, which cannot be uploaded due to privacy concerns.

Research at the Private AI Collaborative Research Institute will address secure, trusted, and decentralized analytics and compute at the edge. By decentralizing AI, the institute plans to liberate data from silos, protect privacy and security, and maintain efficiency.

Our Vision Paper

The goal of the Private AI Collaborative Research Institute is to push the state of the art in decentralized and privacy-preserving machine learning.
Read more in our vision paper.

Research Teams

Carnegie Mellon University
Unified Framework for the Competing Constraints in Federated Learning

CMU will focus on federated learning systems with competing constraints, such as accuracy, fairness, robustness, and privacy. CMU will investigate statistical heterogeneity as a root cause for tension between these constraints. In particular, CMU will tackle heterogeneity via a unified framework for robust, privacy-preserving multi-task learning — unlocking a new generation of FL systems that can holistically address the constraints of realistic federated networks.

Principal Investigator: Virginia Smith
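
As a rough illustration of the federated setting CMU studies, the sketch below runs plain federated averaging over a few statistically heterogeneous clients. It is a generic example under toy assumptions (a linear model, synthetic per-client data shifts, arbitrary hyperparameters), not the multi-task framework described above.

```python
# Minimal federated-averaging sketch over statistically heterogeneous clients.
# Generic illustration only: model, data, and hyperparameters are toy placeholders.
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """A few steps of least-squares gradient descent on one client's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg_round(global_w, clients):
    """One round: every client trains locally, the server averages by data size."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Statistical heterogeneity: clients differ in data size and feature distribution.
clients = []
for n in (20, 50, 200):
    shift = rng.normal(size=3)                  # client-specific distribution shift
    X = rng.normal(size=(n, 3)) + shift
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = fed_avg_round(w, clients)
print("global model after 20 rounds:", w)       # approaches true_w
```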

National University of Singapore
Robust and Privacy-Preserving Knowledge Transfer for Heterogeneous Decentralized Learning

Exchanging model parameters in decentralized learning limits the scalability of such algorithms, prevents the support of heterogeneous networks, and introduces many robustness and privacy issues. NUS will design knowledge transfer algorithms that are robust, privacy-preserving, and support heterogeneous networks. NUS will also focus on designing a theoretical framework and algorithms for certifiably robust and differentially private knowledge transfer in decentralized learning. The team will evaluate and test the scalability and efficiency of these algorithms when implemented with secure multi-party computation.

Principal Investigator: Reza Shokri
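
One well-known way to transfer knowledge without exchanging model parameters is to have local models label shared public data and to aggregate their votes with noise, in the spirit of PATE. The sketch below illustrates only that generic idea, not the NUS algorithms; the "teachers" are stand-in functions rather than trained models.

```python
# Knowledge transfer without parameter exchange: local teachers vote on public
# data, and the vote counts are perturbed with Laplace noise (PATE-style).
# Generic illustration; teachers here are arbitrary stand-in functions.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 3

def noisy_label(votes, epsilon=1.0):
    """Aggregate teacher votes with Laplace noise on the per-class counts."""
    counts = np.bincount(votes, minlength=NUM_CLASSES).astype(float)
    counts += rng.laplace(scale=1.0 / epsilon, size=NUM_CLASSES)
    return int(np.argmax(counts))

# Stand-in "teachers": each device's local model is just a labeling function here.
teachers = [lambda x, b=b: int((x.sum() + b) % NUM_CLASSES) for b in range(10)]

public_data = rng.integers(0, 5, size=(20, 4))     # unlabeled public samples
student_labels = [
    noisy_label(np.array([t(x) for t in teachers]), epsilon=2.0)
    for x in public_data
]
# A student model would now be trained on (public_data, student_labels).
print(student_labels)
```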

Technical University of Darmstadt and University of Wuerzburg
Decentralized Trustworthy Federated Learning (TRUFFLE)

The Systems Security Lab at TU Darmstadt and the Secure Software Systems group at the University of Wuerzburg will design a framework for FL that provides comprehensive security and privacy. The design will be resilient against crucial threats, such as data and model poisoning. It will also incorporate privacy-enhancing technologies based on decentralized aggregators and advanced crypto-based primitives to address the privacy requirements of FL. Moreover, TRUFFLE will consider the integration of hardware-assisted security and trusted execution environments of varying capabilities.

Principal Investigators: Alexandra Dmitrienko and Ahmad-Reza Sadeghi
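
To make the poisoning threat concrete, the sketch below applies a coordinate-wise trimmed mean, a generic robust-aggregation defense from the literature, to a set of client updates that includes one malicious contribution. It illustrates the problem TRUFFLE targets, not the project's actual design.

```python
# Poisoning-resilient aggregation via a coordinate-wise trimmed mean.
# Generic defense from the literature, shown only to illustrate the threat model.
import numpy as np

def trimmed_mean(updates, trim=1):
    """Drop the `trim` largest and smallest values per coordinate, then average."""
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[trim:len(updates) - trim].mean(axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(8)]
poisoned = [np.full(4, 100.0)]                 # one client submits a malicious update
updates = honest + poisoned

print("plain mean   :", np.mean(updates, axis=0))       # dragged toward the attacker
print("trimmed mean :", trimmed_mean(updates, trim=1))  # stays near the honest value
```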

Technical University of Darmstadt
Engineering Private AI Systems (EPAI)

For EPAI, the ENCRYPTO group at TU Darmstadt will develop basic technologies to build private AI systems, investigate their orchestration strategies to optimize efficiency and costs on a given network and compute infrastructure, and systematically validate them to allow automatic selection of the most efficient solution for a specific usage scenario. As underlying technologies, the ENCRYPTO group will combine different building blocks from cryptography and hardware, including secure multi-party computation, hardware acceleration, and trusted execution environments.

Principal Investigator: Thomas Schneider
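
As a toy example of one MPC building block mentioned above, the sketch below uses additive secret sharing over a prime field so that three parties reveal only the sum of their inputs. The inputs and modulus are arbitrary choices for this example; the orchestration and hardware aspects of EPAI are not represented.

```python
# Additive secret sharing over a prime field: parties can compute a sum
# without any single party seeing another's input. Toy illustration only;
# the party inputs and the modulus are arbitrary example values.
import secrets

PRIME = 2**61 - 1  # field modulus (assumption for this example)

def share(value, num_parties):
    """Split `value` into additive shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

inputs = [42, 17, 99]                          # each party's private input
all_shares = [share(v, num_parties=3) for v in inputs]

# Party j locally adds the j-th share of every input ...
sum_shares = [sum(s[j] for s in all_shares) % PRIME for j in range(3)]
# ... and only the combined result reveals the total.
print(reconstruct(sum_shares))                 # 158
```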

Université Catholique de Louvain
Federated Private Learning on Heterogeneous Devices

Université Catholique de Louvain (UCLouvain) is focused on the detection, classification, and analysis of malware. With malware, one must gather and analyze diverse and incomplete information to build a unified understanding of a sample program. Many heterogeneous devices can be exposed to a new malware sample and data from these devices can be combined to learn a highly accurate model of the sample. UCLouvain will focus on how to learn from diverse information from heterogeneous devices while ensuring privacy.

Principal Investigators: Axel Legay and Thomas Given-Wilson

University of California, San Diego
Private Decentralized Analytics on the Edge (PriDEdge)

The training and data management in FL, especially when executed through secure protocols, entail a large amount of computation that makes practical deployment a challenge. To ease the burden of these computations, the University of California, San Diego (UCSD) team focuses on evaluating existing cryptographic primitives and devising new hardware-based primitives that complement the existing resources on Intel processors. The new primitives include accelerators for homomorphic encryption, Yao’s garbled circuits, and Shamir’s secret sharing. Placing several cryptographic primitives on the same chip will ensure optimal usage by enabling resource sharing among them. Furthermore, the UCSD team plans to design efficient systems through the co-optimization of FL algorithms, defense mechanisms, cryptographic primitives, and hardware primitives.

Principal Investigator: Farinaz Koushanfar
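
For readers unfamiliar with one of the primitives named above, the sketch below shows Shamir's secret sharing in pure Python: a secret is split into shares via a random polynomial and reconstructed by Lagrange interpolation over a prime field. It only illustrates the underlying arithmetic and says nothing about the hardware accelerators the UCSD team plans to build.

```python
# Shamir's secret sharing: any t of n shares reconstruct the secret.
# Pure-Python illustration of the math only; the prime is an example choice.
import random

PRIME = 2**31 - 1  # prime field modulus chosen for the example

def split(secret, n, t):
    """Create n shares such that any t of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=12345, n=5, t=3)
print(reconstruct(shares[:3]))  # any 3 shares recover 12345
```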

University of Southern California
Secure and Privacy-Preserving Machine Learning: Foundations and Scalable System Design (PPML)

With PPML, USC will address critical requirements of decentralization, security, and scalability in distributed machine learning (ML). USC will also expand the PPML framework by leveraging trusted execution environments to enhance the security and improve the performance of the approach. USC will demonstrate how multiple data-owners can jointly train a machine learning model while keeping individual datasets private and secure.

Principal Investigators: Salman Avestimehr and Murali Annavaram

University of Toronto
Cryptography in Privacy-Preserving Machine Learning

The predictions of ML systems often reveal private information contained in training data, necessitating learning algorithms that provide confidentiality and privacy guarantees. U of T will focus on the collaborative training of ML models across a small number of participants with sensitive datasets and will construct a protocol for collaborative ML that provides both confidentiality and privacy. U of T will rely on cryptography so that participants can query others without revealing the queried input. In conjunction, differential privacy will prevent the querying participant from learning about other participants’ data.

Principal Investigator: Nicolas Papernot
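
The differential-privacy half of such a protocol can be illustrated with the standard Laplace mechanism: a participant answers a counting query with calibrated noise, so aggregate trends are visible while individual records are protected. The sketch below shows only that generic mechanism on a hypothetical dataset; the cryptographic private-query component is not shown.

```python
# Laplace mechanism for a counting query (sensitivity 1): the querier learns
# a noisy aggregate, not individual records. Generic illustration with a
# hypothetical dataset; not the U of T protocol itself.
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon=0.5):
    """Counting query answered under epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Hypothetical sensitive dataset held by one participant.
ages = rng.integers(18, 90, size=1000)
print(private_count(ages, lambda a: a >= 65, epsilon=0.5))
```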

University of Waterloo
Confidence in Distributed AI Systems

University of Waterloo will focus on protecting the confidentiality of two types of sensitive data involved in machine learning: model parameters and training data. Real-world deployment of ML-based systems requires convincing confidentiality protection. UW will devise leakage-resistant aggregation mechanisms and effective model watermarking techniques for federated learning systems. The team will also explore design options for side-channel-resistant accelerator architectures for deep learning.

Principal Investigators: N. Asokan and Florian Kerschbaum
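
A common way to protect individual updates during aggregation is pairwise masking, where each pair of clients adds and subtracts a shared random mask that cancels in the server's sum. The sketch below mirrors that classic secure-aggregation idea in spirit only; it is not UW's mechanism, and the pairwise seeds are assumed to have been exchanged beforehand.

```python
# Leakage-resistant aggregation via pairwise masking: individual updates look
# random to the server, but the masks cancel in the sum. Spirit-of-the-idea
# sketch only; seed exchange and dropouts are not handled.
import numpy as np

DIM, NUM_CLIENTS = 4, 3

def masked_update(client_id, update, seeds):
    """Add +mask toward higher-indexed peers and -mask toward lower-indexed peers."""
    masked = update.copy()
    for peer in range(NUM_CLIENTS):
        if peer == client_id:
            continue
        pair = tuple(sorted((client_id, peer)))
        mask = np.random.default_rng(seeds[pair]).normal(size=DIM)
        masked += mask if client_id < peer else -mask
    return masked

rng = np.random.default_rng(42)
updates = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]
seeds = {(0, 1): 1000, (0, 2): 1001, (1, 2): 1002}   # pre-exchanged pairwise seeds

server_sum = sum(masked_update(i, u, seeds) for i, u in enumerate(updates))
print(np.allclose(server_sum, sum(updates)))         # True: masks cancel in the sum
```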