Secure Aggregator

Summary: A secure aggregator ensures that local model updates in federated learning are aggregated privately, using secure multi-party computation techniques.

Type of pattern: Product pattern

Type of objective: Trustworthiness

Target users: Data scientists

Impacted stakeholders: RAI governors, AI users, AI consumers

Lifecycle stages: Design

Relevant AI ethics principles: Privacy protection and security

Context: Federated learning is a type of collaborative learning that trains models locally on client devices and aggregates the results to create a global model. While the raw data never leaves the client devices, sensitive information can still be inferred from the local model parameters that are shared for aggregation.
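To ground the context, the plaintext aggregation step that a secure aggregator replaces can be sketched as a weighted average of local parameters (the FedAvg rule). This is a minimal illustration, not any particular framework's API; the function name and toy values are hypothetical.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Plaintext federated averaging: weight each client's
    parameter vector by its share of the total training data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_updates, client_sizes))

# Three clients, each holding a 2-parameter local model (toy values).
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 10]

global_model = fedavg(updates, sizes)  # equal weights -> elementwise mean
```

In this plaintext form the server sees every client's update directly, which is exactly the leakage the secure aggregator pattern addresses.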

Problem: How can we ensure data privacy when aggregating local model updates in federated learning?

Solution: Secure multi-party computation can be applied in federated learning to protect data privacy during model exchange and aggregation. With multi-party computation, local model updates from individual client devices can be aggregated without revealing the individual model parameters. Using encryption, participating clients exchange their encrypted model updates and learn only the final aggregation result, which is computed over the secret-shared model parameter data.
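The core idea of the solution can be sketched with pairwise additive masking, the mechanism underlying protocols such as Google's SecAgg: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum while every individual submission looks random. This is a simplified sketch under ideal conditions (no dropouts, pre-shared masks), not a faithful SecAgg implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_update(i, updates, pair_masks):
    """Client i's submission: its update plus mask m_ij for each
    partner j>i, minus m_ji for each partner j<i."""
    x = updates[i].copy()
    n = len(updates)
    for j in range(n):
        if j == i:
            continue
        a, b = min(i, j), max(i, j)
        m = pair_masks[(a, b)]
        x += m if i == a else -m
    return x

n_clients, dim = 3, 2
updates = [rng.normal(size=dim) for _ in range(n_clients)]
# One shared random mask per client pair (assumed pre-agreed, e.g. via key exchange).
pair_masks = {(a, b): rng.normal(size=dim)
              for a in range(n_clients) for b in range(a + 1, n_clients)}

# The server only ever sees masked vectors; the pairwise masks cancel in the sum.
aggregate = sum(masked_update(i, updates, pair_masks) for i in range(n_clients))
true_sum = sum(updates)
```

The aggregate equals the sum of the plaintext updates, yet no single masked submission reveals its client's parameters.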

Benefits:
  • Data privacy and security: Secure multi-party computation enables secure sharing of model updates during exchange and aggregation. Model parameters are protected by encryption, which prevents adversarial parties from extracting them and leaking the underlying data.
  • Decentralized control: Federated learning allows for the training of models on distributed devices, which enables decentralized control of data and models.
  • Compliance with regulations: Secure multi-party computation can help organizations comply with RAI regulations that require the protection of personal data.
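The "secret-shared model parameter data" that these benefits rest on can be illustrated with additive secret sharing, one of the standard multi-party computation building blocks: a parameter vector is split into random shares that sum back to the original, so any strict subset of shares reveals nothing. A minimal sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(1)

def share(secret, n_parties):
    """Split a vector into n additive shares: n-1 random vectors,
    plus a final share chosen so that all shares sum to the secret."""
    shares = [rng.normal(size=secret.shape) for _ in range(n_parties - 1)]
    shares.append(secret - sum(shares))
    return shares

w = np.array([0.5, -1.2, 3.3])   # a hypothetical model-parameter vector
shares = share(w, 3)              # each party holds one random-looking share
reconstructed = sum(shares)       # only the full set recovers the secret
```

Aggregation can then be performed share-wise, so no single party ever holds a complete set of another client's parameters.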

Drawbacks:
  • Aggregation inefficiency: The aggregation process becomes less efficient because additional encryption steps are required in every round for each participating client device.
  • Lack of scalability: Multi-party computation can incur significant computation and communication costs when applied in a large-scale federated learning system.

Related Patterns:

  • Federated learner: Secure multi-party computation is a privacy-preserving computation technique that can be applied to the aggregation step of federated learning.
  • Trainer over encrypted data: The secure multi-party computation techniques used in secure aggregation extend homomorphic encryption techniques to multi-party model training.

Known Uses:

  • Google’s SecAgg is a secure aggregation protocol that uses multi-party computation to carry out the summations of model parameter updates received from client devices in an encrypted manner.
  • OpenMined’s PyGrid is a peer-to-peer platform that uses multi-party computation to protect the privacy of data and model parameters in federated learning.
  • IBM Federated Learning supports multi-party computation to enable the private aggregation of model updates.