Federated Learner

Summary: The federated learner preserves data privacy by training models locally on client devices and aggregating the local model updates into a global model on a central server.

Type of pattern: Product pattern

Type of objective: Trustworthiness

Target users: Architects, data scientists

Impacted stakeholders: Developers, operators

Relevant AI ethics principles: Privacy protection and security, reliability and safety

Mapping to AI regulations/standards: EU AI Act

Context: Although widely deployed mobile and IoT devices generate massive amounts of data, a lack of training data remains a challenge because growing concerns about data privacy prevent that data from being collected centrally.

Problem: How can we train AI models without moving the data to a central place to protect data privacy?

Solution: As shown in Figure 1, federated learning is a technique that trains an AI model across multiple edge devices or servers holding local data samples. The federated learner preserves data privacy by training models locally on the client devices and formulating a global model on a central server based on the local model updates—for example, a visual perception model can be trained locally in each vehicle. Decentralized learning is a variant of federated learning that can use blockchain to remove the single point of failure and coordinate the learning process in a fully decentralized way.

Fig.1 Federated learner
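The local-train-then-aggregate loop described above can be sketched with federated averaging (FedAvg), the canonical aggregation rule. This is a minimal simulation, not from the source: the linear model, learning rate, and client datasets are illustrative assumptions.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: a few gradient-descent steps
    on a linear regression model (squared loss). The raw data
    (X, y) never leaves this function, i.e., the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: average client model updates, weighted by
    each client's dataset size (only weights are exchanged)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Simulate three clients with heterogeneous dataset sizes.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):   # communication rounds
    w = fed_avg(w, clients)
print(np.round(w, 2))  # converges toward true_w
```

Note that only model weights cross the network; the per-client `(X, y)` arrays stay inside `local_train`, which is the privacy boundary the pattern establishes.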

Benefits:

  • Privacy: The federated learner preserves data privacy because the model is trained locally on devices without exchanging the data.
  • Increased reliability: A distributed system like federated learning could provide better reliability compared with a fully centralized system.

Drawbacks:

  • Sampling bias: Because of the decentralized nature of federated learning, the data distributions and dataset sizes across clients are heterogeneous, so local updates may not be representative of the overall population.
  • Performance penalty: The federated learner trains local models on multiple devices and then aggregates the local models into a global model. This process may take longer than centralized training.

Related patterns:

  • Co-versioning registry: A co-versioning registry can be applied to federated learning to co-version local and global models.
  • Incentive registry: An incentive registry can be applied to the federated learner to incentivize more devices to join the learning process.
  • Global-view auditor: In the decentralized setting of federated learning, a global-view auditor can be applied to provide visibility into the overall learning process.
  • Secure aggregator: Secure aggregation techniques, such as secure multi-party computation or homomorphic encryption, can be applied to protect the aggregation process in federated learning.
  • Random noise data generator: Differential privacy can be applied to add random noise in the local training and aggregation process in federated learning.
  • Multi-level co-versioning: Federated learning requires co-versioning of local and global models.
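The secure aggregator idea above can be illustrated with pairwise additive masking, a simplified form of the secure aggregation protocol: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum. This sketch is not from the source; real protocols derive the masks from key agreement and handle dropouts.

```python
import numpy as np

rng = np.random.default_rng(42)
updates = {c: rng.normal(size=3) for c in range(3)}  # each client's raw update

# Pairwise masks: client i adds, client j subtracts, so they
# cancel when the server sums the masked updates.
masked = {c: u.copy() for c, u in updates.items()}
for i in updates:
    for j in updates:
        if i < j:
            mask = rng.normal(size=3)  # stands in for a shared pairwise secret
            masked[i] += mask
            masked[j] -= mask

server_sum = sum(masked.values())  # server never sees an unmasked update
true_sum = sum(updates.values())
print(np.allclose(server_sum, true_sum))  # True: masks cancel exactly
```

The server learns only the aggregate, while each individual `masked[c]` looks like random noise.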
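The random noise data generator bullet can likewise be sketched with the clip-and-noise step of differentially private federated learning (a simplified Gaussian mechanism; the clipping bound and noise scale below are illustrative assumptions, not values from the source).

```python
import numpy as np

rng = np.random.default_rng(7)

def privatize(update, clip=1.0, sigma=0.5):
    """Clip a client's update to bound its L2 norm, then add
    Gaussian noise before sending it to the server — a sketch
    of the Gaussian mechanism used in DP federated learning."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)
    return clipped + rng.normal(scale=sigma * clip, size=update.shape)

update = np.array([3.0, 4.0])  # raw local update, L2 norm 5
noisy = privatize(update)
print(noisy.shape)  # same shape as the update, but randomized
```

Clipping bounds any single client's influence on the aggregate, and the added noise masks individual contributions at some cost to model accuracy.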

Known uses:

  • TensorFlow Federated (TFF) is an open-source framework for machine learning on decentralized data sources.
  • FATE is an open-source project that supports the federated AI ecosystem.
  • Flywheel applies federated analytics and federated learning to optimize AI development.