Federated Learning

Horizontal Federated Learning (HFL) is a privacy-preserving machine learning technique and one of the most mature implementations of federated learning. Originally proposed by Google, HFL has been used extensively in Google's Gboard, where it significantly improves predictive text by learning from users' historical typing data without compromising their privacy.

How Horizontal Federated Learning Works

HFL operates by distributing the machine learning model across numerous clients (such as mobile phones or personal computers) that all possess similar types of data. Instead of pooling data into a central server, each client uses its own data to train the model locally. The updates from these local models (typically gradients of the model's parameters) are then sent to a central server which aggregates them to improve a global model.
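
The computation on each client is ordinary local training. As a minimal, hedged sketch (not the 2PM node implementation; the model choice, data shapes, and function names are hypothetical), a client holding a private feature matrix and labels could compute a logistic-regression gradient for the current global parameters and return only that gradient to the server:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_gradient(params, X, y):
    """Compute a logistic-regression gradient on one client's private data.

    params : (d,) current global model parameters received from the server
    X      : (n, d) local feature matrix -- never leaves the client
    y      : (n,)   local binary labels in {0, 1} -- never leaves the client
    Returns the (d,) gradient that is sent back (possibly encrypted first).
    """
    preds = sigmoid(X @ params)
    return X.T @ (preds - y) / len(y)

# Example: a client with 100 private samples and a 5-dimensional model.
rng = np.random.default_rng(0)
X_local = rng.normal(size=(100, 5))
y_local = (rng.random(100) > 0.5).astype(float)
global_params = np.zeros(5)

update = local_gradient(global_params, X_local, y_local)  # only this leaves the device
```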

Key Components

  1. Server:

    • Role: Maintains and updates the global machine learning model and coordinates the training process across multiple clients.

    • Functionality: The server initiates training rounds, selects a sufficient number of clients for each round, and aggregates the gradients provided by clients to update the global model.

  2. Clients:

    • Role: Each client holds its private data locally and participates in the model training by using this data.

    • Functionality: Clients train a local copy of the model on their own data and send the resulting model updates back to the server.

Phases of Horizontal Federated Learning

  1. Selection Phase:

    • Clients connect to the server when they have sufficient data to participate in training.

    • The server either accepts or rejects these connections based on predefined criteria, such as the number of participants needed for a training round.

  2. Training Phase:

    • Once selected, clients receive the current model parameters and training configurations from the server.

    • Each client trains the model locally using its data and computes the gradients.

  3. Update Phase:

    • Clients send their computed gradients to the server.

    • The server aggregates these gradients, typically by averaging, and updates the global model parameters accordingly, as sketched below.
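
Putting the three phases together, a single server-side round might look like the outline below. This is an illustrative sketch only, not the 2PM node framework's API; `min_clients`, `lr`, and the `train_locally` callback are hypothetical names, and in practice the updates would be encrypted before aggregation (see the next section).

```python
import numpy as np

def run_round(global_params, candidate_clients, train_locally, min_clients=3, lr=0.1):
    """One round of horizontal federated learning: selection, training, update.

    global_params     : current global model parameters (numpy array)
    candidate_clients : clients that connected with enough data to participate
    train_locally     : callback(client, params) -> gradient computed on that
                        client's private data (see the client-side sketch above)
    """
    # 1. Selection phase: only start the round if enough clients connected.
    if len(candidate_clients) < min_clients:
        return global_params  # skip this round
    selected = candidate_clients[:min_clients]

    # 2. Training phase: each selected client trains locally on its own data
    #    using the broadcast parameters and returns its gradient.
    gradients = [train_locally(client, global_params) for client in selected]

    # 3. Update phase: aggregate the gradients (here, by simple averaging)
    #    and update the global model parameters.
    avg_gradient = np.mean(gradients, axis=0)
    return global_params - lr * avg_gradient
```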

Security Measures in Federated Learning

Ensuring the privacy and security of training data during the aggregation process is crucial. This is addressed by:

  • Secure Aggregation: Clients encrypt their gradients before sending them to the server. The server then performs aggregation on the encrypted data, ensuring that no individual client's sensitive information is exposed (see the sketch after this list).

  • Privacy Enhancements: Techniques such as differential privacy may be applied where noise is added to the data or gradients to prevent potential leakage of sensitive information.
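
As a hedged illustration of both measures (not the 2PM on-chain secure aggregation protocol), pairwise additive masking lets the server recover only the sum of the clients' gradients: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the aggregate, and differentially private noise can be layered on top. All names below are hypothetical, and a real protocol would derive each pairwise mask from a shared secret rather than a single RNG.

```python
import numpy as np

def masked_updates(gradients, seed=42, dp_sigma=0.0):
    """Mask each client's gradient so only the sum is recoverable by the server.

    gradients : list of (d,) client gradients, one per client
    dp_sigma  : optional Gaussian noise scale for differential privacy
    Returns the per-client masked vectors that would actually be uploaded.
    """
    rng = np.random.default_rng(seed)  # stand-in for pairwise-agreed secrets
    n, d = len(gradients), len(gradients[0])
    masked = [g.astype(float).copy() for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=d)   # secret shared by clients i and j
            masked[i] += mask           # client i adds the pairwise mask
            masked[j] -= mask           # client j subtracts it, so it cancels in the sum
        if dp_sigma > 0:
            masked[i] += rng.normal(scale=dp_sigma, size=d)  # optional DP noise
    return masked

# The server sees only masked vectors, yet their sum equals the true aggregate.
grads = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
uploads = masked_updates(grads)
print(np.sum(uploads, axis=0))   # [6. 6. 6. 6.] -- the true aggregate
print(uploads[0])                # an individual upload reveals little about grads[0]
```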