
[S] Data Storage and Access


Last updated 10 months ago

Data Storage Process

  1. Data Slicing: Initially, data is segmented into equal-sized slices to manage the upload process effectively and enhance security.

  2. Hashing: Each data slice is hashed individually. These hashes serve as unique identifiers and verification tools for each slice of data.

  3. Merkle Tree Storage: The individual hashes are stored in a Merkle Tree structure. This not only secures data integrity by committing to every data slice under a single root hash, but also makes verification of individual slices efficient.

  4. Contract Submission: The root hash of the Merkle Tree, representing the entire dataset, is submitted to the smart contract on the 0G platform. This step is crucial for tracking and verifying data integrity before and after storage.

  5. Data Upload to Storage Nodes: Each slice of data is then uploaded sequentially to the storage nodes on the 0G network. The smart contract verifies each slice using its hash before it is considered successfully uploaded. This verification ensures that each piece of data stored in the network is intact, unaltered, and securely validated.
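The slicing and Merkle Tree steps above can be sketched as follows. This is a minimal illustration, not the 2PM or 0G SDK: the slice size, the use of SHA-256, and the duplicate-last-hash handling of odd levels are all assumptions made for the sketch.

```python
import hashlib

SLICE_SIZE = 256 * 1024  # illustrative slice size (256 KiB)

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def slice_data(data: bytes, size: int = SLICE_SIZE) -> list[bytes]:
    # Step 1: segment the data into equal-sized slices
    return [data[i:i + size] for i in range(0, len(data), size)]

def merkle_root(slices: list[bytes]) -> bytes:
    # Step 2: hash each slice individually
    level = [sha256(s) for s in slices]
    # Step 3: fold the slice hashes into a binary Merkle Tree
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Step 4: this root hash is what gets submitted to the smart contract
root = merkle_root(slice_data(b"example dataset"))
print(root.hex())
```

Each slice uploaded in step 5 can then be checked against this root before it is accepted by the storage nodes.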

Data Retrieval Process

  • Utilizing the Merkle Root Hash: Data retrieval is driven by the Merkle Tree's root hash. Users, applications, or nodes can request specific data slices by referencing the root hash, which ensures that the retrieved data is correct and unaltered from its original form. The Merkle Tree structure allows efficient verification of individual data slices, so data integrity is maintained even when accessing small segments of the overall dataset.
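Verifying a retrieved slice against the root hash can be sketched with a standard Merkle proof check. The function name and proof layout (a list of sibling hashes from leaf to root) are illustrative assumptions, not the actual 2PM or 0G API.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_slice(slice_bytes: bytes, index: int,
                 proof: list[bytes], root: bytes) -> bool:
    """Check a retrieved slice against the on-chain Merkle root.

    `proof` holds the sibling hash at each level, leaf to root;
    `index` is the slice's position, used to decide hashing order.
    """
    node = sha256(slice_bytes)
    for sibling in proof:
        if index % 2 == 0:
            node = sha256(node + sibling)   # node is a left child
        else:
            node = sha256(sibling + node)   # node is a right child
        index //= 2
    return node == root
```

Only the root hash needs to be trusted: a proof of `log2(n)` sibling hashes suffices to verify any one of `n` slices without downloading the rest of the dataset.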

(Figure: flow of the storage smart contract)