FHE Machine Learning
2PM.Network integrates Fully Homomorphic Encryption (FHE) into its Node Framework using ZAMA Concrete ML, a cutting-edge, open-source machine learning framework that ensures privacy preservation throughout the data science workflow.
ZAMA Concrete ML provides the backbone for 2PM.Network's FHE capabilities. It is designed to let data scientists, regardless of their cryptography background, apply FHE in their machine learning projects. The main appeal of Concrete ML lies in its ability to interface with popular machine learning libraries like scikit-learn and PyTorch, translating conventional ML models into their FHE counterparts with minimal code changes.
Key Features of 2PM FHE Machine Learning
Automatic Model Conversion:
Supports Linear, Tree-Based, and Neural Network Models: Concrete ML enables the conversion of a wide array of machine learning models into FHE-compatible versions. This includes linear models, tree-based models, and even complex structures like neural networks.
Use of Familiar APIs: By supporting familiar APIs from libraries such as scikit-learn and PyTorch, Concrete ML allows seamless integration into existing workflows, making the transition to FHE-based models straightforward for data scientists.
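To illustrate what "familiar APIs" means in practice, the sketch below is a hypothetical pure-Python stand-in (the class `FheLinearRegression` and its internals are illustrative, not Concrete ML's actual implementation) showing the scikit-learn-style fit/predict call sequence, with the extra compile step that FHE-enabled models add:

```python
# Hypothetical stand-in illustrating the fit -> compile -> predict call
# sequence of an FHE-enabled model; NOT Concrete ML's real internals.

class FheLinearRegression:
    """Toy one-feature linear model with a scikit-learn-style API."""

    def fit(self, xs, ys):
        # Ordinary least squares on plaintext data (closed form, 1 feature).
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        self.coef_ = num / den
        self.intercept_ = my - self.coef_ * mx
        self.compiled_ = False
        return self

    def compile(self):
        # In an FHE framework, this step quantizes the model and builds
        # the encrypted-arithmetic circuit; here it just flips a flag.
        self.compiled_ = True
        return self

    def predict(self, xs):
        assert self.compiled_, "call compile() before predicting"
        return [self.coef_ * x + self.intercept_ for x in xs]

model = FheLinearRegression().fit([0, 1, 2, 3], [1, 3, 5, 7])  # y = 2x + 1
model.compile()
print(model.predict([4, 5]))  # → [9.0, 11.0]
```

The point of the sketch is the shape of the workflow: an existing scikit-learn pipeline keeps its fit and predict calls and gains one compile step before encrypted inference.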
Computing on Encrypted Data:
Privacy by Design: The fundamental advantage of FHE is the ability to perform computations directly on encrypted data. Models deployed via 2PM.Network can therefore operate on data without ever exposing the raw, decrypted values, maintaining strict data confidentiality.
No Compromise on Features: Operating on encrypted data does not limit application functionality. Models can still perform complex computations and return results that closely match their plaintext counterparts, all while the data remains encrypted.
Encrypted Data Pre-Processing:
DataFrame Paradigm: Similar to unencrypted data workflows, 2PM.Network supports pre-processing of encrypted data using a DataFrame-style interface, making it intuitive for those familiar with data science toolkits like pandas.
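A toy sketch of the idea: a dict-backed table whose cells are "encrypted" under a trivial additive one-time-pad scheme, supporting a pre-processing step directly on ciphertexts. Everything here (the class name, the scheme, the API) is illustrative and bears no relation to Concrete ML's actual encrypted-DataFrame interface:

```python
import random

M = 2 ** 32  # modulus for a toy additive scheme (illustrative only)

class ToyEncryptedFrame:
    """Dict-of-columns table whose cells are stored as (value + key) % M."""

    def __init__(self, columns, key):
        self.key = key
        self.data = {name: [(v + key) % M for v in vals]
                     for name, vals in columns.items()}

    def add_constant(self, name, c):
        # A pre-processing step applied directly to the ciphertexts;
        # the plaintext values are never revealed.
        self.data[name] = [(v + c) % M for v in self.data[name]]

    def decrypt(self, name):
        return [(v - self.key) % M for v in self.data[name]]

key = random.randrange(M)
df = ToyEncryptedFrame({"age": [30, 41, 25]}, key)
df.add_constant("age", 1)  # operate without decrypting
assert df.decrypt("age") == [31, 42, 26]
```

This one-time-pad scheme offers none of FHE's security or generality; it only shows why a DataFrame-style interface over encrypted columns can feel familiar to pandas users.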
Typical Workflow
Training the Model:
Initially, models are trained on plaintext (unencrypted) data using conventional methods provided by libraries like scikit-learn. This step is crucial for establishing the model's accuracy and effectiveness before encryption.
Compiling the Model:
Quantization: Since FHE operates over integers, the trained model is quantized: its floating-point weights and activations are mapped to integer values, preparing it for compilation. Quantization can introduce a small accuracy loss, so results should be checked against the plaintext model.
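As a rough illustration of uniform affine quantization (the scheme, bit-width, and helper names here are illustrative; Concrete ML selects its own quantization parameters):

```python
def quantize(values, n_bits=8):
    """Map floats onto unsigned integers in [0, 2**n_bits - 1]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** n_bits - 1)
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo  # integers plus the parameters needed to invert

def dequantize(q, scale, lo):
    return [x * scale + lo for x in q]

weights = [-0.51, 0.0, 0.27, 1.03]
q, scale, lo = quantize(weights)
approx = dequantize(q, scale, lo)

# Each recovered weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

The trade-off is explicit: smaller bit-widths shrink the FHE circuit but widen the quantization step, so per-weight error grows.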
Conversion and Compilation: Post-quantization, the model is converted into a Concrete Python program and compiled into its FHE equivalent, ready for deployment on encrypted data.
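The compiled circuit evaluates the model with integer arithmetic only. The sketch below (plain Python, illustrative fixed-point scale) shows an integer-only dot product whose single final rescale recovers the floating-point prediction; in a real FHE deployment the integer operations would run homomorphically on ciphertexts:

```python
# Integer-only linear inference: quantize weights and inputs, compute the
# dot product over integers, then rescale once at the end.

SCALE = 128  # fixed-point scale factor (illustrative)

def to_int(v):
    return round(v * SCALE)

w = [0.5, -0.25, 1.0]          # float weights
x = [2.0, 4.0, 1.0]            # float inputs

w_q = [to_int(v) for v in w]   # [64, -32, 128]
x_q = [to_int(v) for v in x]   # [256, 512, 128]

acc = sum(wi * xi for wi, xi in zip(w_q, x_q))  # integer arithmetic only
y = acc / (SCALE * SCALE)                       # one final rescale

y_float = sum(wi * xi for wi, xi in zip(w, x))  # reference float result
assert abs(y - y_float) < 1e-9
```

Because only the final rescale leaves the integer domain, every intermediate step is expressible in the integer-only arithmetic FHE supports.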
Performing Inference:
The FHE version of the model can perform inference directly on encrypted data. This is particularly useful in sensitive applications where data privacy is paramount, such as in healthcare or financial services.
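To build intuition for inference "directly on encrypted data", here is a toy additive one-time-pad scheme: the server adds two ciphertexts without ever seeing the plaintexts, and only the key-holding client can decrypt the result. This toy scheme has none of real FHE's security or expressive power (FHE also supports multiplication and deep circuits); it only illustrates the compute-without-decrypting pattern:

```python
import random

M = 2 ** 32  # modulus (illustrative)

def encrypt(m, key):
    return (m + key) % M

def decrypt(c, key):
    return (c - key) % M

# Client encrypts two values under independent keys and sends ciphertexts.
k1, k2 = random.randrange(M), random.randrange(M)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# Server computes on the ciphertexts without seeing 20 or 22.
c_sum = (c1 + c2) % M

# Client, holding the keys, decrypts the server's result.
assert decrypt(c_sum, (k1 + k2) % M) == 42
```

The same pattern underlies FHE inference: the client keeps the keys, the server evaluates the compiled model on ciphertexts, and only the client can read the prediction.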