opp/ai (Optimistic Privacy-Preserving AI), invented by ORA, is an onchain AI framework designed to address the challenges of privacy and computational efficiency in blockchain-based machine learning systems. [1] It integrates the privacy guarantees of Zero-Knowledge Machine Learning (zkML) with the efficiency of Optimistic Machine Learning (opML), creating a hybrid model for secure AI services onchain. [2] The framework is designed to be flexible, meaning that advances in the underlying zkML technology can be directly incorporated into opp/ai.
The opp/ai framework combines two core technologies to balance privacy with efficiency.
Zero-Knowledge Machine Learning (zkML) is used in the opp/ai framework to provide privacy. It leverages zero-knowledge proofs (ZKPs) to verify computations without revealing the underlying sensitive data or model parameters. [1] This allows for the protection of confidential information during AI inference. However, generating ZKPs is computationally intensive and can be costly, which is a primary reason for opp/ai's hybrid approach. [2]
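The zkML data flow can be sketched as follows. This is a toy illustration only: a hash commitment stands in for a real zero-knowledge proof, so it shows what the verifier sees (the public input, the output, and a commitment to the weights, never the weights themselves) but, unlike an actual SNARK or STARK, it provides no soundness or zero-knowledge guarantees. The linear "model" and all function names are assumptions for illustration, not part of opp/ai.

```python
import hashlib

def commit(weights):
    """Publish a commitment to the private weights (weights stay secret)."""
    return hashlib.sha256(repr(weights).encode()).hexdigest()

def prove(weights, x):
    """Prover: run inference on private weights, emit (output, 'proof').
    A real zkML prover would emit a succinct ZKP here."""
    y = sum(w * x for w in weights)  # private inference (toy linear model)
    proof = hashlib.sha256(f"{commit(weights)}|{x}|{y}".encode()).hexdigest()
    return y, proof

def verify(commitment, x, y, proof):
    """Verifier: checks the proof using only public data --
    the commitment, input, and claimed output."""
    expected = hashlib.sha256(f"{commitment}|{x}|{y}".encode()).hexdigest()
    return proof == expected

private_weights = [3, 5]      # never revealed onchain
c = commit(private_weights)   # only the commitment is published
y, proof = prove(private_weights, 2)
print(verify(c, 2, y, proof))  # True
```

The key property being illustrated is the interface: the verifier never receives `private_weights`, only `c`, the input, the output, and the proof.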
Optimistic Machine Learning (opML) is used to ensure computational efficiency. Unlike zkML, opML uses a fraud-proof system: computations are executed offchain, and their results are submitted to the blockchain under an optimistic assumption of correctness. [1] These results are subject to a challenge period during which they can be disputed. This method significantly reduces the onchain computational load, offering a more scalable and efficient solution for integrating ML with blockchain technology. [2]
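The optimistic flow above can be sketched as a small state machine. Names such as `CHALLENGE_WINDOW` and the dict-based "chain" are illustrative assumptions, not part of the opML specification, and the challenge handler simply recomputes the whole inference rather than running the step-by-step dispute game opML actually uses.

```python
CHALLENGE_WINDOW = 10  # challenge period, measured here in abstract blocks

def run_model(x):
    """Stand-in for an offchain ML inference."""
    return x * 2 + 1

class OptimisticChain:
    def __init__(self):
        self.claims = {}  # claim_id -> {input, result, submitted_at, status}
        self.block = 0

    def submit(self, claim_id, x, result):
        # Results are accepted optimistically: no proof at submission time.
        self.claims[claim_id] = {"input": x, "result": result,
                                 "submitted_at": self.block, "status": "pending"}

    def challenge(self, claim_id):
        c = self.claims[claim_id]
        if self.block - c["submitted_at"] > CHALLENGE_WINDOW:
            return False  # challenge period has closed
        # Toy dispute resolution: recompute the whole inference.
        correct = run_model(c["input"])
        c["status"] = "accepted" if correct == c["result"] else "rejected"
        return c["status"] == "rejected"

    def finalize(self, claim_id):
        c = self.claims[claim_id]
        if c["status"] == "pending" and self.block - c["submitted_at"] > CHALLENGE_WINDOW:
            c["status"] = "accepted"  # unchallenged results become final
        return c["status"]

chain = OptimisticChain()
chain.submit("good", 3, run_model(3))  # honest result
chain.submit("bad", 3, 999)            # fraudulent result
chain.challenge("bad")                 # fraud detected within the window
chain.block += CHALLENGE_WINDOW + 1    # window passes for the honest claim
print(chain.finalize("good"))          # accepted
print(chain.finalize("bad"))           # rejected
```

The design point illustrated is that the chain does essentially no work for honest results; computation is only incurred when a claim is disputed.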
The opp/ai framework operates by strategically partitioning a machine learning model into different submodels based on its privacy requirements. This creates a hybrid execution model that balances privacy and efficiency. [2]
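The partitioning step might be sketched as below. The layer names and the boolean `private` flag are hypothetical; in opp/ai the split is determined by the model's architecture and its actual privacy requirements.

```python
def partition(layers):
    """Split a model into a zkML submodel (privacy-sensitive layers,
    verified with ZKPs) and an opML submodel (public layers, verified
    optimistically)."""
    zkml = [name for name, private in layers if private]
    opml = [name for name, private in layers if not private]
    return zkml, opml

model = [
    ("embedding", False),
    ("attention_lora", True),   # proprietary fine-tuned weights
    ("feed_forward", False),
    ("output_head", False),
]
zkml_part, opml_part = partition(model)
print(zkml_part)  # ['attention_lora']
print(opml_part)  # ['embedding', 'feed_forward', 'output_head']
```

Only the small privacy-sensitive portion pays the cost of ZKP generation; everything else runs under the cheaper optimistic scheme.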
The outputs from the zkML submodels can serve as inputs for the opML submodels, allowing for a seamless integration of the two systems. For the opML components, disputes are resolved through an interactive game that utilizes a Fraud Proof Virtual Machine (FPVM) to verify computation steps onchain. [1]
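The interactive dispute game can be sketched as a bisection over an execution trace. The single-step transition `step` and the trace format are illustrative assumptions; in opp/ai, only the one disputed step found by the search is re-executed onchain inside the FPVM.

```python
def step(state):
    """One deterministic VM step (toy transition function)."""
    return state + 1

def trace(start, n):
    """Full execution trace of n steps from a starting state."""
    states = [start]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

def bisect_dispute(honest, claimed):
    """Binary-search for the first step where the two parties'
    traces diverge; only that step needs onchain re-execution."""
    lo, hi = 0, len(claimed) - 1  # parties agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest[mid] == claimed[mid]:
            lo = mid
        else:
            hi = mid
    return hi  # index of the first disputed step

honest = trace(0, 8)                        # [0, 1, ..., 8]
claimed = honest[:5] + [99, 100, 101, 102]  # diverges at step 5
i = bisect_dispute(honest, claimed)
# The FPVM re-executes just step i from the last agreed state:
assert step(claimed[i - 1]) != claimed[i]   # fraud proven with one step
print(i)  # 5
```

The bisection keeps the onchain cost logarithmic in the trace length: a computation of millions of steps is settled by re-executing exactly one of them.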
A primary application of opp/ai is concealing specific fine-tuning weights of a model where the majority of the weights are already public. For example, the proprietary LoRA weights in the attention layers of an open-source model like Stable Diffusion can be protected using the opp/ai framework. This preserves the competitive advantage of the unique adaptations while the base model remains accessible. [1]
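The LoRA structure that makes this split natural can be sketched as follows: the effective weight is the public base weight plus a low-rank update, W' = W + B·A. The matrix sizes and values here are illustrative; in a real Stable Diffusion LoRA the update targets the attention projection matrices, and under opp/ai the public part would run as opML while the private factors A and B would be proven with zkML.

```python
def matmul(X, Y):
    """Plain-list matrix multiply (no external dependencies)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    """Element-wise matrix addition."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Public base weight (2x2) -- analogous to the open-source model weights.
W = [[1.0, 0.0],
     [0.0, 1.0]]

# Private rank-1 LoRA factors: B is 2x1, A is 1x2. These are the
# proprietary fine-tuning weights that zkML would keep concealed.
B = [[0.5], [1.0]]
A = [[2.0, 3.0]]

# Effective weight W' = W + B @ A; only W is ever made public.
W_eff = add(W, matmul(B, A))
print(W_eff)  # [[2.0, 1.5], [2.0, 4.0]]
```

Because A and B are tiny relative to W, the expensive zkML proof covers only a small fraction of the total computation.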