System Architecture
Last updated
Decentralized Learning: The nodes collaborate to obtain the updated model, avoiding the single-server issues that arise in centralized learning. In this setup, model updates are shared among interconnected nodes without relying on a central system. The model's performance is therefore strongly shaped by the network topology we choose.
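To illustrate why topology matters, here is a minimal sketch of decentralized (gossip-style) parameter averaging. All names are illustrative, not part of the protocol: each node averages its parameter vector with its neighbors', so the `neighbors` map (the topology) directly controls how fast updates propagate.

```python
def gossip_round(params, neighbors):
    """One synchronous gossip round: each node replaces its parameter
    vector with the mean of its own and its neighbors' vectors."""
    new_params = {}
    for node, vec in params.items():
        group = [vec] + [params[n] for n in neighbors[node]]
        new_params[node] = [sum(vals) / len(group) for vals in zip(*group)]
    return new_params

# Hypothetical example: three nodes, fully connected topology.
params = {0: [0.0], 1: [3.0], 2: [6.0]}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
params = gossip_round(params, neighbors)
# Fully connected: every node reaches the global mean (3.0) in one round.
# A sparser topology (e.g. a ring) would need more rounds to converge.
```

With a ring or chain topology the same code converges more slowly, which is the sense in which the chosen topology shapes model performance.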
The demand release module primarily consists of model demanders, who are responsible for submitting demand tasks and paying both the model training fee and the verification fee.
Once the model demander pays the fees and supplies the initial model data, the system will distribute the relevant information about the initial model to each edge server, allowing local devices to download it.
The aggregation module consists of edge servers near local devices or data sources, like base stations with computing and storage capabilities. These servers store local model gradient parameters and uploaded block data, aggregating the global model while verifying the accuracy of the gradients to prevent dishonest devices from submitting incorrect information.
Local devices do not participate in the blockchain consensus process; instead, the edge servers in the aggregation module reach consensus among themselves, reducing communication delays.
The training module consists of local devices that train local models using local data samples. During model training, the system matches local devices with edge servers based on our proposed algorithm. Each local device uploads its model parameters only to its associated edge server and downloads the global model from that server.
The verification module is made up of select local devices. In each round of global model aggregation, the edge server sends the local model gradient parameters uploaded by the local devices to the verification devices, which use their own datasets to assess the quality of the local models and return the results to their associated edge servers.
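The verification step above can be sketched as follows. This is a simplified illustration under assumed names, not the protocol's actual interface: a verification device scores an uploaded local model against its own labelled dataset and returns the score to its edge server.

```python
def verify_local_model(predict, dataset):
    """Score an uploaded local model on the verifier's own data:
    return the fraction of samples the model labels correctly."""
    correct = sum(1 for x, y in dataset if predict(x) == y)
    return correct / len(dataset)

# Hypothetical usage: a simple threshold classifier checked against
# a verification device's held-out samples.
model = lambda x: 1 if x > 0.5 else 0
verifier_data = [(0.2, 0), (0.9, 1), (0.7, 1), (0.1, 0)]
score = verify_local_model(model, verifier_data)
# score == 1.0: the verifier would report full agreement to its edge server.
```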
Processing:
Step 1: A participating device downloads the current global model.
Step 2: The device improves the model locally using the new data it has collected.
Step 3: The model changes are summarized as an update and sent to the cloud over an encrypted channel.
Step 4: The cloud receives updates from many users, aggregates them all, and builds the final model.
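The four steps above can be sketched as a minimal federated-averaging round. All names and the toy model are illustrative assumptions (and the real system, per Step 3, also encrypts updates in transit):

```python
def local_update(global_w, local_data, lr=0.1):
    """Steps 1-2: start from the downloaded global model and take one
    gradient step of a 1-D least-squares fit on the device's data."""
    grad = sum(2 * (global_w * x - y) * x for x, y in local_data) / len(local_data)
    return global_w - lr * grad

def aggregate(updates, sample_counts):
    """Step 4: sample-weighted average of the uploaded device updates."""
    total = sum(sample_counts)
    return sum(w * n for w, n in zip(updates, sample_counts)) / total

# Hypothetical run: two devices whose data follows y = 2x.
global_w = 0.0
device_data = [[(1.0, 2.0)], [(1.0, 2.0), (2.0, 4.0)]]
updates = [local_update(global_w, d) for d in device_data]   # steps 1-3
global_w = aggregate(updates, [len(d) for d in device_data])  # step 4
# global_w is now 0.8, one round closer to the true weight 2.0.
```

Repeating the round moves the global weight progressively toward the value the devices' data agrees on, without any raw data leaving a device.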
Incentive Mechanism
During each round of training, the system will store the verification results returned by the verification device, the number of samples uploaded by the training device, and the training time of the device in the block.
After a round of global model aggregation, the system reads the data stored in the block, calculates each training device's reward according to the incentive mechanism, and sends it to each local device.
The incentive mechanism issues rewards or penalties according to each local device's contribution to model training. To ensure that verification devices report honestly during federated learning, their verification results can be re-verified by other devices, and dishonest validation behavior is punished.
At the same time, to improve the fairness of reward distribution, the system allocates the training fee in proportion to the contribution made by each training device.
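A minimal sketch of this proportional split, assuming a simple scalar contribution score per device (the actual scoring formula, which the text says combines verification results, sample counts, and training time, is not specified here):

```python
def allocate_rewards(training_fee, contributions):
    """Split the training fee among devices in proportion to their
    contribution scores (hypothetical scoring; names are illustrative)."""
    total = sum(contributions.values())
    return {dev: training_fee * c / total for dev, c in contributions.items()}

# Hypothetical example: dev_b contributed twice as much as dev_a,
# so it receives twice the share of a 90-token fee.
rewards = allocate_rewards(90.0, {"dev_a": 1.0, "dev_b": 2.0})
# rewards == {"dev_a": 30.0, "dev_b": 60.0}
```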
By installing our lightweight SDKs on devices such as smartphones, computers, cameras, or industrial equipment, data contributors can help train AI models while maintaining their privacy.
Efficient and Lightweight: The SDKs are optimized to run on low-power devices without affecting their regular functionality. This is particularly important in settings such as manufacturing or energy management, where continuous performance is essential.
For data consumers, OpenPad provides access to some of the most advanced and precisely optimized models:
How It Works
Model Selection: Data consumers can easily explore and choose from a wide range of pre-trained models, which have been continually refined through data and devices from diverse sources. These models are not only current but also optimized for real-world use.
Custom Models: If you have specific requirements, you can publish your own model on the OpenPad platform. This allows you to motivate a network of devices to train your model using localized data, creating a “federated model” that benefits from diverse inputs across various companies and regions.
Collaboration Opportunities: OpenPad fosters unparalleled collaboration between organizations, industries, and even across countries. Federated models can be trained by devices from different entities, all contributing to a shared objective while ensuring data privacy and security. This collaborative model opens new avenues for innovation and ensures that AI models stay at the forefront of technology.
Decentralization of Data: The system eliminates the need to centralize all data, enabling model training on varied datasets while preserving data privacy and minimizing extensive data transfers.
Data Privacy & Protection: The system ensures compliance with privacy regulations, reduces the risk of data breaches, and enables the development of more robust models. These models benefit from a wider and more diverse dataset, which is often unattainable in centralized frameworks.
Monetize AI & Data: OPAD Protocol incentivizes validator nodes and miners with OPAD tokens for maintaining the network. Users who contribute digital data for service or machine learning improvements are also rewarded.
Enhanced Scalability: The system accommodates a wide range of devices and networks, from smartphones to IoT solutions, making it highly scalable. This adaptability allows businesses to deploy machine learning solutions in various contexts.
Model Accuracy and Diversity: By utilizing a variety of data sources, the system creates accurate models that reflect diverse datasets. It also incorporates real-world data variations, improving the robustness and generalizability of machine learning models. Additionally, federated learning enables the inclusion of underrepresented data segments, fostering fairness and inclusivity in machine learning applications.