Clients publish containerized execution models (sources) that include code, dependencies and execution parameters. These serve as reusable definitions for future tasks submitted to the network.
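The protocol's actual schema is not specified here; as a rough sketch, a published source might bundle an image reference, an entrypoint, and hardware requirements like the following (all field names are illustrative assumptions, not the real OpenGPU schema):

```typescript
// Hypothetical shape of a published source; field names are assumptions,
// not the protocol's actual schema.
interface SourceDefinition {
  name: string;              // human-readable identifier
  containerImage: string;    // OCI image reference bundling code + dependencies
  entrypoint: string[];      // command executed inside the container
  gpuRequirements: {
    minVramGb: number;       // minimum GPU memory needed
    architectures: string[]; // supported GPU families
  };
  defaultParams: Record<string, string>; // execution parameters tasks may override
}

const inferenceSource: SourceDefinition = {
  name: "sd-inference-v1",
  containerImage: "registry.example.com/sd-inference:1.0",
  entrypoint: ["python", "serve.py"],
  gpuRequirements: { minVramGb: 16, architectures: ["ampere", "ada"] },
  defaultParams: { steps: "30", resolution: "1024" },
};
```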
Clients submit tasks referencing their published sources. Each task includes execution parameters, input data and a reward paid in OpenGPU tokens.
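A hypothetical client-side submission could look like the sketch below; the `submitTask` function, field names, and token decimals are assumptions for illustration, not the real OpenGPU SDK:

```typescript
// Hypothetical task submission; in a real client this would sign and
// broadcast a transaction to the OpenGPU Layer-1.
interface TaskRequest {
  sourceId: string;               // references a previously published source
  params: Record<string, string>; // overrides the source's default parameters
  inputUri: string;               // where providers fetch input data
  rewardOGPU: bigint;             // reward escrowed in OpenGPU tokens
}

async function submitTask(task: TaskRequest): Promise<string> {
  if (task.rewardOGPU <= 0n) throw new Error("reward must be positive");
  return `task-${Date.now()}`; // placeholder for an on-chain task id
}

const taskId = await submitTask({
  sourceId: "sd-inference-v1",
  params: { steps: "50" },
  inputUri: "ipfs://example-input-cid",
  rewardOGPU: 25_000_000_000_000_000_000n, // 25 OGPU, assuming 18 decimals
});
```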
Providers scan available sources and register for those matching their GPU capabilities and profitability goals, forming specialized pools for different workloads and GPU types.
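Provider-side matching can be thought of as a simple filter over published listings; the types and the profitability margin below are assumed for illustration:

```typescript
// Illustrative provider-side selection: filter published sources by local
// GPU capability and a profitability threshold. All types are assumptions.
interface GpuProfile {
  vramGb: number;
  architecture: string;
  costPerHourOGPU: number; // operating cost of this GPU
}

interface SourceListing {
  id: string;
  minVramGb: number;
  architectures: string[];
  avgRewardPerHourOGPU: number; // observed average earnings
}

function selectSources(gpu: GpuProfile, listings: SourceListing[], minMargin = 1.2): string[] {
  return listings
    .filter(s => s.minVramGb <= gpu.vramGb)
    .filter(s => s.architectures.includes(gpu.architecture))
    // only register where expected reward beats operating cost by the margin
    .filter(s => s.avgRewardPerHourOGPU >= gpu.costPerHourOGPU * minMargin)
    .map(s => s.id);
}
```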
Providers compete to execute tasks, and the first provider to return a valid output earns the reward.
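A minimal sketch of the first-valid-output rule, assuming results arrive in on-chain order and a pluggable verifier; the actual settlement logic lives on the OpenGPU Layer-1 and is not shown:

```typescript
// Sketch of first-valid-result settlement with a hypothetical verifier.
interface TaskResult {
  providerId: string;
  taskId: string;
  outputHash: string;
}

type Verifier = (r: TaskResult) => boolean;

function settle(results: TaskResult[], isValid: Verifier): string | null {
  // results are assumed ordered by on-chain arrival; first valid one wins
  for (const r of results) {
    if (isValid(r)) return r.providerId; // winner receives the escrowed reward
  }
  return null; // no valid result; the reward could be refunded to the client
}
```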


Results are trusted through several safeguards (one possible validation scheme is sketched after this list):
- Each result undergoes cryptographic checks before acceptance.
- Each provider's historical accuracy and reliability are tracked on-chain.
- Execution environments are isolated and hardened for safety.
- For critical tasks, multiple providers independently validate results to ensure correctness.
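For the multi-provider case, one plausible scheme (an assumption, not the protocol's documented mechanism) is hash-based quorum agreement over the outputs:

```typescript
// Illustrative redundant validation: accept a result only when a quorum of
// independent providers produced the same output hash.
import { createHash } from "node:crypto";

function outputHash(output: Uint8Array): string {
  return createHash("sha256").update(output).digest("hex");
}

function quorumAccept(hashes: string[], quorum: number): string | null {
  const counts = new Map<string, number>();
  for (const h of hashes) counts.set(h, (counts.get(h) ?? 0) + 1);
  for (const [h, n] of counts) {
    if (n >= quorum) return h; // accepted output hash
  }
  return null; // no agreement; the task may be re-run or escalated
}

// e.g. three providers ran the task; require two matching results
const accepted = quorumAccept(
  [
    outputHash(new Uint8Array([1])),
    outputHash(new Uint8Array([1])),
    outputHash(new Uint8Array([2])),
  ],
  2,
);
```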
This competitive supply-and-demand model forms a self-regulating marketplace: high-demand tasks attract more providers, pricing adjusts automatically to network conditions, and providers tune their hardware to maximize returns. Clients receive the best performance for their budget without manual scheduling.
Because the protocol is natively integrated with the OpenGPU Layer-1, it offers higher throughput and greater transparency than typical decentralized compute networks.
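As a toy illustration of how pricing could track network conditions, consider a suggested reward that scales with the ratio of open tasks to registered providers; the formula and bounds below are invented for this example:

```typescript
// Toy demand-driven price adjustment: few providers -> suggested reward
// rises; many competing providers -> it falls.
function suggestedReward(
  baseReward: number,     // reference reward in OGPU
  pendingTasks: number,   // open tasks for this workload
  activeProviders: number // providers registered for it
): number {
  const demandRatio = pendingTasks / Math.max(activeProviders, 1);
  // clamp the multiplier so prices move smoothly with network conditions
  const multiplier = Math.min(Math.max(demandRatio, 0.5), 3);
  return baseReward * multiplier;
}

console.log(suggestedReward(10, 40, 8));  // high demand -> 30 OGPU
console.log(suggestedReward(10, 5, 20));  // oversupply  -> 5 OGPU
```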
