Inference: The process of running a pre-trained AI model to generate predictions or outputs from new input data. Inference is the most common GPU workload type, used for chatbots, image generation, and more.
Category: AI & Compute

Training: The process of developing an AI model from scratch or fine-tuning an existing one using large datasets. Training requires significant GPU resources and is a key use case for the OpenGPU Network.
Category: AI & Compute

Batch Processing: Handling multiple compute tasks together to maximize GPU utilization and throughput. Batch processing is more efficient than processing tasks one at a time.
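As an illustration of the idea, a submitter-side sketch in Python that groups individual tasks into fixed-size batches (the helper and sizes are hypothetical, not an OpenGPU API):

```python
def make_batches(tasks, batch_size):
    """Group tasks into fixed-size batches so a GPU can process them together."""
    return [tasks[i:i + batch_size] for i in range(0, len(tasks), batch_size)]

prompts = [f"prompt-{i}" for i in range(10)]
batches = make_batches(prompts, 4)  # 3 batches: sizes 4, 4, 2
```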
Category: AI & Compute

BridgeX: A cross-chain protocol that enables moving tokens between different blockchain networks. BridgeX allows users to transfer OGPU tokens between OpenGPU, Ethereum, and other supported chains.
Category: Network

CEX: Centralized Exchange. A traditional exchange platform operated by a company that facilitates token trading. OGPU is available on CEXes like MEXC and Gate.io.
Category: Network

Checkpoint: A progress save point for long-running jobs. If a provider goes offline, execution can resume from the last checkpoint on a different node rather than starting over.
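A minimal sketch of checkpoint-and-resume in Python, assuming a job that simply counts steps and persists progress as JSON (the file layout and field names are illustrative):

```python
import json
import os

def run_job(total_steps, checkpoint_path):
    """Run a job step by step, resuming from the last checkpoint if one exists."""
    step = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            step = json.load(f)["step"]  # resume instead of starting over
    while step < total_steps:
        step += 1  # one unit of real work would happen here
        with open(checkpoint_path, "w") as f:
            json.dump({"step": step}, f)  # persist progress after each step
    return step
```

If the original provider dies after step 3, a different node calling `run_job` with the same checkpoint file picks up at step 4 rather than step 1.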
Category: Infrastructure

Consensus Validation: A validation process where multiple providers independently execute the same task and their results are compared. This ensures correctness for critical workloads.
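The comparison step can be sketched as a majority vote in Python (the quorum size and result format are illustrative):

```python
from collections import Counter

def validate_by_consensus(results, quorum=2):
    """Accept the value reported identically by at least `quorum` providers."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

validate_by_consensus(["0xabc", "0xabc", "0xdef"])  # returns "0xabc"
```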
Category: Blockchain

Container: An isolated execution environment used to run workloads securely on provider hardware. Containers ensure that each task runs independently without interfering with other processes.
Category: Infrastructure

dApp: Decentralized Application. A software application that runs on a blockchain network rather than centralized servers. OpenGPU's management dApp allows providers to track rewards and monitor node performance.
Category: Integration

Decentralized Compute: A distributed computing model where compute resources are provided by a network of independent operators rather than a single centralized cloud provider. This reduces costs, improves resilience, and democratizes access to GPU power.
Category: Network

DePIN: A blockchain-based approach to building physical infrastructure networks through decentralized incentives. OpenGPU is a DePIN project that incentivizes GPU owners to contribute compute resources.
Category: Network

DEX: Decentralized Exchange. A peer-to-peer marketplace for trading tokens without a central intermediary. OGPU can be traded on DEXes like Uniswap and TakoSwap.
Category: Network

Dynamic Pricing: Market-driven pricing where compute costs adjust based on supply and demand. Providers compete to offer competitive rates, and the routing system finds optimal price-performance matches.
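One way to picture it: a toy pricing function in Python that scales a base hourly rate by the demand/supply ratio (the bounds and formula are invented for illustration, not the network's actual pricing):

```python
def dynamic_price(base_rate, demand, supply):
    """Scale a base hourly rate by demand/supply, clamped to [0.5x, 3.0x]."""
    ratio = demand / max(supply, 1)          # avoid division by zero
    multiplier = min(max(ratio, 0.5), 3.0)   # keep prices within sane bounds
    return base_rate * multiplier

dynamic_price(1.00, demand=200, supply=100)  # returns 2.0
```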
Category: Token & Economics

Embeddings: Vector representations of data (text, images, etc.) in high-dimensional space. Embeddings enable semantic search, recommendation systems, and similarity matching in AI applications.
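Similarity matching over embeddings usually means cosine similarity; a dependency-free Python sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cosine_similarity([0.2, 0.8], [0.2, 0.8])  # ≈ 1.0 (identical embeddings)
```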
Category: AI & Compute

EVM Compatibility: Compatibility with the Ethereum Virtual Machine, meaning the OpenGPU blockchain supports Ethereum tooling, smart contracts written in Solidity, and integration with existing Web3 wallets and dApps.
Category: Blockchain

Failover: Automatic rerouting of a workload to another provider if the original provider fails mid-execution. This ensures task completion without restarting from zero, maintaining execution guarantees.
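In client terms, failover is just "try the next provider"; a Python sketch with providers modeled as callables (the retry policy is illustrative):

```python
def execute_with_failover(task, providers):
    """Run `task` on the first provider that succeeds, rerouting on failure."""
    failures = []
    for provider in providers:
        try:
            return provider(task)
        except Exception as exc:  # provider went offline or errored
            failures.append(str(exc))
    raise RuntimeError(f"all {len(providers)} providers failed: {failures}")

def flaky(task):
    raise ConnectionError("node offline")

def healthy(task):
    return f"done: {task}"

execute_with_failover("render-job", [flaky, healthy])  # returns "done: render-job"
```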
Category: Infrastructure

Fine-Tuning: Adapting a pre-trained AI model to perform better on specific tasks by training it on additional, domain-specific data. Fine-tuning requires less compute than training from scratch.
Category: AI & Compute

GPU: Graphics Processing Unit. A specialized processor originally designed for rendering graphics, now widely used for parallel computing tasks such as AI training, inference, and scientific simulations. GPUs are the core compute resource in the OpenGPU Network.
Category: Infrastructure

GPU Node: An individual GPU machine registered as a provider in the OpenGPU Network. Each node contributes its compute capacity to the decentralized marketplace and earns rewards for completing workloads.
Category: Infrastructure

Image Generation: Using AI diffusion models (like Stable Diffusion) to create images from text prompts or modify existing images. This GPU-intensive workload is a popular use case on the OpenGPU Network.
Category: AI & Compute

Intelligent Routing: OpenGPU's system that automatically matches workloads to the most suitable GPU providers based on hardware capabilities, latency, cost, and provider reputation. This ensures optimal execution efficiency.
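A toy version of such matching in Python: filter out providers that cannot fit the task, then rank the rest by a weighted score (the weights and fields are invented for illustration, not OpenGPU's actual formula):

```python
def route_task(required_vram_gb, providers):
    """Return the best-scoring provider that can fit the task, or None."""
    eligible = [p for p in providers if p["vram_gb"] >= required_vram_gb]
    if not eligible:
        return None

    def score(p):
        # Reward reputation; penalize price and latency (weights are arbitrary).
        return (0.5 * p["reputation"]
                - 0.3 * p["price_per_hr"]
                - 0.2 * p["latency_ms"] / 100)

    return max(eligible, key=score)

providers = [
    {"name": "A", "vram_gb": 24, "reputation": 0.9, "price_per_hr": 1.0, "latency_ms": 50},
    {"name": "B", "vram_gb": 24, "reputation": 0.8, "price_per_hr": 0.2, "latency_ms": 20},
    {"name": "C", "vram_gb": 8,  "reputation": 1.0, "price_per_hr": 0.1, "latency_ms": 10},
]
route_task(16, providers)["name"]  # "B": cheap, fast, and large enough
```

Note that "C" scores best but is excluded up front because its 8 GB of VRAM cannot hold a 16 GB task; capability filtering happens before price-performance ranking.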
Category: Infrastructure

DAG Consensus: The Directed Acyclic Graph-based consensus mechanism used by the OpenGPU blockchain. It enables fast finality and high throughput (~10,000 TPS) while maintaining decentralization.
Category: Blockchain

Latency: The time delay between a request and its response. Low latency is critical for real-time AI inference, gaming, and interactive applications. OpenGPU's routing minimizes latency by matching tasks to nearby providers.
Category: Performance

LLM: Large Language Model. An AI model trained on vast amounts of text data that can understand and generate human language. Examples include GPT, LLaMA, and Mistral. LLM inference is a primary OpenGPU workload.
Category: AI & Compute

OGPU: The native utility token of the OpenGPU Network. OGPU is used for paying compute costs, rewarding providers, staking, and governance. It operates on the OpenGPU blockchain as an ORC-20 token.
Category: Token & Economics

OpenGPU Blockchain: The underlying EVM-compatible blockchain that serves as the settlement layer for the OpenGPU Network. It records task completions, payments, and provider reputation on-chain. Mainnet Chain ID: 1071.
Category: Blockchain

ORC-20: The native token standard on the OpenGPU blockchain, similar to ERC-20 on Ethereum. OGPU and other tokens on the network follow this standard for compatibility with wallets and dApps.
Category: Token & Economics

Proof of Compute: An on-chain verification mechanism that cryptographically proves a workload was completed correctly by a provider before payment is released. It ensures trust in decentralized compute.
Category: Blockchain

Provider: A GPU owner who contributes compute capacity to the OpenGPU Network. Providers can be datacenters, cloud operators, GPU farms, or individuals with personal rigs. They earn OGPU tokens for completing tasks.
Category: Infrastructure

Reputation Score: A score tracked on-chain that reflects a provider's historical accuracy, reliability, and uptime. Higher reputation leads to more workload assignments and better rewards.
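One common way to maintain such a score is an exponential moving average over task outcomes; a Python sketch (the update rule and smoothing factor are illustrative, not the on-chain formula):

```python
def update_reputation(current, task_succeeded, alpha=0.1):
    """Nudge the score toward 1.0 on success and toward 0.0 on failure."""
    outcome = 1.0 if task_succeeded else 0.0
    return (1 - alpha) * current + alpha * outcome

score = 0.50
score = update_reputation(score, True)   # ≈ 0.55
score = update_reputation(score, False)  # ≈ 0.495
```

An EMA weights recent behavior most heavily, so a provider that starts failing tasks sees its score fall quickly, while a long history of reliability is not erased by a single outage.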
Category: Token & Economics

RAG: Retrieval-Augmented Generation. An AI architecture that combines a language model with external knowledge retrieval to provide more accurate and up-to-date responses. Used for knowledge-intensive applications.
Category: AI & Compute

Relay: An HTTPS endpoint for submitting workloads to the OpenGPU Network with fiat billing support. The Relay simplifies integration for enterprises that don't want to interact directly with the blockchain.
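Shape-wise, a relay submission is an ordinary HTTPS POST; a Python sketch that only builds the JSON body (the field names and billing options here are hypothetical, not the documented Relay API):

```python
import json

def build_relay_request(model, prompt, max_price_usd):
    """Assemble a JSON payload for an HTTPS workload submission (fields hypothetical)."""
    payload = {
        "workload": {"type": "inference", "model": model, "input": prompt},
        "billing": {"currency": "USD", "max_price": max_price_usd},
    }
    return json.dumps(payload)

body = build_relay_request("llama-3-8b", "Hello!", max_price_usd=0.05)
```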
Category: Integration

Smart Contract: Self-executing code deployed on the blockchain that automatically enforces agreements between parties. In OpenGPU, smart contracts manage task assignments, payments, and reputation updates.
Category: Blockchain

Source: A containerized execution model or template published to the OpenGPU Network. Sources are reusable workload definitions that other users can reference when submitting tasks.
Category: Integration

Staking: The process of locking OGPU tokens to participate in the network. Providers stake tokens to increase their reputation score and demonstrate commitment, which improves their chances of receiving workloads.
Category: Token & Economics

Testnet: A testing blockchain network that mirrors the mainnet but uses test tokens (ToGPU) with no real value. OpenGPU Testnet (Chain ID: 200820172034) is used for development and testing.
Category: Blockchain

Throughput: The amount of work processed per unit time. In GPU computing, this can refer to tokens per second for LLMs, frames per second for rendering, or transactions per second for blockchain.
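The measurement itself is simple division; a small Python helper:

```python
def throughput(units_completed, elapsed_seconds):
    """Work per second: tokens/s for an LLM, frames/s for rendering, etc."""
    return units_completed / elapsed_seconds

throughput(512, 4.0)  # returns 128.0 (e.g. tokens per second)
```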
Category: Performance

Time to First Token (TTFT): The latency between submitting a prompt to an LLM and receiving the first output token. Lower TTFT means faster response initiation, critical for interactive chatbot experiences.
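TTFT can be measured by timing the first item out of a streaming generator; a Python sketch with a fake model standing in for a real LLM:

```python
import time

def time_to_first_token(token_stream):
    """Return (first_token, seconds elapsed until it arrived)."""
    start = time.perf_counter()
    first = next(token_stream)
    return first, time.perf_counter() - start

def fake_llm():
    time.sleep(0.05)  # simulated prefill work before the first token
    yield "Hello"
    yield ", world"

token, ttft = time_to_first_token(fake_llm())  # ttft ≈ 0.05 s
```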
Category: Performance

Transcoding: Converting video files from one format, resolution, or bitrate to another. GPU-accelerated transcoding is significantly faster than CPU-based processing.
Category: AI & Compute

VRAM: Video Random Access Memory. The dedicated memory on a GPU used to store model weights and intermediate computations. VRAM capacity determines which AI models a GPU can run.
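A common back-of-envelope check: weights alone need parameters times bytes-per-parameter, plus headroom for activations. A Python sketch (the 20% overhead factor is a rough assumption, not a guarantee):

```python
def estimate_vram_gb(params_billion, bytes_per_param=2, overhead=1.2):
    """Rough VRAM (GB) to serve a model: fp16 weights (2 bytes/param) + ~20% headroom."""
    return params_billion * bytes_per_param * overhead

estimate_vram_gb(7)  # ≈ 16.8 GB for a 7B-parameter model in fp16
```

By this estimate a 7B model in fp16 does not fit a 16 GB card, but would after 8-bit quantization (`bytes_per_param=1`, ≈ 8.4 GB).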
Category: AI & Compute

Web3: The next evolution of the internet built on blockchain technology, emphasizing decentralization, user ownership, and permissionless access. OpenGPU integrates with Web3 wallets and protocols.
Category: Network

Workload: A compute task submitted to the OpenGPU Network for execution. Workloads can include AI inference, model training, image generation, video transcoding, 3D rendering, and more.
Category: Infrastructure