Glossary
A
AI Inference
The process of running a pre-trained AI model to generate predictions or outputs from new input data. Inference is the most common GPU workload type, used for chatbots, image generation, and more.
AI & Compute
AI Training
The process of developing an AI model from scratch or fine-tuning an existing one using large datasets. Training requires significant GPU resources and is a key use case for the OpenGPU Network.
AI & Compute
B
Batch Processing
Handling multiple compute tasks together to maximize GPU utilization and throughput. Batch processing is more efficient than processing tasks one at a time.
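As a minimal sketch (the helper and batch size here are illustrative, not an OpenGPU API), batching simply groups queued tasks into fixed-size chunks so each GPU call processes several tasks at once:

```python
def batches(tasks, size):
    """Yield successive fixed-size batches from a task queue."""
    for i in range(0, len(tasks), size):
        yield tasks[i:i + size]

# Eight queued prompts become three GPU dispatches instead of eight.
jobs = [f"prompt-{n}" for n in range(8)]
grouped = list(batches(jobs, size=3))
```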
AI & Compute
Bridge
A cross-chain protocol that enables moving tokens between different blockchain networks. BridgeX allows users to transfer OGPU tokens between OpenGPU, Ethereum, and other supported chains.
Network
C
CEX
Centralized Exchange. A traditional exchange platform operated by a company that facilitates token trading. OGPU is available on CEXes like MEXC and Gate.io.
Network
Checkpoint
A progress save point for long-running jobs. If a provider goes offline, execution can resume from the last checkpoint on a different node rather than starting over.
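The idea can be sketched as follows; the checkpoint file name, JSON format, and per-step granularity are assumptions for illustration, not OpenGPU's actual mechanism:

```python
import json
import os

CKPT = "job_checkpoint.json"  # hypothetical checkpoint file

def run_job(total_steps, work_fn):
    """Run `work_fn` for each step, persisting progress after every step.

    If a checkpoint exists (e.g. after a provider went offline), execution
    resumes from the saved step instead of restarting from zero."""
    start = 0
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            start = json.load(f)["step"]
    for step in range(start, total_steps):
        work_fn(step)
        with open(CKPT, "w") as f:
            json.dump({"step": step + 1}, f)
```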
Infrastructure
Computation Consensus
A validation process where multiple providers independently execute the same task and their results are compared. This ensures correctness for critical workloads.
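A simplified sketch of the comparison step (a strict majority vote; the real validation logic is not specified here):

```python
from collections import Counter

def consensus(results):
    """Accept a result only if a strict majority of providers agree.

    `results` holds the outputs of providers that independently ran
    the same task. Illustrative sketch, not OpenGPU's actual protocol."""
    value, count = Counter(results).most_common(1)[0]
    if count * 2 > len(results):
        return value
    raise ValueError("no majority; re-dispatch the task")
```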
Blockchain
Container
An isolated execution environment used to run workloads securely on provider hardware. Containers ensure that each task runs independently without interfering with other processes.
Infrastructure
D
DEX
Decentralized Exchange. A peer-to-peer marketplace for trading tokens without a central intermediary. OGPU can be traded on DEXes like Uniswap and TakoSwap.
Network
Decentralized Computing
A distributed computing model where compute resources are provided by a network of independent operators rather than a single centralized cloud provider. This reduces costs, improves resilience, and democratizes access to GPU power.
Network
Decentralized Physical Infrastructure (DePIN)
A blockchain-based approach to building physical infrastructure networks through decentralized incentives. OpenGPU is a DePIN project that incentivizes GPU owners to contribute compute resources.
Network
Dynamic Pricing
Market-driven pricing where compute costs adjust based on supply and demand. Providers compete to offer competitive rates, and the routing system finds optimal price-performance matches.
Token & Economics
dApp
Decentralized Application. A software application that runs on a blockchain network rather than centralized servers. OpenGPU's management dApp allows providers to track rewards and monitor node performance.
Integration
E
EVM Compatible
Compatibility with the Ethereum Virtual Machine, meaning the OpenGPU blockchain supports Ethereum tooling, smart contracts written in Solidity, and integration with existing Web3 wallets and dApps.
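For example, adding the chain to an EVM wallet uses the standard network parameters. Only the mainnet chain ID (1071) comes from this glossary; the chain name, currency decimals, and RPC URL below are illustrative placeholders to be replaced with official values:

```python
# Only the chain ID (1071) is from this glossary; the remaining fields
# are illustrative placeholders, not official values.
OPENGPU_MAINNET = {
    "chainId": 1071,
    "chainName": "OpenGPU",
    "nativeCurrency": {"name": "OGPU", "symbol": "OGPU", "decimals": 18},
    "rpcUrls": ["https://<official-rpc-endpoint>"],  # placeholder
}
```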
Blockchain
Embeddings
Vector representations of data (text, images, etc.) in high-dimensional space. Embeddings enable semantic search, recommendation systems, and similarity matching in AI applications.
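Similarity matching between embeddings is typically done with cosine similarity, which compares the direction of two vectors regardless of their length:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means same direction,
    0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm
```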
AI & Compute
F
Failover
Automatic rerouting of a workload to another provider if the original provider fails mid-execution. This ensures task completion without restarting from zero, maintaining execution guarantees.
Infrastructure
Fine-tuning
Adapting a pre-trained AI model to perform better on specific tasks by training it on additional, domain-specific data. Fine-tuning requires less compute than training from scratch.
AI & Compute
G
GPU
Graphics Processing Unit. A specialized processor originally designed for rendering graphics, now widely used for parallel computing tasks such as AI training, inference, and scientific simulations. GPUs are the core compute resource in the OpenGPU Network.
Infrastructure
GPU Node
An individual GPU machine registered as a provider in the OpenGPU Network. Each node contributes its compute capacity to the decentralized marketplace and earns rewards for completing workloads.
Infrastructure
I
Image Generation
Using AI diffusion models (like Stable Diffusion) to create images from text prompts or modify existing images. This GPU-intensive workload is a popular use case on the OpenGPU Network.
AI & Compute
Intelligent Routing
OpenGPU's system that automatically matches workloads to the most suitable GPU providers based on hardware capabilities, latency, cost, and provider reputation. This ensures optimal execution efficiency.
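The shape of such a matcher can be sketched as a filter plus a weighted score; the field names and weights below are assumptions for illustration, not OpenGPU's actual scoring:

```python
def route(task, providers):
    """Illustrative routing: filter by hardware fit, then rank by a
    weighted blend of reputation, latency, and price."""
    eligible = [p for p in providers if p["vram_gb"] >= task["min_vram_gb"]]

    def score(p):
        # Higher reputation is better; latency and price count against.
        return (2.0 * p["reputation"]
                - 0.01 * p["latency_ms"]
                - 1.0 * p["price_per_hour"])

    return max(eligible, key=score)
```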
Infrastructure
L
LLM
Large Language Model. An AI model trained on vast amounts of text data that can understand and generate human language. Examples include GPT, LLaMA, and Mistral. LLM inference is a primary OpenGPU workload.
AI & Compute
Lachesis DAG
The Directed Acyclic Graph-based consensus mechanism used by the OpenGPU blockchain. It enables fast finality and high throughput (~10,000 TPS) while maintaining decentralization.
Blockchain
Latency
The time delay between a request and its response. Low latency is critical for real-time AI inference, gaming, and interactive applications. OpenGPU's routing minimizes latency by matching tasks to nearby providers.
Performance
O
OGPU Token
The native utility token of the OpenGPU Network. OGPU is used for paying compute costs, rewarding providers, staking, and governance. It operates on the OpenGPU blockchain as an ORC-20 token.
Token & Economics
ORC-20
The native token standard on the OpenGPU blockchain, similar to ERC-20 on Ethereum. OGPU and other tokens on the network follow this standard for compatibility with wallets and dApps.
Token & Economics
OpenGPU Blockchain
The underlying EVM-compatible blockchain that serves as the settlement layer for the OpenGPU Network. It records task completions, payments, and provider reputation on-chain. Mainnet Chain ID: 1071.
Blockchain
P
Proof of Execution
An on-chain verification mechanism that cryptographically proves a workload was completed correctly by a provider before payment is released. It ensures trust in decentralized compute.
Blockchain
Provider
A GPU owner who contributes compute capacity to the OpenGPU Network. Providers can be datacenters, cloud operators, GPU farms, or individuals with personal rigs. They earn OGPU tokens for completing tasks.
Infrastructure
Provider Reputation
A score tracked on-chain that reflects a provider's historical accuracy, reliability, and uptime. Higher reputation leads to more workload assignments and better rewards.
Token & Economics
R
RAG
Retrieval-Augmented Generation. An AI architecture that combines a language model with external knowledge retrieval to provide more accurate and up-to-date responses. Used for knowledge-intensive applications.
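The retrieve-then-generate flow can be sketched as below; `embed` and `generate` stand in for caller-supplied model functions, and the prompt format is an assumption:

```python
def rag_answer(question, documents, embed, generate, k=2):
    """Minimal RAG sketch: rank documents by embedding similarity to the
    question, then pass the top-k as context to the language model."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    q = embed(question)
    ranked = sorted(documents, key=lambda d: dot(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return generate(f"Context:\n{context}\n\nQuestion: {question}")
```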
AI & Compute
Relay
An HTTPS endpoint for submitting workloads to the OpenGPU Network with fiat billing support. The Relay simplifies integration for enterprises that don't want to interact directly with the blockchain.
Integration
S
Smart Contract
Self-executing code deployed on the blockchain that automatically enforces agreements between parties. In OpenGPU, smart contracts manage task assignments, payments, and reputation updates.
Blockchain
Source Publication
Publishing a containerized execution model or template to the OpenGPU Network. Sources are reusable workload definitions that other users can reference when submitting tasks.
Integration
Staking
The process of locking OGPU tokens to participate in the network. Providers stake tokens to increase their reputation score and demonstrate commitment, which improves their chances of receiving workloads.
Token & Economics
T
Testnet
A testing blockchain network that mirrors the mainnet but uses test tokens (ToGPU) with no real value. OpenGPU Testnet (Chain ID: 200820172034) is used for development and testing.
Blockchain
Throughput
The amount of work processed per unit time. In GPU computing, this can refer to tokens per second for LLMs, frames per second for rendering, or transactions per second for blockchain.
Performance
Time to First Token (TTFT)
The latency between submitting a prompt to an LLM and receiving the first output token. Lower TTFT means faster response initiation, critical for interactive chatbot experiences.
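TTFT is straightforward to measure against any streaming response; this sketch works with any iterator that yields tokens as they arrive:

```python
import time

def first_token_latency(token_stream):
    """Return (first_token, seconds_until_first_token) for a streaming
    LLM response. `token_stream` is any iterator yielding tokens."""
    start = time.perf_counter()
    first = next(token_stream)
    return first, time.perf_counter() - start
```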
Performance
V
VRAM
Video Random Access Memory. The dedicated memory on a GPU used to store model weights and intermediate computations. VRAM capacity determines which AI models a GPU can run.
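A common rule of thumb (an approximation, not an OpenGPU formula): the weights alone need roughly the parameter count times the bytes per parameter, and real usage is higher once activations and KV cache are added:

```python
def approx_weights_gb(params_billion, bytes_per_param=2):
    """Rough VRAM needed just to hold model weights, in GB.

    fp16/bf16 uses 2 bytes per parameter, fp32 uses 4. Actual usage is
    higher: activations and KV cache add to this."""
    return params_billion * bytes_per_param

# A 7B-parameter model in fp16 needs roughly 14 GB for weights alone.
```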
AI & Compute
Video Transcoding
Converting video files from one format, resolution, or bitrate to another. GPU-accelerated transcoding is significantly faster than CPU-based processing.
AI & Compute
W
Web3
The next evolution of the internet built on blockchain technology, emphasizing decentralization, user ownership, and permissionless access. OpenGPU integrates with Web3 wallets and protocols.
Network
Workload
A compute task submitted to the OpenGPU Network for execution. Workloads can include AI inference, model training, image generation, video transcoding, 3D rendering, and more.
Infrastructure