Fluence Launches Global and Affordable GPU Compute for AI
Provided by PR DESK
The Block is hosting this content as a convenience for our readers. The Block does not necessarily endorse any of the statements made within the following announcement.


Zurich, Switzerland – October 3, 2025 – Fluence, the enterprise-grade, low-cost cloudless computing platform, announced the launch of GPU compute for AI workloads at substantially lower cost than centralized cloud providers. GPU containers are available immediately through the Fluence Platform, with GPU virtual machines and bare metal support planned for the coming weeks. This launch is supported by a partnership with Spheron Network as one of the key compute providers for Fluence.
Addressing AI’s Compute Bottleneck
AI projects and companies face rising compute costs and hidden fees from hyperscalers, forcing teams into long-term, rigid pricing structures. Fluence is addressing customer demand for open, low-cost, short-term GPU access by expanding its offering from CPU-based virtual servers into GPUs, giving customers direct access to high-performance hardware at up to 85% lower cost than the large clouds. The addition of GPUs builds on Fluence's expertise in offering CPUs and adds a key product that allows Fluence to address the growing AI ecosystem.
Fluence’s CPU marketplace currently generates over $1 million in ARR with a pipeline exceeding $8 million in the billion-dollar third-party node provider market. Customers have saved $3.5 million through Fluence compared with centralized clouds.
Fluence’s decentralized infrastructure supports thousands of active blockchain nodes, and customers include Antier — ranked among the world’s largest blockchain service providers — as well as NEO, RapidNode, Zeeve, dKloud, AR.IO, Tashi, and Nodes Garden.
Fluence’s Vision 2026 calls for scaling enterprise-grade decentralized compute and building a global GPU-powered marketplace to support a wide range of features requested by customers. The partnership with Spheron expands the Fluence provider network, already including Kabat, Piknik, and other top data center facilities.
“Meeting the exponentially growing demand for AI requires cost-efficient access to enterprise-grade GPUs. By expanding our network using Spheron’s decentralized GPUs, we give developers that access immediately, making our platform the go-to choice for serious AI builders scaling to the next level,” said Evgeny Ponomarev, Co-Founder of Fluence.
“Access to GPUs has been gated by scarcity and cost. Partnering with Fluence removes those barriers, giving AI teams dependable, decentralized compute power to move faster from research to deployment,” added Prashant Maurya, Co-Founder of Spheron Network.
GPU Containers Live Today, VMs and Bare Metal Coming Next
GPU containers are live now on the Fluence Console, optimized for fine-grained AI workloads. Support for GPU VMs and bare metal will follow in the coming weeks, expanding options for AI projects and companies seeking decentralized, enterprise-grade performance. Developers can start deploying today at fluence.network/gpu and review documentation at fluence.dev/docs.
Fluence’s entry into GPUs marks a clear, decisive step for DePIN: affordable, enterprise-grade compute delivered through a decentralized marketplace so builders can move faster with fewer constraints.
About Fluence Network
Fluence is a DePIN cloudless computing platform that delivers resilient, enterprise-grade compute at lower cost than centralized clouds. The network aggregates capacity from top-tier, enterprise-grade data centers worldwide, giving builders open access to the resources they need for AI, Web3, and general-purpose applications. Fluence is governed by the Fluence DAO, and its native token FLT powers governance, staking, and coordination across the network.
Press Contact
Nadia Venzhina [email protected]