0G Labs Trained World’s Largest Decentralized AI Model at 107B Parameters in 2025 – Eight Months Before This Week’s Industry Headlines
Company sets record straight: DiLoCoX-107B is 48% larger than Bittensor’s celebrated Covenant-72B and was trained on standard internet infrastructure in July 2025
San Francisco, CA, March 24, 2026 (GLOBE NEWSWIRE) — While the crypto industry celebrated Bittensor’s Covenant-72B this week as a breakthrough in decentralized AI training, 0G Labs had already trained a 107 billion parameter model eight months earlier – 48% larger than Bittensor’s achievement and the largest decentralized AI model on record.

DiLoCoX-107B was trained in July 2025 using technology developed in collaboration with China Mobile, the world’s largest mobile network operator. The peer-reviewed research, published on arXiv, demonstrated 357x greater communication efficiency than standard AllReduce methods across ordinary 1 Gbps internet connections – proving that frontier AI training doesn’t require billion-dollar data center infrastructure.
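To see why less-frequent, compressed synchronization matters so much on 1 Gbps links, here is a back-of-the-envelope comparison of per-node communication volume. All figures below (sync interval, compression ratio, precision) are illustrative assumptions, not numbers from the DiLoCoX paper; the headline 357x figure depends on the paper's actual schedule and compression.

```python
# Illustrative communication-volume comparison: naive AllReduce every step
# vs a DiLoCoX-style scheme that syncs only every H local steps and
# compresses the transmitted update. Values are hypothetical placeholders.

PARAMS = 107e9          # model size: 107B parameters
BYTES_PER_PARAM = 2     # fp16 gradients (assumed)
STEPS = 1000            # optimizer steps considered

# Baseline: AllReduce exchanges full gradients on every step.
allreduce_bytes = STEPS * PARAMS * BYTES_PER_PARAM

# DiLoCoX-style: sync every H steps, sending a compressed pseudo-gradient.
H = 64                  # local steps between syncs (assumed)
COMPRESSION = 16        # compression ratio on the transmitted update (assumed)
diloco_bytes = (STEPS / H) * PARAMS * BYTES_PER_PARAM / COMPRESSION

print(f"AllReduce : {allreduce_bytes / 1e12:.1f} TB per node")
print(f"DiLoCoX   : {diloco_bytes / 1e12:.2f} TB per node")
print(f"Reduction : {allreduce_bytes / diloco_bytes:.0f}x")
```

The reduction factor is simply H times the compression ratio, which is why infrequent, compressed synchronization can shrink traffic by two to three orders of magnitude.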
“We proved decentralized infrastructure can train a 107 billion parameter model in 2025, before anyone else,” said Ming Wu, CTO of 0G Labs. “This week’s headlines celebrating 72 billion parameters as a milestone missed that we’d already operated at significantly larger scale. The record is clear: 0G set the benchmark for decentralized AI training, and we did it on standard consumer bandwidth.”
The original DiLoCoX-107B training demonstrated that distributed infrastructure could rival centralized approaches at frontier scale. Where OpenAI, Google, and Meta spend billions on GPU clusters, 0G's approach achieves an approximately 95% cost reduction, according to Forbes, by training across distributed nodes connected at the same internet speeds available to consumers.
Bittensor’s Covenant-72B, trained on Bittensor’s Subnet 3 by 70 contributors, represents important progress for the decentralized AI ecosystem. But the technical reality is that 0G had already proven larger-scale training was possible, with peer-reviewed research to verify the achievement.
Building on the Record: Public Retrain and Open-Source Commitment
Today, 0G Labs announces it has begun publicly retraining DiLoCoX-107B with full transparency and a commitment to open-source release. This forward-looking initiative will set a new standard for verifiable AI development.
The retrained DiLoCoX-107B will be:
- Fully open-source upon completion, with all weights, checkpoints, and benchmarks publicly released
- Verifiable from training to inference, with complete transparency on data provenance, convergence metrics, and TEE-backed verification via zerogAuth
- Documented throughout the training process, with all methodology and results made publicly available
- Designed for distributed training across standard network conditions
“The industry is finally paying attention to decentralized AI,” said a 0G Labs spokesperson. “NVIDIA CEO Jensen Huang recently told the All-In Podcast that distributed, open-source AI training is complementary to centralized approaches and will play a growing role in frontier model development. Bittensor’s work demonstrates real community interest. But the foundation was already laid. 0G trained the largest decentralized model in 2025, and now we’re making it fully open and verifiable for the world to build on.”
Full-Stack Infrastructure for Verifiable AI
Unlike models trained solely for research demonstration, DiLoCoX-107B runs on 0G’s complete blockchain for AI agents – a production-ready stack encompassing an EVM-compatible L1 blockchain, decentralized compute, distributed storage (up to 2 GB/s throughput), and a data availability layer that is 50,000x faster and 100x cheaper than Ethereum DA.
This matters because AI agents need more than training alone: they need verified training, verified inference, verified storage, and onchain settlement. 0G provides the full stack.
Key technical innovations powering DiLoCoX-107B:
- Pipeline Parallelism: Model split across nodes with computation overlapping communication
- Dual Optimizer Policy: Local updates aligned with global model objectives
- One-Step-Delay Overlap: Continuous training without full node synchronization
- Adaptive Gradient Compression: Smaller transmitted updates while preserving accuracy
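The dual-optimizer and compression ideas above follow the publicly described DiLoCo family of methods: each node runs several local optimizer steps, then a compressed delta between its local weights and the last global weights is averaged across nodes and applied by an outer optimizer. The sketch below illustrates that pattern on a toy problem; all function and variable names are illustrative, not 0G's actual API, and the outer optimizer is simplified to plain averaging.

```python
# Minimal sketch of a DiLoCoX-style dual-optimizer training loop.
# Assumptions (not from the paper): top-k sparsification stands in for
# adaptive gradient compression, and the outer optimizer is plain
# averaging rather than momentum-based.
import numpy as np

rng = np.random.default_rng(0)

def top_k_compress(delta, k_frac=0.5):
    """Keep only the largest-magnitude entries of the update
    (a simple stand-in for adaptive gradient compression)."""
    k = max(1, int(delta.size * k_frac))
    idx = np.argpartition(np.abs(delta), -k)[-k:]
    sparse = np.zeros_like(delta)
    sparse[idx] = delta[idx]
    return sparse

def local_steps(weights, H=4, lr=0.1):
    """Inner optimizer: H local SGD steps on a toy quadratic loss
    L = 0.5 * (w - 1)^2, with a little gradient noise per node."""
    w = weights.copy()
    for _ in range(H):
        grad = w - 1.0 + rng.normal(scale=0.01, size=w.shape)
        w -= lr * grad
    return w

global_w = np.zeros(8)
NODES = 4
for _round in range(10):
    # Each node trains locally from the same global weights, then
    # transmits only a compressed pseudo-gradient (local - global).
    deltas = [top_k_compress(local_steps(global_w) - global_w)
              for _ in range(NODES)]
    # Outer optimizer: average the compressed updates and apply them.
    global_w += np.mean(deltas, axis=0)

print("mean weight after 10 rounds:", global_w.mean())
```

Despite each node transmitting only half of its update entries per round, the averaged weights still converge toward the optimum (w = 1), which is the core intuition behind training across slow links without full synchronization.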
The retraining process is now underway. All training data, methodology, and results will be publicly documented throughout, setting a new standard for transparency in AI development. Upon completion, all weights and checkpoints will be released under open-source licensing.
“This isn’t about breaking records,” said Ming Wu. “It’s about building AI as a public good. We set the record in 2025. Now we’re opening it up for everyone to verify, use, and build on. That’s what decentralized AI should mean.”
About 0G Labs
0G Labs is the creator of the blockchain for AI agents and one of the best-funded AI infrastructure projects in Web3 with $40 million in seed funding and a $250 million token commitment from investors including Hack VC, Delphi Digital, OKX Ventures, Samsung Next, and Bankless Ventures. 0G’s Aristotle Mainnet, launched in September 2025, powers a full-stack AI infrastructure including an EVM-compatible L1 chain, decentralized compute, distributed storage capable of up to 2 GB per second, and a data availability layer that is 50,000x faster and 100x cheaper than Ethereum DA. The company has 100+ launch partners including Chainlink, Google Cloud, and Alibaba Cloud. The $0G token is listed on Binance, OKX, Bybit, and Gate.io.
About 0G Foundation
The 0G Foundation drives innovation and growth within the 0G ecosystem, maintaining the blockchain for AI agents fueled by $0G. The Foundation supports ecosystem development, funds grants, and enables community governance to advance the mission of making AI a public good.

107 billion parameters. July 2025. Standard consumer bandwidth.
Press Inquiries
[email protected]
https://0g.ai
