0G Retrains 107B Model in Public as Decentralized AI Enters a New Phase
Summary
0G is publicly retraining DiLoCoX-107B, its 107-billion-parameter model, which reportedly achieved 357 times better communication efficiency than traditional training methods but drew little attention when first announced in mid-2025.
The public retraining effort aims to document every stage, including checkpoints and data sourcing, with verification via Trusted Execution Environments (zerogAuth). The model weights will ultimately be open-sourced to prove that decentralized AI can be audited and reproduced.
0G argues that a model's value lies in the full system, highlighting DiLoCoX's communication-efficiency techniques and its ability to train over standard 1 Gbps connections, challenging the assumption that frontier training requires specialized networking. 0G further claims its system cuts costs by about 95% compared to centralized alternatives, shifting the focus from parameter counts to verifiable output and accessibility.
(Source: BeInCrypto)