Tether Unveils AI System to Run Large Models on Smartphones
Summary
Tether, the issuer of USDT, has announced a new AI training framework, part of its QVAC platform, designed to enable fine-tuning of large language models (LLMs) on consumer hardware, including smartphones and non-Nvidia GPUs. The system combines Microsoft's 1-bit BitNet architecture with LoRA (low-rank adaptation) to sharply reduce memory and compute requirements, potentially lowering the barrier to AI development. It supports cross-platform training on AMD, Intel, and Apple Silicon chips, as well as mobile GPUs from Qualcomm and Apple.

Tether engineers fine-tuned models of up to 1 billion parameters on smartphones in under two hours, and the framework supports models of up to 13 billion parameters. Thanks to the 1-bit BitNet weights, it can cut VRAM requirements by up to 77.8% compared with 16-bit models. The launch fits a broader trend of crypto companies expanding into AI infrastructure, enabling use cases such as on-device training and federated learning that reduce reliance on centralized cloud services.
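To give a sense of scale, the back-of-the-envelope arithmetic below sketches why these two techniques matter: 1-bit weights shrink the memory needed just to hold the model, and LoRA shrinks the number of parameters that must be trained. All concrete values (hidden size, layer count, LoRA rank, 1.58 bits per weight for the ternary BitNet b1.58 variant) are illustrative assumptions, not details from Tether's framework; the reported 77.8% VRAM figure presumably measures the full training footprint (optimizer states, activations), so weights-only arithmetic yields a different percentage.

```python
# Illustrative arithmetic only -- not Tether's code. Model dimensions,
# LoRA rank, and bits-per-weight below are assumptions for the example.

def weight_memory_gb(n_params: int, bits_per_weight: float) -> float:
    """Memory needed to hold the weights alone, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

def lora_trainable_params(d_model: int, n_layers: int, rank: int,
                          matrices_per_layer: int = 4) -> int:
    """LoRA adds two low-rank factors (d x r and r x d) per adapted matrix."""
    return n_layers * matrices_per_layer * 2 * d_model * rank

one_b = 1_000_000_000  # 1B-parameter model, as in the on-phone experiment

fp16 = weight_memory_gb(one_b, 16)      # 16-bit baseline: 2.0 GB of weights
bitnet = weight_memory_gb(one_b, 1.58)  # BitNet b1.58: ~1.58 bits per weight

print(f"16-bit weights:  {fp16:.2f} GB")
print(f"BitNet weights:  {bitnet:.2f} GB")
print(f"weight-memory reduction: {100 * (1 - bitnet / fp16):.1f}%")

# LoRA: train only small low-rank adapters instead of all 1B weights.
trainable = lora_trainable_params(d_model=2048, n_layers=24, rank=8)
print(f"LoRA trainable params: {trainable:,} "
      f"({100 * trainable / one_b:.2f}% of the full model)")
```

The combination is what makes phone-scale fine-tuning plausible: the frozen base model fits in a fraction of the memory, and gradients and optimizer state are only kept for the tiny adapter matrices.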
(Source: Cointelegraph)