TGIF #26: Covenant AI Weekly Roundup

SUMMARY: Covenant Labs shared a weekly update covering progress across Templar (Subnet 3), Grail (Subnet 81), and Basilica (Subnet 39), highlighting advances in decentralized AI training and inference on Bittensor. Templar is nearing completion of Covenant72B post-training, with a focus on improving GPU efficiency and expanding compute through heterogeneous clusters rather than competing directly…

Covenant AI’s TGIF Recap: Templar Pushes Bigger Training, While Grail Breaks the RL Bandwidth Wall

Covenant AI’s latest TGIF session covered major progress across its three core efforts: Templar, Basilica, and Grail. The session’s overall takeaway was that decentralized AI training is moving faster than most people expected, and the team is actively redesigning incentive systems to keep miners innovating. Watch the full episode here. Templar: Bigger Model Training, But…

Covenant AI’s PULSE: Making Decentralized RL for LLMs as Fast as Centralized Training

Covenant AI just released PULSE (Patch Updates via Lossless Sparse Encoding), a technique that slashes the bandwidth needed for weight synchronization in decentralized reinforcement learning (RL) by more than 100×, while staying completely lossless (bit-identical reconstruction, SHA-256-verified on every sync). The biggest bottleneck in decentralized RL: training can happen on fast interconnects, but inference nodes are spread globally…
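
Since the post is only excerpted here, the following is a minimal Python sketch of the idea as described, assuming a "patch" is just the set of weight entries whose raw bits changed plus a SHA-256 digest of the target weights; the function names and patch layout are illustrative assumptions, not PULSE's actual encoding.

```python
import hashlib
import numpy as np

def encode_patch(old: np.ndarray, new: np.ndarray) -> dict:
    """Encode only the entries whose raw bits changed between weight versions."""
    assert old.shape == new.shape and old.dtype == new.dtype  # float32 assumed
    # Compare raw bits rather than float values, so NaNs and signed zeros are
    # handled and reconstruction is exactly bit-identical.
    changed = np.flatnonzero(old.view(np.uint32) != new.view(np.uint32))
    return {
        "idx": changed,                                    # changed positions
        "val": new.ravel()[changed],                       # new values there
        "sha": hashlib.sha256(new.tobytes()).hexdigest(),  # integrity digest
    }

def apply_patch(old: np.ndarray, patch: dict) -> np.ndarray:
    """Rebuild the new weights from a patch and verify the SHA-256 digest."""
    new = old.copy()
    new.ravel()[patch["idx"]] = patch["val"]
    if hashlib.sha256(new.tobytes()).hexdigest() != patch["sha"]:
        raise ValueError("reconstruction mismatch: request a full resync")
    return new

# Toy sync: an update that touches 0.5% of a float32 tensor yields a patch
# roughly two orders of magnitude smaller than shipping the full weights.
rng = np.random.default_rng(0)
w_old = rng.standard_normal(1_000_000).astype(np.float32)
w_new = w_old.copy()
touched = rng.choice(w_new.size, size=5_000, replace=False)
w_new[touched] += 0.01
w_rec = apply_patch(w_old, encode_patch(w_old, w_new))
assert np.array_equal(w_rec.view(np.uint32), w_new.view(np.uint32))
```

When successive policy versions differ in only a small fraction of entries, sending just those indices and values is where a 100×-class bandwidth saving can come from, and the digest check means a globally distributed inference node either reconstructs the exact weights or knows it must resync.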

Templar (SN3) Completes Pre-Training of Covenant72B, the Largest Fully Decentralized LLM to Date

Templar, Bittensor Subnet 3, has completed pre-training for Covenant72B, a 72-billion-parameter language model. This is the largest frontier-scale model ever trained in a fully permissionless and decentralized setting. The run was coordinated across a global network of independent GPUs with no central datacenter, no single owner, and no gatekeeping. Pre-training processed roughly 1.2 trillion tokens,…

TGIF #20: Democratizing AI Training with Heterogeneous SparseLoCo

SUMMARY: On TGIF #20, Covenant Labs announced Templar’s latest evolution in decentralized training, moving toward a “unified training” paradigm that combines data and model parallelism. By integrating their state-of-the-art SparseLoCo algorithm with new model-sharding techniques, Templar is now capable of harnessing the internet’s “long tail” of compute, allowing consumer-grade GPUs and even…
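
The excerpt doesn’t spell out the mechanics, but the communication side of a SparseLoCo-style round can be sketched roughly as follows: each worker trains locally for several steps, then ships only the top-k largest-magnitude entries of its pseudo-gradient (the gap between the last synced weights and its local weights), carrying the dropped remainder in an error-feedback buffer so nothing is lost over time. This is a toy single-tensor sketch under those assumptions, with made-up names, not Templar’s actual implementation.

```python
import numpy as np

def sparse_pseudo_gradient(w_local, w_global, error_buf, k_frac=0.01):
    """Top-k sparsified DiLoCo-style pseudo-gradient with error feedback.

    w_local:   weights after several local training steps
    w_global:  weights at the last synchronization
    error_buf: entries dropped in earlier rounds, carried forward so the
               compression stays unbiased over time
    """
    delta = (w_global - w_local) + error_buf       # pseudo-gradient + residual
    k = max(1, int(k_frac * delta.size))
    idx = np.argpartition(np.abs(delta), -k)[-k:]  # k largest-magnitude entries
    sent = np.zeros_like(delta)
    sent[idx] = delta[idx]
    error_buf[:] = delta - sent                    # remember what was dropped
    return idx, delta[idx]                         # ~k_frac of the full bytes

# Toy round: the worker ships ~1% of entries instead of the whole tensor.
rng = np.random.default_rng(1)
w_global = rng.standard_normal(100_000).astype(np.float32)
w_local = w_global - 0.001 * rng.standard_normal(100_000).astype(np.float32)
err = np.zeros_like(w_global)
idx, vals = sparse_pseudo_gradient(w_local, w_global, err)
print(f"sent {idx.size} of {w_global.size} entries")
```

An aggregator would average these sparse contributions and apply an outer optimizer step; communicating around 1% of entries per sync is the kind of reduction that lets consumer-grade links participate alongside datacenter GPUs.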

Decentralized AI Milestone: Covenant + Gradients Create LLM Training Pipeline on Bittensor

SUMMARY: This episode of Covenant’s TGIF community chat walked through a major milestone in building an end-to-end, open-weights AI model fully trained on Bittensor. The team explained they reached checkpoint two, released a working chat app, and proved that large-scale pre-training, post-training, and deployment can happen across collaborating subnets. In this upgrade, Templar…
