Vidaio (Subnet 85) Set to Advance Decentralized Video Processing in 2026

As video continues to dominate internet traffic, the infrastructure behind video enhancement, compression, and delivery remains largely centralized and expensive. Vidaio, built on Subnet 85, aims to change that by bringing AI-driven video processing onto Bittensor’s decentralized network. Vidaio is an open-source video processing subnet focused today on AI-based compression and upscaling, with post-production automation,…


Why Non-Deterministic Enrichment is Becoming a Core Primitive for Decentralized AI

As decentralized AI systems mature, a quiet bottleneck is becoming impossible to ignore: high-quality datasets do not scale the way compute does. Inference can be parallelized and training can be distributed, but data generation, especially in open-ended domains, is still constrained by one fundamental assumption: that every task must produce a single correct output. On Bittensor’s Subnet…


General TAO Ventures Rebrands to General Tensor

General TAO Ventures has officially rebranded to General Tensor (@generaltensor). The team, strategy, and focus remain unchanged, with a continued emphasis on building decentralized intelligence infrastructure within the Bittensor ecosystem. In the announcement, General Tensor outlined its track record and current footprint: the team has been active in Bittensor since 2023, owns Subnet 35 (0xMarkets), and…


SIRE Scales αVault Execution Through Line Diversification

SIRE, powered by Score Vision on Bittensor Subnet 44, is refining its execution framework as it scales αVault toward more consistent, risk-adjusted performance. January marks a deliberate shift in how the system captures market inefficiencies, prioritizing portfolio-level stability over isolated outcomes. Rather than increasing volume for its own sake, SIRE is accelerating the convergence between…


TGIF #20: Democratizing AI Training with Heterogeneous SparseLoCo

SUMMARY: On TGIF #20, Covenant Labs announced Templar’s latest evolution in decentralized training, moving toward a “unified training” paradigm that combines data and model parallelism. By integrating their state-of-the-art SparseLoCo algorithm with new model-sharding techniques, Templar can now harness the internet’s “long tail” of compute, allowing consumer-grade GPUs and even…
