Chutes AI

Chutes AI is a decentralized, distributed serverless AI compute platform designed to enable users to run, scale, and deploy large language models (LLMs) and other artificial intelligence tools. It operates on a pay-per-use model, aiming to provide on-demand access to GPU resources without the complexities of traditional cloud infrastructure management. [1]

Overview

Chutes AI is developed by Rayon Labs, a company focused on advancing decentralized solutions within the Bittensor ecosystem. The platform leverages Bittensor's Subnet 64, a decentralized network where contributors provide compute power and are compensated with $TAO tokens. This infrastructure allows Chutes AI to offer a serverless compute solution, enabling developers, data scientists, and startup founders to deploy and scale AI projects rapidly and cost-effectively.

Since its launch, Chutes AI has demonstrated significant growth in demand. By May 28, 2025, the platform had reached a milestone of processing 100 billion tokens per day, equivalent to roughly 3 trillion tokens per month. This represents a 250x increase in demand since January 2025, reflecting the growing adoption of decentralized AI compute solutions. The platform's usage-based, per-token payment system aligns with a broader industry trend towards pay-as-you-go models, potentially offering cost savings of up to 40% compared to traditional providers. [1] [2] [3] [5] [6]

Technology

Chutes AI operates by allowing users to "launch" containers, referred to as "chutes," on a network of decentralized GPU providers. These nodes execute user code and return results efficiently and securely, eliminating the need for extensive cloud engineering expertise. The platform supports the use of custom Docker containers or pre-configured templates for various AI tasks.

The underlying infrastructure is built upon the Bittensor network, which incentivizes miners to contribute their computational resources. This decentralized approach aims to provide a robust and scalable environment for AI model deployment and inference. Users pay only for the compute resources consumed, avoiding idle GPU costs or subscription fees. [1]
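
As a rough illustration of this flow, the sketch below launches a chute over a plain HTTP client. The endpoint, route, and payload field names (api.chutes.ai, /chutes, gpu_count, and so on) are assumptions made for explanation only; they are not taken from the platform's documented SDK or API.

```python
# Minimal sketch of launching a "chute" (a container) on decentralized GPU nodes.
# NOTE: the endpoint, route, and payload fields below are illustrative assumptions,
# not the documented Chutes AI API.
import requests

CHUTES_API = "https://api.chutes.ai"   # assumed base URL for illustration
API_KEY = "your-chutes-api-key"        # issued to a Chutes AI account

def launch_chute(image: str, name: str, gpu_count: int = 1) -> dict:
    """Ask the network to schedule a container ("chute") on GPU provider nodes.

    `image` can be a custom Docker image or one of the pre-configured templates
    (for example, a DeepSeek, Meta-Llama, or Mistral serving image).
    """
    response = requests.post(
        f"{CHUTES_API}/chutes",  # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"name": name, "image": image, "gpu_count": gpu_count},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. an identifier and an invocation URL

if __name__ == "__main__":
    chute = launch_chute("myorg/llm-server:latest", "demo-llm")
    print(chute)
```

Because billing is usage-based, a chute accrues cost only while it is actually serving requests; there is no charge for idle GPUs.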

Key Features

Chutes AI offers several features designed to streamline AI development and deployment:

  • Serverless Deployment: Enables rapid deployment of AI models within seconds, removing the need for complex DevOps configurations.
  • Decentralized Compute: Utilizes the Bittensor network's distributed GPU resources, promoting a resilient and scalable infrastructure.
  • Open Infrastructure: Supports user-provided Docker containers and offers a selection of pre-configured templates for common AI models like DeepSeek, Meta-Llama, and Mistral.
  • Flexible SDK & CLI: Provides developer-friendly tools for automating deployment and management tasks.
  • Pay-per-use Pricing: Charges users based on actual resource consumption, eliminating fixed costs associated with traditional GPU hosting (see the illustrative cost sketch after this list). [1]
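
The back-of-the-envelope sketch below illustrates the pay-per-use idea. The per-GPU-hour and dedicated-rental rates are hypothetical placeholders, not published Chutes AI pricing.

```python
# Back-of-the-envelope comparison of usage-based billing vs. a fixed GPU rental.
# Both rates are hypothetical placeholders, not published Chutes AI pricing.

PER_GPU_HOUR = 1.20         # assumed pay-per-use rate, USD per GPU-hour
DEDICATED_MONTHLY = 1500.0  # assumed cost of a dedicated GPU rented for a month

def usage_cost(gpu_hours: float) -> float:
    """Cost when billed only for the GPU-hours actually consumed."""
    return gpu_hours * PER_GPU_HOUR

for hours in (50, 200, 600):
    print(f"{hours:>4} GPU-h/month: pay-per-use ${usage_cost(hours):8,.2f} "
          f"vs. dedicated ${DEDICATED_MONTHLY:,.2f}")
```

Under these assumed rates, usage-based billing is cheaper at low or bursty utilization; a fixed rental only wins once the GPU is kept busy for most of the month.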

Use Cases and Integrations

Chutes AI is designed for various AI applications, including LLM fine-tuning, embedding generation, and image synthesis. It integrates with popular frontend chat tools such as Janitor AI and KoboldAI, allowing users to connect Chutes AI models by configuring the model name, proxy URL, and API key.

The platform also hosts specific models, such as Qwen3 Coder 480B A35B, which can be accessed by adding credits to a Chutes AI account. [1] [2]
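
As an illustration of that configuration, the sketch below follows the OpenAI-compatible client pattern such chat frontends rely on: a model name, a proxy/base URL, and an API key. The base URL and model identifier shown here are assumptions for the example and should be replaced with the values provided in your Chutes AI account.

```python
# Connecting to a Chutes-hosted model the way a chat frontend would:
# supply a model name, a proxy/base URL, and an API key.
# The base URL and model identifier below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.chutes.ai/v1",   # assumed OpenAI-compatible proxy URL
    api_key="your-chutes-api-key",         # API key from your Chutes AI account
)

completion = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",  # example hosted model
    messages=[{"role": "user", "content": "Summarize what a 'chute' is in one sentence."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```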

Startup Accelerator Platform

Chutes AI has launched a Startup Accelerator Platform to support emerging AI projects. The program offers up to $20,000 worth of Chutes and comprehensive support for eligible startups. The accelerator targets companies with fewer than 50 employees, pre-Series A funding, less than $2 million raised (or no funding at all), and a need for high-throughput compute. The platform actively onboards new startups, with applications accepted on a rolling basis. [2]

Future Developments

While Chutes AI offers advantages in cost efficiency and decentralized access, it faces challenges such as competition from low-cost Web2 solutions and its reliance on Bittensor's economic model. API key management via a Bittensor hotkey can also present a learning curve for some users.

In response to evolving demands and regulatory landscapes, Chutes AI plans to introduce new payment options, integrate secure Trusted Execution Environments (TEEs) for enhanced data privacy and security, and streamline workflows. These developments aim to make intelligent compute more secure and accessible, aligning with regulations such as the EU AI Act, which emphasizes transparency, fairness, and accountability. [1] [3]

Parent Company: Rayon Labs

Chutes AI is a product of Rayon Labs, a company dedicated to developing decentralized solutions within the Bittensor ecosystem. Rayon Labs was founded by Namoray and BonOliver and manages three distinct subnets on Bittensor:

  • Chutes (Subnet 64): Focuses on serverless AI compute. As of April 2025, it led in capitalization among Rayon Labs' subnets with $60 million.
  • Gradients (Subnet 56): Aims to make model training accessible to non-experts, with a capitalization of $30 million as of April 2025.
  • Nineteen (Subnet 19): Concentrates on high-frequency inference, with a capitalization of $15 million as of April 2025.

Rayon Labs fosters a remote work environment and utilizes technologies such as Python, Kubernetes, and Docker to build its products. [3] [4] [5] [6]

References
