Find API documentation, integration steps, and workflow examples. Access everything needed to connect, deploy, and optimize with the largest free model catalog and energy-smart routing.
Welcome to CLōD. This guide covers everything you need to go from zero to your first API call, including how the model catalog works, how energy-based pricing keeps your costs low, and how to organize your usage with Projects.
Go to https://app.clod.io and sign up for free. No credit card required.
Your free account gives you immediate access to:
When creating an API key, give it a descriptive name (e.g., dev-key, n8n-workflow) and optionally assign it to a Project. The dashboard also displays the API endpoint alongside your key for quick reference.
Store your key as an environment variable; never hardcode it:
Never expose your API key in client-side code, public repositories, or version-controlled config files.
CLōD organizes all models into three categories. You can browse the full catalog at https://app.clod.io/auth/models.
Open-source models available at no cost. Identified by a Free label in the catalog.
Models hosted directly on CLōD's GPU infrastructure across North America. This is where energy-based dynamic pricing applies (see Step 4 below). Many free models are also available under this category.
Proprietary closed-source models (e.g., GPT-4o, Claude) that CLōD proxies through a unified endpoint. These are models CLōD cannot host directly.
The value here is simplicity: instead of managing separate API keys and billing accounts for each provider, you access all of them through a single CLōD API key. A 5% routing fee applies on top of the provider's standard rate.
This is CLōD's unique value proposition for hosted models.
CLōD operates GPU servers at multiple locations across North America. Electricity costs at each location fluctuate throughout the day based on real-time energy market prices. CLōD continuously monitors these costs and dynamically adjusts token pricing to always route your request to the lowest-cost available Data Center.
What this means for you:
Viewing price history: In the Models page of your dashboard, each CLōD-hosted model displays a live pricing chart. You can toggle between 4-hour, 24-hour, and 7-day views to understand pricing trends. If your workload is flexible, scheduling high-volume jobs during low-energy-cost windows can reduce your inference spend materially.
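To make the scheduling point concrete, here is a back-of-envelope comparison. The rates below are made up purely to illustrate the arithmetic; actual per-token rates are shown on the live model card in your dashboard.

```python
# Illustrative only: these per-million-token rates are hypothetical, not
# actual CLōD prices. Check the live model card for real rates.
def inference_cost(tokens: int, rate_per_million: float) -> float:
    """Cost in dollars for a job of `tokens` tokens at a $/1M-token rate."""
    return tokens / 1_000_000 * rate_per_million

peak_rate = 0.60      # hypothetical $/1M tokens in a high-energy-cost window
off_peak_rate = 0.35  # hypothetical $/1M tokens in a low-energy-cost window

job_tokens = 50_000_000  # a 50M-token batch job
savings = inference_cost(job_tokens, peak_rate) - inference_cost(job_tokens, off_peak_rate)
print(f"Scheduling off-peak saves ${savings:.2f} on this job")
# → Scheduling off-peak saves $12.50 on this job
```

The same reasoning scales linearly with volume, which is why the savings matter most for large, time-flexible batch workloads.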
For current token prices, always refer to the live model card in your dashboard; rates update in real time.
CLōD's API is fully OpenAI-compatible. If you already use the OpenAI SDK, you only need to change the base_url and your API key.
Base URL: https://api.clod.io/v1
Endpoint: POST /v1/chat/completions
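A minimal first call, using only the Python standard library. The model name is one of the catalog examples from this guide; substitute any model listed in your dashboard.

```python
import json
import os
import urllib.request

API_URL = "https://api.clod.io/v1/chat/completions"

# Request body follows the OpenAI-compatible chat completions schema.
payload = {
    "model": "Llama 3.1 8B",  # any model name from your CLōD catalog
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

def chat(payload: dict) -> str:
    """POST the payload to CLōD and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['CLOD_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if os.environ.get("CLOD_API_KEY"):
    print(chat(payload))
```

If you already use the OpenAI SDK, the same request works by pointing the client's `base_url` at `https://api.clod.io/v1` and passing your CLōD key.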
Projects let you group models, API keys, and logs into isolated environments. This is useful for separating teams, use cases, or deployment stages.
To create a project:
Project features:
The /v1/chat/completions API is designed to generate text-based responses from various language models. It's built for flexibility, allowing you to choose your model, adjust settings, and optimize for cost, speed, or token rate.
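Because the endpoint follows the OpenAI-compatible schema, the usual tuning parameters apply. A hedged sketch of a request body with common optional settings; support for each parameter may vary by model, so check the model card before relying on one.

```python
# Common OpenAI-compatible chat completion parameters. Which ones a given
# model honors may vary; verify against the model card in your dashboard.
request_body = {
    "model": "DeepSeek V3",  # catalog model name
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize energy-based pricing."},
    ],
    "temperature": 0.2,   # lower = more deterministic output
    "max_tokens": 256,    # cap on generated tokens; bounds cost per request
    "stream": False,      # set True to receive tokens incrementally
}
```

Lower `temperature` and a tight `max_tokens` are the simplest levers for cost and latency; streaming trades a single response for faster time-to-first-token.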
Integrate OpenClaw with CLōD to route your AI agent inference through CLōD’s OpenAI-compatible API endpoint.
This allows your OpenClaw agents to access multiple models through a single endpoint while using CLōD for unified model access and usage monitoring.
OpenClaw is an open-source AI agent gateway that routes agent inference requests to configurable model providers.
By configuring CLōD as a provider, OpenClaw agents can send inference requests directly through the CLōD API.
There are two ways to configure OpenClaw to use CLōD.
Recommended:
Use the OpenClaw onboarding wizard.
Advanced:
Manually configure the provider in the OpenClaw JSON configuration file.
If you are setting up OpenClaw for the first time, the easiest method is using the built-in onboarding tool.
Run:
During onboarding select:
Provider: Custom Provider
Provider Name:
Base URL: https://api.clod.io/v1
Note:
The /v1 suffix is required because CLōD uses an OpenAI-compatible API format.
You will also be prompted to enter your CLōD API key and configure your default model.
Once onboarding finishes, OpenClaw automatically creates the required configuration entries.
Before configuring OpenClaw you will need a CLōD API key.
Dashboard:
Copy the API key. You will use it in your OpenClaw configuration.
For better security, store your API key in an environment variable instead of placing it directly inside configuration files.
Example:
export CLOD_API_KEY="your_clod_api_key"
This keeps your API key out of version control and configuration files.
Advanced users can configure CLōD directly in the OpenClaw configuration file.
Open:
Add the CLōD provider configuration.
Example:
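The example entry itself did not survive above; the following is a hedged sketch of what it might look like, based on the `custom-api-clod-io` provider id referenced later in this guide. The exact field names depend on your OpenClaw version, so treat this as a starting point and check OpenClaw's provider documentation.

```json
{
  "providers": {
    "custom-api-clod-io": {
      "baseUrl": "https://api.clod.io/v1",
      "apiKey": "YOUR_CLOD_API_KEY",
      "models": [
        { "id": "Llama 3.1 8B", "contextWindow": 131072 }
      ]
    }
  }
}
```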
Replace:
YOUR_CLOD_API_KEY
with your actual CLōD API key.
Instead of placing the API key directly in the config file, you can reference the environment variable.
Example:
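For example, assuming your OpenClaw version supports environment-variable interpolation in its config (the `${...}` syntax here is an assumption; consult the OpenClaw documentation for the exact form):

```json
{
  "providers": {
    "custom-api-clod-io": {
      "baseUrl": "https://api.clod.io/v1",
      "apiKey": "${CLOD_API_KEY}"
    }
  }
}
```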
Then define the key:
export CLOD_API_KEY="your_clod_api_key"
CLōD supports multiple models that can be used through OpenClaw.
Model identifiers should match the names listed in the CLōD model catalog.
Model catalog:
https://app.clod.io/user/models
Examples of available models:
• DeepSeek V3
• Llama 3.1 8B
• Minimax M2.5
When configuring a model in OpenClaw, use the exact name provided in the CLōD model list.
Example:
"id": "Llama 3.1 8B"
The contextWindow parameter must not exceed the maximum context supported by the model you are using.
Typical values include:
If you configure a context window larger than what the model supports, inference requests may fail.
Always verify supported context sizes on the CLōD model page.
After configuration is complete, start or restart the OpenClaw gateway.
Example:
OpenClaw will load the custom-api-clod-io provider and route inference requests through CLōD.
Once OpenClaw is running with CLōD, you can monitor inference usage from the CLōD dashboard.
Dashboard:
From the dashboard you can view:
• Request activity
• Token usage
• Model usage
• Total consumption
This helps track how your OpenClaw agents are using inference resources.
This is an n8n community node for the CLōD API, an OpenAI-compatible LLM service.
In your n8n instance, go to Settings > Community Nodes and install:
Or install via npm:
The CLōD node supports chat completions with the following parameters:
Messages: an array of message objects, each with a role and content field.
License: MIT
Use the Kilo Code VS Code extension with CLōD by connecting it to CLōD’s OpenAI-compatible API endpoint.
This allows Kilo Code to route inference requests through CLōD and access any model available in your CLōD account.
Kilo Code is a VS Code extension for agentic coding, allowing AI models to read files, run commands, and assist with development workflows inside your code editor.
By connecting Kilo Code to CLōD, you can use CLōD-hosted models directly inside VS Code.
Follow these steps to connect Kilo Code to CLōD.
Open the Extensions view in VS Code (Ctrl + Shift + X) and install the Kilo Code extension.
Step 2: Configure CLōD as an OpenAI-Compatible Provider
Open the Kilo Code settings and configure the provider.
Set the following values:
Provider:
OpenAI Compatible
Base URL: https://api.clod.io/v1
Note:
The /v1 suffix is required because CLōD uses an OpenAI-compatible API format.
API Key
Enter your CLōD API key.
If you do not have an API key yet, create one in the CLōD dashboard.
Once the API key is entered, Kilo Code will automatically fetch the models available in your CLōD account.
You can then select a model directly from the Kilo Code interface.
Examples of available models include:
• DeepSeek V3
• Llama 3.1 8B
• Minimax M2.5
The available models depend on what is enabled in your CLōD account.
Kilo Code performs agentic actions such as:
• reading files
• running commands
• editing code
These capabilities rely on tool calling (function calling).
For best results, choose a model that supports tool usage.
You can view available models in the CLōD model catalog:
https://app.clod.io/user/models
Once configured, you can start using Kilo Code with CLōD.
Examples of useful prompts:
Scan this repository and explain the architecture in 10 bullets.
Find the entry point for feature X and show the call chain.
Run tests and fix the first failing test.
Refactor module X and update all related imports.
You can monitor inference usage from the CLōD dashboard.
From the dashboard you can view:
• Request activity
• Token usage
• Model usage
• Overall consumption
Once connected to CLōD, you can:
• Switch between available models
• Use larger-context models when needed
• Monitor inference usage through the CLōD dashboard
• Integrate CLōD into additional developer tools
CLōD provides an OpenAI-compatible API, which means most tools and frameworks that support the OpenAI API can connect to CLōD with minimal configuration.
This includes:
• OpenAI SDK
• LangChain
• LiteLLM
• AI coding assistants
• agent frameworks
• custom applications
To connect to CLōD, configure the API endpoint and your API key.
You can verify your CLōD connection by sending a simple request to the API.
If successful, the API will return a response containing the model’s output.
You can use the official OpenAI SDK with CLōD.
Use the Roo Code VS Code extension with CLōD by connecting it to CLōD’s OpenAI-compatible API endpoint.
This allows Roo Code to route inference requests through CLōD and access any model available in your CLōD account.
Roo Code is a VS Code extension for agentic coding, allowing AI models to analyze projects, generate code, refactor files, and assist with development workflows inside your editor.
By connecting Roo Code to CLōD, you can use CLōD-hosted models directly within VS Code.
Follow these steps to connect Roo Code to CLōD.
Open the Extensions view in VS Code (Ctrl + Shift + X) and install the Roo Code extension.
Open the Roo Code settings and configure the provider.
Set the following values:
Provider
OpenAI Compatible
Base URL: https://api.clod.io/v1
Note:
The /v1 suffix is required because CLōD uses an OpenAI-compatible API format.
API Key
Enter your CLōD API key.
If you do not have an API key yet, generate one from the CLōD dashboard.
After entering your API key, Roo Code will automatically fetch the models available in your CLōD account.
You can then select a model directly from the Roo Code interface.
Examples of available models include:
• DeepSeek V3
• Llama 3.1 8B
• Minimax M2.5
The models shown depend on what is enabled in your CLōD account.
Roo Code performs agentic actions, including:
• reading project files
• running commands
• modifying code
• refactoring modules
These capabilities rely on tool calling (function calling).
For best results, choose a model that supports tool usage.
You can view available models in the CLōD model catalog:
https://app.clod.io/user/models
Once configured, you can start using Roo Code with CLōD inside VS Code.
Example prompts:
Map the main modules in this repository and explain their purpose.
Find where request X is handled and describe the execution flow.
Refactor this logic into a helper function and update all call sites.
Run lint or tests and fix issues until the build passes.
You can monitor inference usage from the CLōD dashboard.
From the dashboard you can view:
• request activity
• token usage
• model usage
• overall consumption
Once Roo Code is connected to CLōD, you can:
• switch between models available on CLōD
• use higher-context models when working with large repositories
• monitor inference usage through the CLōD dashboard
• integrate CLōD with additional development tools