Content overview for developers

Find API documentation, integration steps, and workflow examples. Access everything needed to connect, deploy, and optimize with the largest free model catalog and energy-smart routing.

Getting Started

Welcome to CLōD's API documentation! Here you'll find everything you need to integrate our unified API into your applications. CLōD simplifies interaction with various Large Language Models (LLMs) by providing a single, consistent API endpoint. This allows you to switch between models and providers with minimal code changes, optimize for cost, latency, or token rate, and leverage advanced features like unified function calling.
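Because model selection happens entirely through the model string, switching providers or strategies is a one-field change in an otherwise identical request body. A minimal sketch of that idea (the helper build_chat_request is ours for illustration, not part of any CLōD SDK):

```python
def build_chat_request(model, messages, **options):
    """Assemble a /v1/chat/completions request body.

    Swapping models or routing strategies only changes the `model`
    string; the rest of the payload stays identical.
    """
    return {"model": model, "messages": messages, **options}

messages = [{"role": "user", "content": "What is the capital of France?"}]

# Same payload shape, different model / routing strategy:
body_a = build_chat_request("GPT 4o", messages, temperature=0.7)
body_b = build_chat_request("GPT 4o@price", messages, temperature=0.7)
```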

Test endpoints & explore further (Swagger Docs)

Prerequisites

  • A CLōD account.
  • A CLōD API key.

API Key & Authentication

All requests must be authenticated with an API key. Your API key carries many privileges, so be sure to keep it secret! Do not share your secret API keys in publicly accessible areas such as GitHub repositories, client-side code, and so forth.

API keys are generated in your CLōD dashboard under the "API Keys" section.
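One common way to keep the key out of source control is to read it from an environment variable at runtime. A minimal sketch (the variable name CLOD_API_KEY is our convention, not mandated by CLōD):

```python
import os

def auth_headers():
    """Build request headers from the CLOD_API_KEY environment variable."""
    api_key = os.environ.get("CLOD_API_KEY")
    if not api_key:
        raise RuntimeError("Set the CLOD_API_KEY environment variable first.")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```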

Chat Completions API

/v1/chat/completions

The /v1/chat/completions API is designed to generate text-based responses from various language models. It's built for flexibility, allowing you to choose your model, adjust settings, and optimize for cost, speed, or token rate.

Request Structure

Property      Value
HTTP Method   POST
Base URL      https://api.clod.io
Endpoint      /v1/chat/completions

Headers

Header          Value
Authorization   Bearer <api_key> (your unique CLōD API key)
Content-Type    application/json

Request Body

Parameter               Type      Description
model                   string    Identifier of the model or strategy to use (e.g., "GPT 4o", "GPT 4o@price", "@latency").
messages                array     An array of message objects, each with a role ("user", "assistant", "system") and content.
temperature             number    Optional. Sampling temperature (0-2); higher values make output more random. Default varies by model.
max_completion_tokens   integer   Optional. Maximum number of tokens to generate. Default varies by model.
stream                  boolean   Optional. If true, enables streaming of results. Default: false.

Other OpenAI-compatible parameters are also supported.

Example Request

JSON Example
{
  "model": "GPT 4o",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ],
  "temperature": 0.7,
  "max_completion_tokens": 50
}

CURL Example
curl -X POST "https://api.clod.io/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GPT 4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ],
    "temperature": 0.7,
    "max_completion_tokens": 50
  }'

Python Example
import requests
import json

url = "https://api.clod.io/v1/chat/completions"

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

data = {
    "model": "GPT 4o",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
    "temperature": 0.7,
    "max_completion_tokens": 50
}

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # surface HTTP errors early
result = response.json()

print(json.dumps(result, indent=2))

Example Response
{
  "id": "chatcmpl-xxxxxxxxxxxxxxxxxxxx",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "GPT 4o", // Actual model used
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 8,
    "total_tokens": 28
  }
}
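When stream is set to true, the response arrives incrementally rather than as a single JSON body. A sketch of consuming such a stream, assuming CLōD follows the OpenAI-style server-sent-events convention ("data: {...}" lines terminated by "data: [DONE]"); the helper names are ours:

```python
import json

def parse_sse_line(line):
    """Parse one server-sent-events line into a decoded chunk, or None.

    Assumes OpenAI-style framing: payload lines look like
    'data: {...}' and the stream ends with 'data: [DONE]'.
    """
    if not line.startswith("data: "):
        return None  # blank keep-alives, comments, etc.
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None
    return json.loads(payload)

def stream_completion(url, headers, body):
    """Yield content deltas from a streaming chat completion."""
    import requests  # third-party; only needed when actually streaming
    body = {**body, "stream": True}
    with requests.post(url, headers=headers, json=body, stream=True) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            chunk = parse_sse_line(raw or "")
            if chunk is None:
                continue
            delta = chunk["choices"][0].get("delta", {})
            if "content" in delta:
                yield delta["content"]
```

Each yielded string is a fragment of the assistant's reply, so callers can print tokens as they arrive instead of waiting for the full response.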

Model Optimization Strategies

The strategy feature allows you to optimize model selection for specific criteria when multiple providers offer the same model. By adding strategy tags to your model parameter, you can prioritize models based on price, latency, or token rate.

Available Strategies

Strategy      Description
@price        Route to the provider offering this model at the lowest price.
@latency      Route to the provider with the lowest response latency.
@token_rate   Route to the provider with the highest token throughput.

Usage

Strategy tags are appended to the model name using the "@" separator. You can combine multiple strategies in any order:

Single Strategy Examples
{
  "model": "GPT 4o@price",
  "messages": [...]
}

{
  "model": "GPT 4o@latency",
  "messages": [...]
}

{
  "model": "GPT 4o@token_rate",
  "messages": [...]
}

Multiple Strategy Examples

When using multiple strategies, they can be combined in any order:

{
  "model": "GPT 4o@price@latency",
  "messages": [...]
}
{
  "model": "GPT 4o@token_rate@price",
  "messages": [...]
}
{
  "model": "GPT 4o@latency@token_rate@price",
  "messages": [...]
}
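Since strategy tags are plain "@"-separated suffixes on the model string, they are easy to compose programmatically. A small sketch (the helper with_strategies is ours, purely illustrative):

```python
def with_strategies(model, *strategies):
    """Append @-separated strategy tags (e.g. "price", "latency") to a model name."""
    return model + "".join(f"@{s}" for s in strategies)

model = with_strategies("GPT 4o", "price", "latency")
# model == "GPT 4o@price@latency"
```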

Integrations

This is an n8n community node for the CLōD API, an OpenAI-compatible LLM service.

Installation

In your n8n instance, go to Settings > Community Nodes and install:

@clod_io/n8n-nodes-clod

Or install via npm:

npm install @clod_io/n8n-nodes-clod

Credentials

  1. Go to https://app.clod.io and sign in
  2. Navigate to your account settings to get your API key
  3. In n8n, create new credentials of type CLōD API and enter your API key

Usage

The CLōD node supports chat completions with the following parameters:

  • Model (required): The model name to use (e.g., "Llama 3.1 8B")
  • Messages (required): JSON array of message objects with role and content
  • Options:
    • Temperature (0-2)
    • Max Tokens
    • Stream

Example Messages Format

[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Hello!"}
]

License

MIT