Find API documentation, integration steps, and workflow examples. Access everything needed to connect, deploy, and optimize with the largest free model catalog and energy-smart routing.
Welcome to CLōD's API documentation! Here you'll find everything you need to integrate our unified API into your applications. CLōD simplifies interaction with various Large Language Models (LLMs) by providing a single, consistent API endpoint. This allows you to switch between models and providers with minimal code changes, optimize for cost, latency, or token rate, and leverage advanced features like unified function calling.
Test endpoints & explore further (Swagger Docs)
Prerequisites
All requests must be authenticated with an API key. Your API key carries many privileges, so keep it secret. Do not expose your secret API keys in publicly accessible areas such as GitHub repositories or client-side code.
API keys are generated in your CLōD dashboard under the "API Keys" section.
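A minimal sketch of handling the key safely, assuming it is stored in an environment variable (the variable name `CLOD_API_KEY` is illustrative, not mandated by CLōD):

```python
import os

# Load the key from the environment so it never lands in source control.
# The variable name CLOD_API_KEY is an illustrative convention.
api_key = os.environ.get("CLOD_API_KEY", "")

# Every request to the API carries the key as a standard Bearer token.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

Rotating a leaked key from the dashboard invalidates the old value immediately, so prefer environment variables or a secrets manager over hard-coded strings.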
/v1/chat/completions
The /v1/chat/completions API is designed to generate text-based responses from various language models. It's built for flexibility, allowing you to choose your model, adjust settings, and optimize for cost, speed, or token rate.
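A request to this endpoint can be sketched as below using only the standard library. The base URL and model name are placeholders, not CLōD's actual values; the payload follows the standard OpenAI-compatible shape:

```python
import json
import os
import urllib.request

# Hypothetical base URL -- substitute the endpoint from the CLōD docs.
URL = "https://api.example-clod.host/v1/chat/completions"

payload = {
    "model": "gpt-4o",  # illustrative model name; pick one from the catalog
    "messages": [
        {"role": "user", "content": "Say hello in one word."},
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('CLOD_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# Uncomment to send (requires a valid API key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries should also work by pointing their base URL at CLōD.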
The strategy feature allows you to optimize model selection for specific criteria when multiple providers offer the same model. By adding strategy tags to your model parameter, you can prioritize models based on price, latency, or token rate.
Strategy tags are appended to the model name using the "@" separator, and multiple strategies can be combined in any order.
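The "@" convention can be illustrated with a small helper that splits a tagged model string back into its parts. The tag spellings here ("price", "latency") and the model name are illustrative; consult the CLōD docs for the supported tag names:

```python
def split_model(model: str) -> tuple[str, list[str]]:
    """Split 'model@tag1@tag2' into the base model name and its strategy tags."""
    base, *tags = model.split("@")
    return base, tags

# Order of tags does not matter: these two strings request the same routing.
a = split_model("gpt-4o@price@latency")
b = split_model("gpt-4o@latency@price")
assert a[0] == b[0] and set(a[1]) == set(b[1])
```

A model name with no "@" suffix simply uses the default routing for that model.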
This is an n8n community node for the CLōD API, an OpenAI-compatible LLM service.
In your n8n instance, go to Settings > Community Nodes and install:
Or install via npm:
The CLōD node supports chat completions with the following parameters:
Each message in the messages array is an object with two fields: role and content.
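A minimal sketch of that message structure, assuming the standard OpenAI-compatible roles (the example contents are illustrative):

```python
# Roles in the OpenAI-compatible message format.
VALID_ROLES = {"system", "user", "assistant"}

# Each entry carries exactly a "role" and a "content" field.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What does CLōD's strategy routing do?"},
]

for m in messages:
    assert m["role"] in VALID_ROLES and m["content"]
```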
License: MIT