Tencent Cloud

models.json Configuration Guide
Last updated: 2026-03-02 15:47:14

Overview

models.json is a configuration file used to customize the model list and control the model dropdown display. This configuration supports two levels:
- User-level: ~/.codebuddy/models.json - global configuration applicable to all projects
- Project-level: <workspace>/.codebuddy/models.json - project-specific configuration with higher priority than user-level

Configuration File Locations

User-level Configuration

~/.codebuddy/models.json

Project-level Configuration

<project-root>/.codebuddy/models.json

Configuration Priority

Configuration merge priority from highest to lowest:
1. Project-level models.json
2. User-level models.json
3. Built-in default configuration
Project-level configuration overrides user-level configuration for the same model definition (matched on the id field). For the availableModels field, the project-level value completely replaces the user-level value; the two lists are not merged.
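As a sketch with hypothetical model IDs: if the user-level file sets availableModels to ["model-a", "model-b"] and the project-level file contains the following, the dropdown shows only model-c; the user-level list is ignored entirely rather than merged.

```json
{
  "availableModels": ["model-c"]
}
```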

Configuration Structure

{
  "models": [
    {
      "id": "model-id",
      "name": "Model Display Name",
      "vendor": "vendor-name",
      "apiKey": "sk-actual-api-key-value",
      "maxInputTokens": 200000,
      "maxOutputTokens": 8192,
      "url": "https://api.example.com/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": true
    }
  ],
  "availableModels": ["model-id-1", "model-id-2"]
}

Configuration Field Description

models

Type: Array<LanguageModel>
Defines the custom model list. You can add new models or override built-in model configurations.

LanguageModel Fields

Field              Type     Required  Description
id                 string   Yes       Model unique identifier
name               string   -         Model display name
vendor             string   -         Model vendor (e.g., OpenAI, Google)
apiKey             string   -         API key (actual key value, not an environment variable name)
maxInputTokens     number   -         Maximum input tokens
maxOutputTokens    number   -         Maximum output tokens
url                string   -         API endpoint URL (must be the complete interface path, typically ending with /chat/completions)
supportsToolCall   boolean  -         Whether tool calls are supported
supportsImages     boolean  -         Whether image input is supported
supportsReasoning  boolean  -         Whether reasoning mode is supported
Important Notes:
- Currently, only the OpenAI-compatible API format is supported
- The url field must be the complete interface path, typically ending with /chat/completions (for example, https://api.openai.com/v1/chat/completions or http://localhost:11434/v1/chat/completions)

availableModels

Type: Array<string>
Controls which models are displayed in the model dropdown list. Only model IDs listed in this array are shown in the UI.
- If not configured, or set to an empty array, all models are displayed
- When configured, only the listed model IDs are displayed
- The list can include both built-in and custom model IDs

Use Cases

1. Add Custom Model

Add new model configuration at user or project level:
{
  "models": [
    {
      "id": "my-custom-model",
      "name": "My Custom Model",
      "vendor": "OpenAI",
      "apiKey": "sk-custom-key-here",
      "maxInputTokens": 128000,
      "maxOutputTokens": 4096,
      "url": "https://api.myservice.com/v1/chat/completions",
      "supportsToolCall": true
    }
  ]
}

2. Override Built-in Model Configuration

Modify default parameters of built-in models:
{
  "models": [
    {
      "id": "gpt-4-turbo",
      "name": "GPT-4 Turbo (Custom Endpoint)",
      "vendor": "OpenAI",
      "url": "https://my-proxy.example.com/v1/chat/completions",
      "apiKey": "sk-your-key-here"
    }
  ]
}

3. Limit Available Model List

Only display specific models in the dropdown list:
{
  "availableModels": [
    "gpt-4-turbo",
    "gpt-4o",
    "my-custom-model"
  ]
}

4. Project-Specific Configuration

Use different models or API endpoints for specific projects:
Project A (.codebuddy/models.json):
{
  "models": [
    {
      "id": "project-a-model",
      "name": "Project A Model",
      "vendor": "OpenAI",
      "url": "https://project-a-api.example.com/v1/chat/completions",
      "apiKey": "sk-project-a-key",
      "maxInputTokens": 100000,
      "maxOutputTokens": 4096
    }
  ],
  "availableModels": ["project-a-model", "gpt-4-turbo"]
}

Hot Reload

The configuration files support hot reload:
- File changes are automatically detected
- A 1-second debounce delay avoids frequent reloads
- Configuration updates are automatically synced to the application
Monitored files:
- ~/.codebuddy/models.json (user-level)
- <workspace>/.codebuddy/models.json (project-level)

Tagging System

Models added through models.json are automatically tagged with the custom tag for easy identification and filtering in the UI.

Merge Strategy

Configuration uses a SmartMerge strategy:
- Model configurations with the same ID are overridden
- Models with different IDs are appended
- Project-level configuration takes priority over user-level configuration
- availableModels filtering is applied after all merging is complete
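As a worked sketch using hypothetical model IDs: suppose the user-level file defines models with IDs shared-model and user-only-model, and the project-level file contains the entries below. After merging, shared-model takes the project-level definition (same ID, overridden), project-only-model is appended (new ID), and user-only-model is kept from the user level.

```json
{
  "models": [
    {
      "id": "shared-model",
      "name": "Shared Model (Project Override)",
      "vendor": "OpenAI",
      "url": "https://project-api.example.com/v1/chat/completions",
      "apiKey": "sk-project-key"
    },
    {
      "id": "project-only-model",
      "name": "Project Only Model",
      "vendor": "OpenAI",
      "url": "https://project-api.example.com/v1/chat/completions",
      "apiKey": "sk-project-key"
    }
  ]
}
```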

Example Configurations

API Endpoint URL Format

The complete path must be used:
All custom model url fields should typically end with /chat/completions.
Correct examples:
- https://api.openai.com/v1/chat/completions
- https://api.myservice.com/v1/chat/completions
- http://localhost:11434/v1/chat/completions
- https://my-proxy.example.com/v1/chat/completions
Incorrect examples:
- https://api.openai.com/v1
- https://api.myservice.com
- http://localhost:11434

OpenRouter Platform Configuration Example

Using OpenRouter to access various models:
{
  "models": [
    {
      "id": "openai/gpt-4o",
      "name": "open-router-model",
      "url": "https://openrouter.ai/api/v1/chat/completions",
      "apiKey": "sk-or-v1-your-openrouter-api-key",
      "maxInputTokens": 128000,
      "maxOutputTokens": 4096,
      "supportsToolCall": true,
      "supportsImages": false
    }
  ]
}

DeepSeek Platform Configuration Example

Using DeepSeek models:
{
  "models": [
    {
      "id": "deepseek-chat",
      "name": "DeepSeek Chat",
      "vendor": "DeepSeek",
      "url": "https://api.deepseek.com/v1/chat/completions",
      "apiKey": "sk-your-deepseek-api-key",
      "maxInputTokens": 32000,
      "maxOutputTokens": 4096,
      "supportsToolCall": true,
      "supportsImages": false
    }
  ]
}

Complete Example

{
  "models": [
    {
      "id": "gpt-4o",
      "name": "GPT-4o",
      "vendor": "OpenAI",
      "apiKey": "sk-your-openai-key",
      "maxInputTokens": 128000,
      "maxOutputTokens": 16384,
      "supportsToolCall": true,
      "supportsImages": true
    },
    {
      "id": "my-local-llm",
      "name": "My Local LLM",
      "vendor": "Ollama",
      "url": "http://localhost:11434/v1/chat/completions",
      "apiKey": "ollama",
      "maxInputTokens": 8192,
      "maxOutputTokens": 2048,
      "supportsToolCall": true
    }
  ],
  "availableModels": [
    "gpt-4o",
    "my-local-llm"
  ]
}

Troubleshooting

Configuration Not Taking Effect

1. Check that the JSON format is valid
2. Confirm the file path is correct
3. Check the log output to confirm the configuration is loaded
4. Confirm API keys are set correctly (the apiKey field takes the actual key value, not an environment variable name)

Model Not Showing in List

1. Check whether the model ID is listed in availableModels
2. Confirm the models configuration is correct
3. Verify that all required fields (id, name, vendor) are provided

Hot Reload Not Triggered

- Configuration file changes have a 1-second debounce delay
- Ensure the file is actually saved to disk
- Check whether file watching started normally (see the debug logs)
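When debugging, it can help to start from a minimal configuration and confirm it loads before adding more fields. A minimal sketch (the endpoint, model ID, and key below are placeholders, not real values):

```json
{
  "models": [
    {
      "id": "sanity-check-model",
      "name": "Sanity Check Model",
      "vendor": "OpenAI",
      "url": "https://api.example.com/v1/chat/completions",
      "apiKey": "sk-placeholder-key",
      "supportsToolCall": true
    }
  ],
  "availableModels": ["sanity-check-model"]
}
```

If this model appears in the dropdown after the debounce delay, loading and filtering are working; you can then reintroduce your real entries one at a time to isolate the problematic one.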