
LibreChat

Self-hosted chat UI with Melious as a custom endpoint — YAML config, Docker, auto-discovered models

by Danny Avila and contributors · librechat.ai

LibreChat is an open-source, self-hostable ChatGPT-style chat UI. Docker-deployed, YAML-configured, multi-tenant, with endpoint switching, conversation history, custom prompts, and plugin support. It speaks OpenAI, Anthropic, and Google natively, plus arbitrary OpenAI-compatible providers through its custom endpoint type — the door Melious walks through. If you want a chat UI for a team without running your own LLM gateway, LibreChat plus Melious is a short recipe: one endpoint block in librechat.yaml, one .env key, one docker compose up.

Setup

Install LibreChat

Follow the LibreChat Docker install if you haven't. The rest of this guide assumes docker compose is running against the default setup.

Add the Melious API key

In your project's .env:

.env
MELIOUS_API_KEY=sk-mel-<YOUR_API_KEY>

Configure the endpoint

Edit librechat.yaml (create it next to docker-compose.yml if it doesn't exist):

librechat.yaml
version: 1.3.5
cache: true

endpoints:
  custom:
    - name: "Melious"
      apiKey: "${MELIOUS_API_KEY}"
      baseURL: "https://api.melious.ai/v1"
      models:                                                         
        default:                                                      
          - "glm-5.1"
        fetch: true
      titleConvo: true
      titleModel: "current_model"
      modelDisplayLabel: "Melious"
      dropParams: ["user"]                                            
      # iconURL: any reachable HTTPS SVG; omit for the default LibreChat icon 

fetch: true tells LibreChat to call GET /v1/models on startup and auto-populate the picker; the default list is the fallback.
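The fetch-then-fallback behavior can be sketched as a tiny function (a simplified model of what LibreChat does, not its actual code; the second model ID is illustrative):

```python
def models_for_picker(fetched, default):
    """Return the fetched model list when GET /v1/models succeeded and
    was non-empty; otherwise fall back to the `default` list from
    librechat.yaml. Simplified sketch of LibreChat's fetch/default logic."""
    return fetched if fetched else default

# Startup fetch succeeded: the picker shows the full catalog.
print(models_for_picker(["glm-5.1", "glm-5.1-mini"], ["glm-5.1"]))

# Fetch failed or returned nothing: the curated default list is used.
print(models_for_picker([], ["glm-5.1"]))  # → ['glm-5.1']
```

This is why it's safe to leave fetch: true on even with a hand-curated default list — the default only surfaces when the fetch can't.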

Mount the config

Add to docker-compose.override.yml:

docker-compose.override.yml
services:
  api:
    volumes:
      - ./librechat.yaml:/app/librechat.yaml

Restart

docker compose down
docker compose up -d

Open LibreChat. The endpoint selector shows "Melious" and the model picker lists whatever we returned from GET /v1/models.

Picking models for the default surface

LibreChat exposes a lot of models to end users. For a shared instance, curate the default list rather than dumping the whole catalog. glm-5.1 is a safe single default for general chat. Add specialized picks (code, reasoning, long-context, small/fast) from melious.ai/hub/models as your team's needs surface.

Let fetch: true keep the full list available for power users, but name the default list explicitly.

Multi-endpoint patterns

If you want Melious alongside a self-hosted Ollama or another cloud provider:

endpoints:
  custom:
    - name: "Melious"
      apiKey: "${MELIOUS_API_KEY}"
      baseURL: "https://api.melious.ai/v1"
      models: { default: ["glm-5.1"], fetch: true }

    - name: "Ollama (local)"
      apiKey: "ollama"
      baseURL: "http://ollama:11434/v1"
      models: { default: ["llama3.2"], fetch: true }

Users switch between them in the endpoint selector. We stay the default for anything cloud-hosted; Ollama handles local work.
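A quick way to sanity-check a multi-endpoint config before restarting is to parse it yourself — a sketch assuming PyYAML is installed, using the snippet above verbatim:

```python
import yaml  # PyYAML; assumed available (pip install pyyaml)

CONFIG = """
endpoints:
  custom:
    - name: "Melious"
      apiKey: "${MELIOUS_API_KEY}"
      baseURL: "https://api.melious.ai/v1"
      models: { default: ["glm-5.1"], fetch: true }
    - name: "Ollama (local)"
      apiKey: "ollama"
      baseURL: "http://ollama:11434/v1"
      models: { default: ["llama3.2"], fetch: true }
"""

cfg = yaml.safe_load(CONFIG)
names = [e["name"] for e in cfg["endpoints"]["custom"]]

# Every custom endpoint needs a baseURL and at least one default model.
for e in cfg["endpoints"]["custom"]:
    assert e["baseURL"] and e["models"]["default"]

print(names)  # → ['Melious', 'Ollama (local)']
```

A YAML typo here is the most common cause of a silently missing endpoint, so failing fast on parse beats debugging the dropdown.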

What's different

  • No cross-user prompt caching. LibreChat's cache setting covers metadata and YAML reloads; provider-side per-prompt caching (as OpenAI and Anthropic expose it) isn't routed through us.
  • Vision inputs work for vision-capable models. LibreChat passes image blocks unchanged.
  • Agents are first-class. LibreChat ships an Agent Builder with execute_code, file_search, actions, web_search, artifacts, ocr, and context capabilities. Tool calls flow through our OpenAI-compatible tool_calls schema unchanged. Pick models with strong tool support — check _meta.capabilities.tool_use on GET /v1/models?include_meta=true.
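Filtering the catalog down to tool-capable models for agent use can look like this — a sketch where the _meta.capabilities.tool_use path comes from the docs above, but the {"data": [...]} envelope follows the standard OpenAI list format and the model IDs are illustrative:

```python
# Illustrative slice of a GET /v1/models?include_meta=true response.
# Only glm-5.1 is real; "tiny-chat" is a made-up ID for contrast.
sample = {
    "data": [
        {"id": "glm-5.1", "_meta": {"capabilities": {"tool_use": True}}},
        {"id": "tiny-chat", "_meta": {"capabilities": {"tool_use": False}}},
    ]
}

def tool_capable(listing):
    """Return IDs of models advertising tool support, skipping any
    entries that lack _meta entirely."""
    return [
        m["id"]
        for m in listing["data"]
        if m.get("_meta", {}).get("capabilities", {}).get("tool_use")
    ]

print(tool_capable(sample))  # → ['glm-5.1']
```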

When it breaks

  • Endpoint missing from dropdown — librechat.yaml wasn't mounted. Check docker compose config to see the effective volumes.
  • fetch: true returns empty — your API key doesn't have the inference.models scope. Add it in the dashboard or drop fetch: true and manage default manually.
  • Models show but selection fails with 404 — model IDs in default are stale. Let fetch: true do the work, or rebuild the list from GET /v1/models.
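The three failure modes above fold into a quick triage helper — a sketch, not a shipped tool; the inputs mirror the bullets (mount state, the GET /v1/models result, and your configured default list):

```python
def triage(mounted, status, model_ids, default_list):
    """Map the symptoms above to their likely causes.
    mounted: librechat.yaml appears in the effective volumes;
    status: HTTP status from GET /v1/models;
    model_ids: IDs that call returned;
    default_list: the default models from librechat.yaml."""
    if not mounted:
        return "mount librechat.yaml (check `docker compose config`)"
    if status == 200 and not model_ids:
        return "key lacks the inference.models scope"
    stale = [m for m in default_list if m not in model_ids]
    if stale:
        return f"stale default model IDs: {stale}"
    return "ok"

print(triage(True, 200, ["glm-5.1"], ["glm-5.1"]))  # → ok
```

Checking in that order matters: a missing mount masks everything else, and an empty fetch masks staleness.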

Errors and retry patterns: see Errors.
