LibreChat
Self-hosted chat UI with Melious as a custom endpoint — YAML config, Docker, auto-discovered models
LibreChat is an open-source, self-hostable ChatGPT-style chat UI. Docker-deployed, YAML-configured, multi-tenant, with endpoint switching, conversation history, custom prompts, and plugin support. It speaks OpenAI, Anthropic, Google, and arbitrary OpenAI-compatible providers through its custom endpoint type — the door Melious walks through. If you want a chat UI for a team without running your own LLM gateway, LibreChat plus Melious is a short recipe: one endpoint block in librechat.yaml, one .env key, one docker compose up.
Setup
Install LibreChat
Follow the LibreChat Docker install if you haven't. The rest of this guide assumes docker compose is running against the default setup.
Configure the endpoint
Edit librechat.yaml (create it next to docker-compose.yml if it doesn't exist):
```yaml
version: 1.3.5
cache: true
endpoints:
  custom:
    - name: "Melious"
      apiKey: "${MELIOUS_API_KEY}"
      baseURL: "https://api.melious.ai/v1"
      models:
        default:
          - "glm-5.1"
        fetch: true
      titleConvo: true
      titleModel: "current_model"
      modelDisplayLabel: "Melious"
      dropParams: ["user"]
      # iconURL: any reachable HTTPS SVG; omit for the default LibreChat icon
```
`fetch: true` tells LibreChat to call `GET /v1/models` on startup and auto-populate the picker; the `default` list is the fallback.
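The `${MELIOUS_API_KEY}` reference is resolved from LibreChat's `.env` file, next to `docker-compose.yml`. A minimal entry looks like this; the value shown is a placeholder for a key from your Melious dashboard:

```
MELIOUS_API_KEY=your-melious-api-key
```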
Mount the config
Add to docker-compose.override.yml:
```yaml
services:
  api:
    volumes:
      - ./librechat.yaml:/app/librechat.yaml
```
Restart
```shell
docker compose down
docker compose up -d
```
Open LibreChat. The endpoint selector shows "Melious" and the model picker lists whatever we returned from `GET /v1/models`.
Picking models for the default surface
LibreChat exposes a lot of models to end users. For a shared instance, curate the default list rather than dumping the whole catalog. glm-5.1 is a safe single default for general chat. Add specialized picks (code, reasoning, long-context, small/fast) from melious.ai/hub/models as your team's needs surface.
Let fetch: true keep the full list available for power users, but name the default list explicitly.
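A curated `models` block for a shared instance might look like this sketch. Every model ID below except `glm-5.1` is a hypothetical placeholder; substitute real IDs from melious.ai/hub/models:

```yaml
models:
  default:
    - "glm-5.1"                # safe general-chat default
    - "example-code-model"     # placeholder: swap in your code-focused pick
    - "example-long-context"   # placeholder: swap in your long-context pick
  fetch: true                  # keeps the full catalog available to power users
```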
Multi-endpoint patterns
If you want Melious alongside a self-hosted Ollama or another cloud provider:
```yaml
endpoints:
  custom:
    - name: "Melious"
      apiKey: "${MELIOUS_API_KEY}"
      baseURL: "https://api.melious.ai/v1"
      models: { default: ["glm-5.1"], fetch: true }
    - name: "Ollama (local)"
      apiKey: "ollama"
      baseURL: "http://ollama:11434/v1"
      models: { default: ["llama3.2"], fetch: true }
```
Users switch between them in the endpoint selector. We stay the default for anything cloud-hosted; Ollama handles local work.
What's different
- No cross-user prompt caching. LibreChat's `cache` setting is for metadata/YAML reload; per-prompt caching (as OpenAI and Anthropic expose it) isn't routed through us.
- Vision inputs work for vision-capable models. LibreChat passes image blocks unchanged.
- Agents are first-class. LibreChat ships an Agent Builder with `execute_code`, `file_search`, `actions`, `web_search`, `artifacts`, `ocr`, and `context` capabilities. Tool calls flow through our OpenAI-compatible `tool_calls` schema unchanged. Pick models with strong tool support — check `_meta.capabilities.tool_use` on `GET /v1/models?include_meta=true`.
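A minimal sketch of that capability check, filtering a `GET /v1/models?include_meta=true` response down to tool-capable model IDs. The payload shape is an assumption based on the fields named above (`_meta.capabilities.tool_use`); verify it against a live response:

```python
def tool_capable(models_payload: dict) -> list[str]:
    """Return IDs of models whose metadata advertises tool_use support."""
    ids = []
    for model in models_payload.get("data", []):
        caps = model.get("_meta", {}).get("capabilities", {})
        if caps.get("tool_use"):
            ids.append(model["id"])
    return ids

# Hand-written example payload (not a real API response):
sample = {
    "data": [
        {"id": "glm-5.1", "_meta": {"capabilities": {"tool_use": True}}},
        {"id": "tiny-chat", "_meta": {"capabilities": {"tool_use": False}}},
    ]
}
print(tool_capable(sample))  # ['glm-5.1']
```

Run this against the live endpoint before adding a model to an agent-heavy default list.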
When it breaks
- Endpoint missing from dropdown — `librechat.yaml` wasn't mounted. Check `docker compose config` to see the effective volumes.
- `fetch: true` returns empty — your API key doesn't have the `inference.models` scope. Add it in the dashboard, or drop `fetch: true` and manage `default` manually.
- Models show but selection fails with 404 — model IDs in `default` are stale. Let `fetch: true` do the work, or rebuild the list from `GET /v1/models`.
For errors and retry patterns, see Errors.