n8n
Workflow automation with Melious via the OpenAI Chat Model node — credentials once, any workflow
n8n is a fair-code-licensed workflow automation platform: a visual node-based editor for gluing together HTTP APIs, SaaS tools, databases, AI models, and custom code into scheduled or event-driven workflows. Self-hostable, source-available, with a strong focus on technical users who'd rather write a JavaScript expression than click through another wizard. For LLM work it ships an OpenAI Chat Model node, an Embeddings OpenAI node, and an AI Agent node — all of which accept custom base URLs via the credential system. Configure a Melious credential once and every workflow — chat, embeddings, agent loops — routes through us. We recommend treating that credential as shared infrastructure, not per-workflow: rotate the key in one place, every workflow follows.
Setup
Create an OpenAI credential
In n8n: Credentials → New → OpenAI. Name it Melious.
Set the base URL and key
- API Key: `sk-mel-<YOUR_API_KEY>`
- Base URL: `https://api.melious.ai/v1`
Save. n8n tests the credential by calling GET /v1/models. A green check means we're wired.
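If the credential test fails and you want to debug outside n8n, the same call is easy to reproduce. A minimal standard-library sketch: it builds the `GET /v1/models` request n8n sends (the key below is a placeholder, and the request is constructed but not fired here):

```python
import urllib.request

BASE_URL = "https://api.melious.ai/v1"  # no trailing slash
API_KEY = "sk-mel-<YOUR_API_KEY>"       # placeholder: substitute your key

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build the GET /v1/models call that n8n's credential test makes."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request(BASE_URL, API_KEY)
# urllib.request.urlopen(req) with a valid key returns the model list
```

With a real key, `urlopen(req)` should return the same model list the n8n dropdown shows.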
Use it in a workflow
Add an OpenAI Chat Model node (the one AI-agent nodes use), set Credential to Melious, and pick a model from the dropdown — it auto-populates from GET /v1/models against your credential's Base URL — or type the model ID directly (e.g. glm-5.1).
Workflow patterns that work well
Classify incoming emails.
1. Email Trigger (IMAP) → fires on new mail
2. OpenAI Chat Model → system prompt: `Classify this email into one of: billing, support, sales, other. Respond with JSON {"category": "..."}`; user message: the email body
3. Switch → routes on `category`
4. Per-branch: forward, file, auto-reply
glm-5.1 handles this with a short prompt.
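The classification step is easy to prototype outside n8n. A sketch of the request body the Chat Model node would send, plus a defensive parse of the reply; the function names are illustrative, not part of n8n:

```python
import json

CATEGORIES = {"billing", "support", "sales", "other"}

def classify_request(email_body: str) -> dict:
    """Chat-completion body mirroring the workflow's classification prompt."""
    return {
        "model": "glm-5.1",
        "messages": [
            {"role": "system", "content": (
                "Classify this email into one of: billing, support, sales, other. "
                'Respond with JSON {"category": "..."}'
            )},
            {"role": "user", "content": email_body},
        ],
    }

def parse_category(completion: dict) -> str:
    """Extract the category from a chat-completion response, defaulting to 'other'."""
    content = completion["choices"][0]["message"]["content"]
    try:
        category = json.loads(content).get("category", "other")
    except json.JSONDecodeError:
        return "other"
    return category if category in CATEGORIES else "other"
```

Defaulting to `other` on malformed output keeps the Switch branch from stalling the workflow when the model drifts off-format.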
Summarize and Slack-post RSS feeds.
RSS Read → OpenAI Chat Model with prompt "Summarize in two sentences" → Slack
Semantic search over documents.
1. HTTP Request to your doc source
2. Embeddings OpenAI (credential = Melious, model = `bge-m3`)
3. Store in a Vector Store (Qdrant / Pinecone / Supabase)
4. Query endpoint: another Embeddings OpenAI, then a vector-store search, then an OpenAI Chat Model for the answer
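The vector-store search in the query path reduces to cosine similarity over `bge-m3` vectors. A dependency-free sketch of that ranking step, for intuition; in the workflow the store (Qdrant / Pinecone / Supabase) does this for you:

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec: list, doc_vecs: list, k: int = 3) -> list:
    """Indices of the k stored vectors most similar to the query vector."""
    scored = sorted(
        enumerate(doc_vecs),
        key=lambda iv: cosine(query_vec, iv[1]),
        reverse=True,
    )
    return [i for i, _ in scored[:k]]
```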
AI Agent node
n8n's AI Agent node (Advanced AI category) accepts an OpenAI-compatible chat model. Wire it to the Melious credential, connect tool sub-nodes, and n8n orchestrates the tool-calling loop:
```text
[AI Agent]
├── Chat Model: Melious (glm-5.1)
├── Tool: HTTP Request (to your API)
├── Tool: Google Sheets (append row)
└── Tool: SerpAPI (web search)
```

Prefer models with strong tool-calling behavior — check `_meta.capabilities.tool_use` on `GET /v1/models?include_meta=true` or filter at melious.ai/hub/models. glm-5.1 is a safe default.
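Under the hood the agent loop speaks the OpenAI tools format. A rough sketch of the per-turn request body n8n assembles from the connected tool sub-nodes; the tool definition below is illustrative, not n8n's exact schema:

```python
def agent_request(messages: list) -> dict:
    """One turn of the tool-calling loop: messages so far plus tool schemas."""
    return {
        "model": "glm-5.1",
        "messages": messages,
        "tools": [{
            "type": "function",
            "function": {
                "name": "http_request",  # hypothetical name for the HTTP Request tool
                "description": "Call your API",
                "parameters": {
                    "type": "object",
                    "properties": {"url": {"type": "string"}},
                    "required": ["url"],
                },
            },
        }],
    }
```

When the response contains `tool_calls`, n8n runs the matching sub-node, appends the result as a `tool` message, and repeats until the model answers in plain text or Max Iterations is hit.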
Embeddings for vector stores
n8n's Embeddings OpenAI node takes the same credential. Set model to bge-m3 — multilingual, strong baseline, what we recommend for n8n vector workflows. Pair with bge-reranker-v2-m3 if you need a rerank step downstream. The node handles batching up to n8n's default chunk size.
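If you sidestep the node and batch yourself (e.g. via an `HTTP Request` node), the shape of a batched `/v1/embeddings` call is simple. A sketch; the batch size here is an assumption for illustration, not n8n's internal value:

```python
def embedding_batches(chunks: list, batch_size: int = 512):
    """Split document chunks into request-sized bodies for /v1/embeddings."""
    for i in range(0, len(chunks), batch_size):
        yield {"model": "bge-m3", "input": chunks[i:i + batch_size]}
```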
Cost-conscious batch processing
We flag this because n8n workflows tend to pile up. A scheduled run that looked fine at 100 articles becomes a line item at 10,000. For heavy RSS / email / scraping workflows, pick a smaller model from melious.ai/hub/models and keep prompts terse.
What's different
- Older n8n releases hard-coded the OpenAI base URL on the legacy v1 OpenAI node. Current versions (n8n ≥ 1.x) respect the credential's Base URL field, including DALL·E image generation. If you hit a node that doesn't, fall back to an `HTTP Request` node against `/v1/images/generations`.
- Rate limits are per plan, not per key or per n8n instance. Concurrent workflows share the bucket. See Rate limits.
- Response cost fields (`environment_impact`, `billing_cost`) come through in raw API responses. The OpenAI node strips them from its typed output — use an `HTTP Request` node if you need them inline for logging.
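Pulling those fields out of an `HTTP Request` node's raw JSON is a one-liner in a downstream Code node. A Python sketch of the extraction (field names as documented above; the helper name is ours):

```python
def cost_fields(raw_response: dict) -> dict:
    """Pick the Melious cost fields off a raw chat-completion response.
    These keys exist only on the raw API response, not the node's typed output."""
    return {
        "billing_cost": raw_response.get("billing_cost"),
        "environment_impact": raw_response.get("environment_impact"),
    }
```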
When it breaks
- Credential test fails — key or base URL wrong. Use `https://api.melious.ai/v1` exactly, no trailing slash. A trailing slash can cause path doubling like `/v1/v1/models` depending on the n8n version, and the credential test fails with 404.
- 404 Not Found on `/v1/responses` — the OpenAI Chat Model node has a "Use Responses API" toggle. If enabled, traffic lands on the Responses API, which we don't implement. Keep it off (Chat Completions is the default).
- Workflow 429 under load — you hit per-plan rate limits. Add a `Wait` node (30s) inside a loop, or upgrade the plan.
- AI Agent loops forever — the model isn't tool-calling reliably. Switch to `glm-5.1` and cap Max Iterations on the agent node.
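For the trailing-slash failure mode, normalizing the base URL before appending paths is a cheap guard in any custom `HTTP Request` or Code node. A sketch (the exact doubling mechanism varies by client version, so this defends rather than reproduces it):

```python
def models_endpoint(base_url: str) -> str:
    """Append /models after stripping any trailing slash, so both
    'https://api.melious.ai/v1' and '.../v1/' yield the same URL."""
    return base_url.rstrip("/") + "/models"
```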
Errors and retry patterns: Errors.