Melious
Integrations

n8n

Workflow automation with Melious via the OpenAI Chat Model node — credentials once, any workflow

by n8n GmbH · n8n.io

n8n is a fair-code-licensed workflow automation platform: a visual node-based editor for gluing together HTTP APIs, SaaS tools, databases, AI models, and custom code into scheduled or event-driven workflows. Self-hostable, source-available, with a strong focus on technical users who'd rather write a JavaScript expression than click through another wizard. For LLM work it ships an OpenAI Chat Model node, an Embeddings OpenAI node, and an AI Agent node — all of which accept custom base URLs via the credential system. Configure a Melious credential once and every workflow — chat, embeddings, agent loops — routes through us. We recommend treating that credential as shared infrastructure, not per-workflow: rotate the key in one place, every workflow follows.

Setup

Create an OpenAI credential

In n8n: Credentials → New → OpenAI. Name it Melious.

Set the base URL and key

  • API Key: sk-mel-<YOUR_API_KEY>
  • Base URL: https://api.melious.ai/v1

Save. n8n tests the credential by calling GET /v1/models. A green check means we're wired.
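That credential test is easy to reproduce outside n8n when you're debugging. A minimal sketch of the same request using only the Python standard library (the key below is a placeholder):

```python
import urllib.request

BASE_URL = "https://api.melious.ai/v1"   # no trailing slash (see "When it breaks")
API_KEY = "sk-mel-XXXX"                  # placeholder key

def models_request(base_url: str, api_key: str) -> urllib.request.Request:
    # Same call n8n makes on save: GET {base_url}/models with a bearer token.
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = models_request(BASE_URL, API_KEY)
# urllib.request.urlopen(req) would return the model list for a valid key.
```

If this call succeeds from your shell but the n8n test still fails, the problem is the value pasted into the credential form, not the account.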

Use it in a workflow

Add an OpenAI Chat Model node (the one AI-agent nodes use), set Credential to Melious, and pick a model from the dropdown — it auto-populates from GET /v1/models against your credential's Base URL — or type the model ID directly (e.g. glm-5.1).

Workflow patterns that work well

Classify incoming emails.

  1. Email Trigger (IMAP) → fires on new mail
  2. OpenAI Chat Model → system prompt: Classify this email into one of: billing, support, sales, other. Respond with JSON {"category": "..."}; user message: the email body
  3. Switch → routes on category
  4. Per-branch: forward, file, auto-reply

glm-5.1 handles this with a short prompt.
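For reference, the request body the Chat Model node ends up sending in step 2 looks roughly like this — a sketch of the standard chat-completions payload; the helper name and the sample email are ours:

```python
def classification_payload(email_body: str) -> dict:
    # Chat-completions body for the classify step: the system prompt from
    # the workflow above, with the email body as the user message.
    return {
        "model": "glm-5.1",
        "messages": [
            {
                "role": "system",
                "content": (
                    "Classify this email into one of: billing, support, "
                    'sales, other. Respond with JSON {"category": "..."}'
                ),
            },
            {"role": "user", "content": email_body},
        ],
    }

payload = classification_payload("Hi, why was I charged twice this month?")
```

The Switch node in step 3 then keys on the `category` field parsed out of the model's JSON reply.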

Summarize and Slack-post RSS feeds.

  1. RSS Read
  2. OpenAI Chat Model with prompt "Summarize in two sentences"
  3. Slack

Semantic search over documents.

  1. HTTP Request to your doc source
  2. Embeddings OpenAI (credential = Melious, model = bge-m3)
  3. Store in Vector Store (Qdrant / Pinecone / Supabase)
  4. Query endpoint: another Embeddings OpenAI then a vector-store search, then an OpenAI Chat Model for the answer
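Under the hood, steps 2 and 4 are each one embeddings call plus a similarity comparison. A stdlib-only sketch of that pair — the payload shape follows the OpenAI embeddings API, and cosine similarity is what most vector stores compute at query time:

```python
import math

def embed_payload(texts: list[str]) -> dict:
    # Body the Embeddings OpenAI node POSTs to {base}/embeddings.
    return {"model": "bge-m3", "input": texts}

def cosine(a: list[float], b: list[float]) -> float:
    # Query-time similarity, reduced to its essence: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical vectors score 1.0; orthogonal vectors score 0.0.
```

The vector store handles the similarity math in practice; the sketch is only to make clear why the same embedding model must be used at index time and query time.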

AI Agent node

n8n's AI Agent node (Advanced AI category) accepts an OpenAI-compatible chat model. Wire it to the Melious credential, connect tool sub-nodes, and n8n orchestrates the tool-calling loop:

[AI Agent]
  ├── Chat Model: Melious (glm-5.1)
  ├── Tool: HTTP Request (to your API)
  ├── Tool: Google Sheets (append row)
  └── Tool: SerpAPI (web search)

Prefer models with strong tool-calling behavior — check _meta.capabilities.tool_use on GET /v1/models?include_meta=true or filter at melious.ai/hub/models. glm-5.1 is a safe default.
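Filtering on that capability flag programmatically is a one-liner. A sketch, assuming the endpoint returns an OpenAI-style `data` list — the `_meta.capabilities.tool_use` path is from above, but the sample payload (including the `tiny-chat` ID) is made up:

```python
def tool_capable(models_response: dict) -> list[str]:
    # Keep only model IDs whose _meta.capabilities.tool_use is truthy.
    return [
        m["id"]
        for m in models_response.get("data", [])
        if m.get("_meta", {}).get("capabilities", {}).get("tool_use")
    ]

# Hypothetical response from GET /v1/models?include_meta=true:
sample = {"data": [
    {"id": "glm-5.1", "_meta": {"capabilities": {"tool_use": True}}},
    {"id": "tiny-chat", "_meta": {"capabilities": {"tool_use": False}}},
]}
# tool_capable(sample) → ["glm-5.1"]
```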

Embeddings for vector stores

n8n's Embeddings OpenAI node takes the same credential. Set model to bge-m3 — multilingual, strong baseline, what we recommend for n8n vector workflows. Pair with bge-reranker-v2-m3 if you need a rerank step downstream. The node handles batching up to n8n's default chunk size.

Cost-conscious batch processing

We flag this because n8n workflows tend to pile up. A scheduled run that looked fine at 100 articles becomes a line item at 10,000. For heavy RSS / email / scraping workflows, pick a smaller model from melious.ai/hub/models and keep prompts terse.

What's different

  • Older n8n releases hard-coded the OpenAI base URL on the legacy v1 OpenAI node. Current versions (n8n 1.x and later) respect the credential's Base URL field, including DALL·E image generation. If you hit a node that doesn't, fall back to an HTTP Request node against /v1/images/generations.
  • Rate limits are per plan, not per key or per n8n instance. Concurrent workflows share the bucket. See Rate limits.
  • Response cost fields (environment_impact, billing_cost) come through in raw API responses. The OpenAI node strips them from its typed output — use a HTTP Request node if you need them inline for logging.
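If you do take the HTTP Request route for the cost fields, extraction is trivial. A sketch that assumes environment_impact and billing_cost sit at the top level of the raw JSON — the exact nesting isn't pinned down here, so adjust the path if yours differs; the sample values are invented:

```python
def cost_fields(raw_response: dict) -> dict:
    # Pull the Melious extras the typed OpenAI node strips out.
    # Assumes top-level placement on the raw chat-completion JSON.
    return {
        k: raw_response[k]
        for k in ("environment_impact", "billing_cost")
        if k in raw_response
    }

# Hypothetical raw response fragment:
raw = {"id": "chatcmpl-1", "billing_cost": 0.0004,
       "environment_impact": {"co2_g": 0.02}}
```

Feed the result into whatever logging node follows — a Set node mapping these two keys is usually enough.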

When it breaks

  • Credential test fails — key or base URL wrong. Use https://api.melious.ai/v1 exactly, no trailing slash. A trailing slash can cause path doubling like /v1/v1/models depending on n8n version, and the credential test fails with 404.
  • 404 Not Found on /v1/responses — the OpenAI Chat Model node has a "Use Responses API" toggle. If enabled, traffic lands on the Responses API, which we don't implement. Keep it off (Chat Completions is the default).
  • Workflow 429 under load — you hit per-plan rate limits. Add a Wait node (30s) inside a loop, or upgrade the plan.
  • AI Agent loops forever — the model isn't tool-calling reliably. Switch to glm-5.1 and cap Max Iterations on the agent node.
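The Wait-node fix for 429s, expressed as code for anyone calling the API directly: a hypothetical exponential-backoff wrapper, where RateLimited is a stand-in for whatever your HTTP client raises on a 429:

```python
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 response."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    # Retry on 429 with exponential backoff: 1s, 2s, 4s, ... by default.
    # An n8n Wait node inside a loop gives the same effect with a fixed delay.
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            time.sleep(base_delay * (2 ** attempt))
    return call()  # final attempt: let the error propagate
```

Because rate limits are per plan, backing off in one workflow also frees headroom for every other workflow sharing the credential.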

For error shapes and retry patterns, see Errors.
