Melious
Integrations

Vercel AI SDK

createOpenAICompatible with our base URL — streaming, tool calls, generateObject, all on European infrastructure

Vercel
by Vercel · ai-sdk.dev

The Vercel AI SDK is a framework-agnostic TypeScript toolkit for building AI apps. Primitives: streamText, generateText, generateObject, embed, plus React/Svelte/Vue hooks like useChat and useCompletion. Providers are pluggable — @ai-sdk/openai, @ai-sdk/openai-compatible, @ai-sdk/anthropic, plus community modules — each accepting a baseURL so any OpenAI-shape server slots in. That's why Melious drops in without any SDK fork: create one provider with our base URL, import it anywhere, and the SDK primitives that hit Chat Completions all work unchanged.

This guide targets AI SDK v5. npm install ai resolves to v5; older v4 examples (e.g. parameters:, maxSteps, toDataStreamResponse) won't compile against current packages.

Setup

Install

npm install ai @ai-sdk/openai-compatible zod

(Substitute pnpm or bun as needed.)

Add your API key to .env.local:

.env.local
MELIOUS_API_KEY=sk-mel-<YOUR_API_KEY>

Create the provider

One file, reused across your app:

lib/ai/melious.ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

export const melious = createOpenAICompatible({
  name: 'melious',
  baseURL: 'https://api.melious.ai/v1',            
  apiKey: process.env.MELIOUS_API_KEY,             
});

Every melious(modelId) call returns a Chat Completions model instance the SDK's primitives accept.

We recommend @ai-sdk/openai-compatible over @ai-sdk/openai because the OpenAI provider's default factory in v5 targets the Responses API (/v1/responses), which Melious does not implement. The compatible provider always speaks Chat Completions. If you'd rather stay on @ai-sdk/openai, use melious.chat('<MODEL_ID>') everywhere instead of melious('<MODEL_ID>') to force the chat shape.
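If you do take the @ai-sdk/openai route, the provider setup is nearly identical; this is a sketch of that variant (same base URL and env var as above), with .chat(...) pinning every request to the Chat Completions shape:

```typescript
// lib/ai/melious-openai.ts — illustrative alternative to the
// @ai-sdk/openai-compatible setup above, using @ai-sdk/openai instead.
import { createOpenAI } from '@ai-sdk/openai';

export const melious = createOpenAI({
  baseURL: 'https://api.melious.ai/v1',
  apiKey: process.env.MELIOUS_API_KEY,
});

// melious('<MODEL_ID>') would default to the Responses API in v5 and 404.
// melious.chat('<MODEL_ID>') forces POST /v1/chat/completions instead.
```

Everywhere the rest of this guide writes melious('<MODEL_ID>'), you would write melious.chat('<MODEL_ID>') with this setup.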

First call

import { streamText } from 'ai';
import { melious } from '@/lib/ai/melious';

const result = streamText({
  model: melious('<MODEL_ID>'),
  prompt: 'Name three Hanseatic cities.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

Hamburg, Lübeck, Bremen — if that's roughly what streams back, the provider is wired.

Inside a Next.js route handler

Return the stream directly. In v5 the SDK's UI message stream replaces the old data stream:

app/api/chat/route.ts
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { melious } from '@/lib/ai/melious';

export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
  const result = streamText({
    model: melious('<MODEL_ID>'),
    messages: convertToModelMessages(messages),
  });
  return result.toUIMessageStreamResponse();
}

The useChat hook on the client side picks up that stream with no changes.
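For reference, a minimal sketch of that client component (file and component names are illustrative; in v5, useChat lives in @ai-sdk/react, messages carry parts, and input state is managed by your component rather than the hook):

```typescript
// app/chat.tsx — illustrative client for the /api/chat route above.
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export function Chat() {
  const { messages, sendMessage } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}:{' '}
          {m.parts.map((p) => (p.type === 'text' ? p.text : '')).join('')}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
      </form>
    </div>
  );
}
```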

Structured outputs

generateObject validates responses against a Zod schema:

import { generateObject } from 'ai';
import { z } from 'zod';
import { melious } from '@/lib/ai/melious';

const { object } = await generateObject({
  model: melious('<MODEL_ID>'),
  schema: z.object({
    cities: z.array(z.object({
      name: z.string(),
      league: z.string(),
    })),
  }),
  prompt: 'List three Hanseatic cities.',
});

This uses JSON mode under the hood, which Melious supports on every chat model that advertises structured-output support.

Tool calling

import { streamText, stepCountIs, tool } from 'ai';
import { z } from 'zod';
import { melious } from '@/lib/ai/melious';

const result = streamText({
  model: melious('<MODEL_ID>'),
  prompt: 'What is the weather in Hamburg?',
  tools: {
    getWeather: tool({
      description: 'Get the weather for a city',
      inputSchema: z.object({ city: z.string() }),         
      execute: async ({ city }) => ({ city, tempC: 14, sky: 'overcast' }),
    }),
  },
  stopWhen: stepCountIs(5),                                
});

Multi-step tool calls, parallel_tool_calls, and tool_choice all pass through to our Chat Completions implementation unchanged. v5 renamed parameters to inputSchema and replaced maxSteps: N with stopWhen: stepCountIs(N).

Embeddings

import { embed } from 'ai';
import { melious } from '@/lib/ai/melious';

const { embedding } = await embed({
  model: melious.textEmbeddingModel('<EMBEDDING_MODEL_ID>'),
  value: 'Hello, Hamburg.',
});

Batch variants (embedMany) are supported the same way. Browse melious.ai/hub/models for available embedding models.

Image generation

@ai-sdk/openai-compatible exposes provider.imageModel(...). It calls our POST /v1/images/generations endpoint, which speaks the OpenAI image shape:

import { experimental_generateImage as generateImage } from 'ai';
import { melious } from '@/lib/ai/melious';

const { images } = await generateImage({
  model: melious.imageModel('<IMAGE_MODEL_ID>'),
  prompt: 'A Hanseatic harbor at dawn.',
  size: '1024x1024',
});

See Images reference for parameter details.

What's different

  • No Responses API. The SDK's responses primitive doesn't route through us — we implement Chat Completions and the Anthropic Messages shape, not Responses. Use streamText / generateText / generateObject (and prefer @ai-sdk/openai-compatible, which always emits Chat Completions calls).
  • Custom response fields. environment_impact and billing_cost ride on raw responses but aren't surfaced by the SDK's typed response objects. Use experimental_telemetry to log them, or pull them via a raw fetch if you need them inline. Aggregated values appear in your usage dashboard.
  • Tool calling on supported models only. The OpenAI-compatible tool_calls schema works against any chat model that advertises tool support — check _meta.capabilities.tool_use on GET /v1/models?include_meta=true or filter at melious.ai/hub/models.
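The capability check in the last bullet is a plain fetch plus a filter; the sketch below assumes the _meta shape named above (a Melious extension, not part of the AI SDK):

```typescript
// Sketch: pick out tool-capable models from GET /v1/models?include_meta=true.
// The _meta.capabilities.tool_use field is the Melious extension described above.
type MeliousModel = {
  id: string;
  _meta?: { capabilities?: { tool_use?: boolean } };
};

export function toolCapableIds(models: MeliousModel[]): string[] {
  return models
    .filter((m) => m._meta?.capabilities?.tool_use === true)
    .map((m) => m.id);
}

// Against the live endpoint (requires MELIOUS_API_KEY):
// const res = await fetch('https://api.melious.ai/v1/models?include_meta=true', {
//   headers: { Authorization: `Bearer ${process.env.MELIOUS_API_KEY}` },
// });
// const { data } = await res.json();
// console.log(toolCapableIds(data));
```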

When it breaks

  • 404 Not Found on /v1/responses — you're on @ai-sdk/openai v5 and used melious('<MODEL_ID>'), which now defaults to the Responses API. Either switch to @ai-sdk/openai-compatible (recommended) or call melious.chat('<MODEL_ID>') everywhere.
  • OPENAI_API_KEY is not set error — you used the default openai provider instead of the Melious one you created. The SDK's fallback is to look up OPENAI_API_KEY. Import melious explicitly everywhere.
  • Schema validation fails in generateObject — the model returned something not matching the Zod schema. Stronger models hit schemas more reliably; check the structured-output capability flag for each model on melious.ai/hub/models.
  • Rate limit errors under load — every request counts against the same per-plan bucket. See Rate limits.

Errors and retry patterns: Errors.
