Dedalus Labs
Dedalus API

One API.
Every model.

A unified, model-agnostic gateway that routes to OpenAI, Anthropic, Google, and more. Add MCP tools from our marketplace with a single parameter.

Read the docs · Get an API key
main.py
from dedalus_labs import Dedalus

client = Dedalus(api_key="your-api-key")

response = client.chat.completions.create(
    model="openai/gpt-5",
    messages=[
        {"role": "user", "content": "Search for the latest AI news"}
    ],
    mcp_servers=["tsion/exa"],
    stream=True,
)

for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
import Dedalus, { DedalusRunner } from 'dedalus-labs';

const client = new Dedalus();
const runner = new DedalusRunner(client);

const response = await runner.run({
  input: "Ship a release",
  model: ['gpt-5.2', 'claude-opus-4.5'],
  mcpServers: ['github', 'brave-search'],
  tools: ['search_files', 'find_image']
});
https://api.dedaluslabs.ai

How the API works

Your app talks to one endpoint. We handle authentication, provider routing, MCP tool execution, and streaming -- all behind a single OpenAI-compatible interface.

Request flow: Your App (Python / TypeScript) → Dedalus API (Auth: API keys & RBAC · Router: model selection · MCP: tool execution · Stream: SSE responses) → AI providers (Anthropic Claude, Google Gemini, DeepSeek, Mistral) and MCP servers (Slack, GitHub, Notion, Linear).

Everything you need in one gateway

Drop-in OpenAI compatibility plus the features you actually need for production agents.

Multi-provider routing

Access OpenAI, Anthropic, Google, xAI, Mistral, DeepSeek, and more through a single endpoint. Swap models with one parameter change.
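The one-parameter swap can be seen by comparing request payloads: everything below is identical except the namespaced model string. This is a minimal sketch using plain dicts in the OpenAI-compatible request shape; the `anthropic/claude-opus-4-6` slug is taken from the BYOK example further down.

```python
# Sketch of provider swapping: the request body is identical
# except for the namespaced model identifier.

def make_request(model: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Summarize today's AI news"}],
    }

gpt_request = make_request("openai/gpt-5")
claude_request = make_request("anthropic/claude-opus-4-6")

# Only the "model" field differs between the two payloads.
```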

Native MCP support

Attach hosted MCP servers by slug. The API handles discovery, auth, and tool execution server-side so your client stays thin.

Bring Your Own Key

Pass your own provider keys via headers. Skip our billing, use your own quotas, and still benefit from MCP orchestration and streaming.

Structured outputs

Enforce JSON schemas on responses. Works with strict and non-strict validation across OpenAI and Google models.
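As a sketch of what a strict-schema request might look like, assuming the gateway passes through OpenAI's `response_format` / `json_schema` convention unchanged (the exact field names are an assumption, not confirmed by this page):

```python
# Hypothetical structured-output request following OpenAI's json_schema shape.
payload = {
    "model": "openai/gpt-5",
    "messages": [{"role": "user", "content": "Extract the city and temperature."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "weather_report",
            "strict": True,  # strict validation; set False for non-strict
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "temp_c": {"type": "number"},
                },
                "required": ["city", "temp_c"],
                "additionalProperties": False,
            },
        },
    },
}
```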

Vision & multimodal

Send images alongside text. Supports GPT-5 vision, Gemini 3.0 multimodal, Claude Opus 4.6 vision, and image generation via DALL-E.
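A sketch of a mixed image-and-text message, assuming the OpenAI vision content-part format carries through the gateway unchanged (the URL is a placeholder):

```python
# Hypothetical multimodal request: a text part plus an image_url part
# in a single user message, per the OpenAI vision message format.
payload = {
    "model": "openai/gpt-5",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.png"}},
        ],
    }],
}
```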

Real-time streaming

SSE streaming across all providers. Receive incremental deltas as they're generated -- works identically whether you use our key or yours.

Multi-model handoffs

Chain models within a single conversation. Route between fast and capable models based on task complexity with configurable turn limits.

Production-ready

Tier-based rate limiting, request ID tracking, structured error codes, usage metering, and organization-level isolation out of the box.

Every major provider, one interface

Switch between models with a single parameter. No SDK changes, no provider-specific code. The same request format works everywhere.

OpenAI: GPT-5, o3, DALL-E, Whisper
Anthropic: Claude Opus 4.6, Sonnet, Haiku
Google: Gemini 3.0 Flash, Pro
xAI: Grok 4, Grok 4 Mini
DeepSeek: Chat, Coder, Reasoner
Mistral: Large, Medium

Plus Groq, Fireworks, and more. New providers added regularly.

Dedalus Auth

Your secrets never leave your machine

DAuth is our managed authorization system. Remote MCP servers use your local credentials without ever seeing them -- credentials are isolated in a sealed execution boundary.

Zero secret leakage

Credentials are encrypted client-side and decrypted only inside a sealed execution boundary. Your code never sees raw secrets.

Sender-constrained tokens

Demonstrating Proof-of-Possession (DPoP) binds tokens cryptographically to the client. A stolen token is useless without the private key.

Networkless execution

Credential decryption and API calls happen entirely within an isolated enclave. Raw secrets never traverse the network.

Learn more about DAuth (OAuth 2.1 · Enclave · DPoP)

Lifecycle of a request

Every API call follows the same five-stage pipeline.

1
Authenticate

Validate API key, check org status, load tier limits and rate quotas.

2
Route

Select the target provider, map model parameters, apply BYOK overrides if present.

3
Execute tools

Resolve MCP slugs, establish server connections, run tool calls server-side.

4
Stream

SSE stream incremental deltas back to the client as they are generated.

5
Respond

Meter token usage, emit rate-limit headers, return the final structured response.
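The five stages above can be pictured as a linear pipeline. The sketch below is illustrative only: the function names and the fake delta stream are ours, not the gateway's actual code.

```python
# Toy model of the five-stage pipeline: each stage carries the request
# forward; `stream` yields fake deltas standing in for SSE chunks.

def authenticate(req):
    req["org"] = "acme"                            # validate key, load tier limits
    return req

def route(req):
    req["provider"] = req["model"].split("/")[0]   # "openai/gpt-5" -> "openai"
    return req

def execute_tools(req):
    req["tools_resolved"] = list(req.get("mcp_servers", []))
    return req

def stream(req):
    yield "partial "                               # incremental deltas...
    yield "response"

def respond(chunks):
    return "".join(chunks)                         # meter usage, emit headers

req = {"model": "openai/gpt-5", "mcp_servers": ["tsion/exa"]}
final = respond(stream(execute_tools(route(authenticate(req)))))
```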


Start in minutes

Drop-in compatible with the OpenAI SDK. Switch your base URL and you're done.

from dedalus_labs import Dedalus

client = Dedalus(api_key="your-api-key")

response = client.chat.completions.create(
    model="openai/gpt-5",
    messages=[
        {"role": "user", "content": "Search for the latest AI news"}
    ],
    mcp_servers=["tsion/exa"],
    stream=True,
)

for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
import Dedalus, { DedalusRunner } from 'dedalus-labs';

const client = new Dedalus();
const runner = new DedalusRunner(client);

const response = await runner.run({
  input: "Ship a release",
  model: ['gpt-5.2', 'claude-opus-4.5'],
  mcpServers: ['github', 'brave-search'],
  tools: ['search_files', 'find_image']
});
curl https://api.dedaluslabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $DEDALUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5",
    "messages": [
      {"role": "user", "content": "Search for the latest AI news"}
    ],
    "mcps": ["tsion/exa"],
    "stream": true
  }'
# Bring Your Own Key -- use your provider credentials
# while still leveraging MCP tools and streaming

curl https://api.dedaluslabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $DEDALUS_API_KEY" \
  -H "X-Provider: anthropic" \
  -H "X-Provider-Key: $ANTHROPIC_API_KEY" \
  -H "X-Provider-Model: anthropic/claude-opus-4-6" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the weather in Tokyo?"}
    ],
    "mcps": ["tsion/exa"]
  }'

Endpoints at a glance

Chat completions, embeddings, image generation, audio, OCR, and more. Every endpoint follows the same auth and streaming patterns.

Core

POST /v1/chat/completions -- Chat with any model, stream responses, call MCP tools
GET /v1/models -- List all available models across providers
POST /v1/embeddings -- Generate vector embeddings with OpenAI or Google
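A raw-HTTP sketch of the embeddings route using only the standard library. The model slug `openai/text-embedding-3-small` is an assumed example, and the request is left unsent so no key is needed:

```python
import json
import urllib.request

# Build (but don't send) an embeddings request against the gateway.
body = {
    "model": "openai/text-embedding-3-small",  # assumed model slug
    "input": ["The quick brown fox"],
}
req = urllib.request.Request(
    "https://api.dedaluslabs.ai/v1/embeddings",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json",
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     print(json.load(resp))
```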

Media

POST /v1/images/generations -- Generate images with DALL-E and GPT Image
POST /v1/audio/speech -- Text-to-speech, transcription, and translation
POST /v1/ocr -- Extract text from images and documents

Management

POST /v1/private/keys -- Create, rotate, and manage API keys
GET /v1/private/subscription/status -- Check subscription tier, rate limits, and usage
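A similar unsent sketch for the subscription-status route. The path and auth header come from the table above; the response shape is not documented on this page, so we only prepare the request:

```python
import urllib.request

# Prepare (but don't send) a subscription status check.
req = urllib.request.Request(
    "https://api.dedaluslabs.ai/v1/private/subscription/status",
    headers={"Authorization": "Bearer your-api-key"},
)
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     print(resp.read().decode())
```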

Ship your first request in 30 seconds

Get an API key, pick a model, attach MCP tools. That's it.

Get started · Read the docs
Dedalus Labs

The drop-in MCP gateway that connects any LLM to any MCP server, local or fully managed on our marketplace. We take care of hosting, scaling, and model handoffs, so you can ship production-grade agents without touching Docker or YAML.

Product

  • Pricing
  • API
  • Documentation

Company

  • About
  • Blog
  • Careers
  • Contact

Legal

  • Privacy Policy
  • Terms of Service

© 2026 Dedalus Labs. All rights reserved.
