Open source · BYOK · optional hosted

The unified interface for LLMs, on your infrastructure

Route traffic across providers and models through one OpenAI-compatible API. Bring your own keys, define fallbacks and policies, and scale from a single binary to a fleet — or let us run it for you.

Inspired by the same ideas as commercial routers — breadth of models, reliability, cost control, and clear data boundaries — with full source availability and no mandatory vendor lock-in for keys or hosting.

OpenAI-compatible · one base URL
POST /v1/chat/completions
Authorization: Bearer $YOUR_API_KEY
X-LLM-Provider: anthropic / openai / google ...
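The request shape above can be assembled in a few lines. A minimal sketch in plain Python, assuming a placeholder base URL and key; the `X-LLM-Provider` header is the routing header shown above:

```python
import json

BASE_URL = "https://your-router.company.tld"  # placeholder: your deployment

def build_chat_request(prompt, provider, model):
    """Assemble an OpenAI-compatible chat request for the router."""
    return {
        "url": f"{BASE_URL}/v1/chat/completions",
        "headers": {
            "Authorization": "Bearer $YOUR_API_KEY",  # your provider or router key
            "Content-Type": "application/json",
            "X-LLM-Provider": provider,  # hint which upstream should serve this call
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

Send the result with any HTTP client; because the surface is OpenAI-compatible, the same payload works unchanged against every upstream the router fronts.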

Model routing

Policies, fallbacks, A/B

BYOK

Keys stay on your side

Self-host or cloud

Same stack, your choice

Plug & play

Integrate in a minute

Point the OpenAI SDK at your router’s base URL — same chat completion shape as other OpenAI-compatible gateways, with routing and BYOK handled on your infrastructure.

from openai import OpenAI

# Point the standard OpenAI SDK at your router instead of api.openai.com.
client = OpenAI(
    base_url="https://your-router.company.tld/v1",
    api_key="sk-your-provider-or-router-key",
)

# Vendor-prefixed model IDs tell the router which upstream to use.
response = client.chat.completions.create(
    model="anthropic/claude-opus-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)

One API

OpenAI-compatible surface

Many models

Map names to any upstream

BYOK

Provider keys stay under your control, even hosted

OSS + hosted

Same core, your deployment choice

One API for any model

Point your existing OpenAI SDK or HTTP client at Andain Router. Swap model IDs, attach routing headers, and reach Anthropic, Google, Bedrock, open-weight stacks, or your private inference — without rewriting client code for each vendor.
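Routing hints can ride along on normal SDK calls: the OpenAI Python SDK accepts per-request `extra_headers`. A sketch, where `X-LLM-Provider` is the header shown earlier and `X-Route-Tenant` is a hypothetical per-tenant header you would define in your router config:

```python
def routing_headers(provider, tenant=None):
    """Build per-request routing hints for the router.

    X-LLM-Provider selects the upstream; X-Route-Tenant is a
    hypothetical example of a per-tenant override header.
    """
    headers = {"X-LLM-Provider": provider}
    if tenant is not None:
        headers["X-Route-Tenant"] = tenant
    return headers

# Usage with the OpenAI client configured above:
# client.chat.completions.create(
#     model="anthropic/claude-opus-4",
#     messages=[{"role": "user", "content": "Hello!"}],
#     extra_headers=routing_headers("anthropic", tenant="acme"),
# )
```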

Unified API surface

Unified surface

Keep a single base URL and request shape across chat, embeddings, and tools-capable models where upstreams support them.


Model routing

Visualize and control how requests map to providers: weighted splits, canary releases, and per-tenant overrides.
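At its core, a weighted split or canary release is a weighted random choice over upstreams. An illustrative sketch only; the router's actual policy engine and config format may differ, and the model names and weights here are examples:

```python
import random

# Example ladder: 90% of traffic to a stable model, 10% canary to a newer one.
ROUTES = [
    ("anthropic/claude-opus-4", 0.9),
    ("anthropic/claude-sonnet-4", 0.1),
]

def pick_route(routes, rng=random):
    """Pick a model ID according to the configured traffic weights."""
    models = [model for model, _ in routes]
    weights = [weight for _, weight in routes]
    return rng.choices(models, weights=weights, k=1)[0]
```

A per-tenant override is then just a different `routes` list selected by tenant ID before the weighted draw.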

Global upstream mesh

Broad upstream support

Compose gateways, VPC endpoints, and regional providers the same way you would in a managed router — but with configs you can version in Git.

Higher availability

When a region or provider degrades, automatic fallbacks and health-aware routing keep applications responsive: you define the ladder, and the router walks it transparently to callers.

Fallback ladder

Resilience policies

Retry with backoff, alternate models, and circuit breaking tuned for streaming and long-context workloads.

Low-latency path

Performance where you need it

Run at the edge of your network or inside your VPC so latency between your apps and the router stays minimal.

Price, performance — and your keys

With BYOK you pay providers directly at their published rates. The router focuses on observability, policy, and safe multi-tenant operation — not reselling tokens.

Keys stay yours

Bring your own keys

Supply API keys per workspace, environment, or customer. Optional envelope encryption and external secret stores for regulated teams.

Policy gates

Custom data policies

Pin prompts and completions to approved model lists, geographies, or on-prem inference — so data only flows where your policies allow.
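At its simplest, an allowlist gate is a membership check enforced before any request leaves your network. A sketch under that assumption; real policy objects also carry geography and residency rules, and the model names here are examples:

```python
# Example approved list; in practice this comes from versioned router config.
APPROVED_MODELS = {"anthropic/claude-opus-4", "ollama/llama3-on-prem"}

def enforce_model_policy(model, approved=APPROVED_MODELS):
    """Reject requests for models outside the approved list."""
    if model not in approved:
        raise PermissionError(f"model {model!r} is not on the approved list")
    return model
```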

Deploy anywhere. Route everywhere.

Self-host the same stack thousands of companies trust for unified LLM access, or choose managed hosting for upgrades, backups, and SLO-backed operations — still with BYOK as the default trust model.

Self-hosted

Run containers or binaries on your cloud, air-gapped network, or Kubernetes. Audit every line of the router; integrate with your identity, logging, and cost tooling.

Managed hosting

We operate HA clusters, patching, and capacity — you keep provider keys and data-plane controls. Ideal when you want SaaS convenience without giving up BYOK.

Why teams choose Andain Router

Product and platform teams that already standardize on OpenAI-compatible clients use Andain Router to consolidate governance, reduce bespoke glue code, and keep provider choice open.

  • Ship faster

    One integration path for new models and providers; route changes happen server-side.

  • Stay compliant

    Model allowlists, residency rules, and audit logs align with security reviews.

  • Control spend

    Per-key budgets, quotas, and usage telemetry without proxying billing through a black box.

  • Operate transparently

    Open source means your SREs can trace behavior, contribute fixes, and pin releases.

  • Scale patterns

    Multi-tenant headers, per-customer routes, and canary traffic splits out of the box.

  • Community & extensibility

    Plugins and upstream modules evolve in public; hosted tiers track stable releases.

Connect the providers you already use

Mix commercial APIs, sovereign clouds, and self-managed inference behind one router — names shown for illustration; exact connectors depend on release and configuration.

OpenAI-compatible · Anthropic · Google AI · Amazon Bedrock · Azure OpenAI · Groq
Together · Mistral · DeepSeek · Ollama & vLLM · Custom upstream

Familiar with unified routers like OpenRouter? Andain Router applies the same broad ideas — one API, many models, routing and resilience — as open source software with BYOK and your choice of hosting.

Ready to unify your LLM traffic?

Clone the repository for self-hosting, or talk to us about managed hosting with the same BYOK guarantees.