
AI Gateway Overview

Use popular AI models in your code, without needing to manage API keys or external accounts.

This feature is in Beta and is available on Credit-based plans only, including the Free, Personal, and Pro plans. If you are on an Enterprise plan and you’re interested, reach out to your Account Manager.

The AI Gateway service simplifies technical and operational concerns when using AI inference in your code, by removing the need to:

  • Open an account with each provider you want to use.
  • Maintain a separate credit balance with each provider.
  • Copy the API key from each provider to your projects on Netlify.

By default, Netlify automatically sets the appropriate environment variables that AI client libraries typically use for configuration, in all Netlify compute contexts (e.g., Netlify Functions, Edge Functions, Preview Server, etc.).

These variables include:

  • API keys for OpenAI, Anthropic, and Google Gemini.
  • A custom base URL for each provider, to route requests via the AI Gateway service.

These variables are picked up by the official client libraries of these providers, so no extra configuration is necessary. Alternatively, if you make AI calls via a provider’s REST API, these values are easy to incorporate in your code.
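For example, here is a minimal sketch of incorporating the injected values into a direct OpenAI REST call. It assumes the injected OPENAI_BASE_URL includes the API version prefix (e.g. ends in `/v1`), as the official client library expects:

```javascript
// Read the variables Netlify injects for OpenAI.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const OPENAI_BASE_URL = process.env.OPENAI_BASE_URL;

// Build the headers for an OpenAI-style authenticated request.
function openaiHeaders(apiKey) {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  };
}

async function callOpenAI(prompt) {
  // Assumes the injected base URL already contains the version prefix.
  const response = await fetch(`${OPENAI_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: openaiHeaders(OPENAI_API_KEY),
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return await response.json();
}
```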

When receiving a request from a client, the AI Gateway makes the call to the AI provider on your behalf. It then bills your Netlify account by converting the request's actual token usage into credits, drawn from your existing credit quota.

The AI Gateway does not store your prompts or model outputs. Learn more about Security and Privacy for AI features.

To opt out of the AI Gateway, see Using the AI Gateway below.

When you develop server-side code with any web framework supported by Netlify (e.g., Astro, TanStack Start, Next.js, Gatsby, Nuxt, etc.), your code is packaged in Netlify Functions and Edge Functions under the hood, as part of the build process.

Therefore, the above environment variables are available just as they are when explicitly using Netlify compute primitives, with no further configuration required.

For a quickstart, check out our Quickstart for AI Gateway.

The AI Gateway is available by default in all credit-based plans, unless:

  1. Netlify AI Features are turned off in your team settings.
  2. The environment variable AI_GATEWAY_INJECTION is set to false for a project or team.

Note that if you have set API keys via environment variables for any of the AI providers supported by the gateway (OpenAI, Anthropic, Google Gemini), these are not replaced - Netlify does not override usage of your own keys. You can set or remove your own keys at any point.

For full information on which environment variables are automatically set, and how to control this behavior, see the environment variables section below.

If you’re using any of the following libraries, no configuration is required:

  1. OpenAI TypeScript and JavaScript API Library
  2. Anthropic TypeScript API Library (for Claude models)
  3. Google Gen AI SDK for TypeScript and JavaScript (for Google Gemini models)
If you prefer to call a provider's REST API directly, the injected API key and base URL are easy to incorporate. For example, with the Anthropic Messages API:

```javascript
// Anthropic Claude API via the AI Gateway.
// Both variables are injected automatically by Netlify.
const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;
const ANTHROPIC_BASE_URL = process.env.ANTHROPIC_BASE_URL;

async function callAnthropic() {
  const response = await fetch(`${ANTHROPIC_BASE_URL}/v1/messages`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01'
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-5-20250929',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello!' }]
    })
  });
  return await response.json();
}
```

If you are using a client library that does not work out-of-the-box with the environment variables set for the AI Gateway, you need to manually pass the API key and base URL as arguments to the library.

This is similar to manually reading & passing variable values when using a provider’s REST API. See Using official REST APIs above for the relevant variable names.
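For instance, a small helper can read the injected values and hand them to any client constructor. This is a sketch: the option names `apiKey` and `baseURL` are illustrative and vary by library, and `SomeClient` is a hypothetical client class:

```javascript
// Map each supported provider to the environment variables Netlify injects.
const GATEWAY_VARS = {
  openai: ["OPENAI_API_KEY", "OPENAI_BASE_URL"],
  anthropic: ["ANTHROPIC_API_KEY", "ANTHROPIC_BASE_URL"],
  gemini: ["GEMINI_API_KEY", "GOOGLE_GEMINI_BASE_URL"],
};

// Return { apiKey, baseURL } for a provider, read from the environment.
function gatewayConfig(provider, env = process.env) {
  const vars = GATEWAY_VARS[provider];
  if (!vars) throw new Error(`Unknown provider: ${provider}`);
  const [keyVar, urlVar] = vars;
  return { apiKey: env[keyVar], baseURL: env[urlVar] };
}

// Usage with a hypothetical client:
// const client = new SomeClient(gatewayConfig("anthropic"));
```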

If you have already set an API key or base URL at the project or team level, Netlify will never override it.

When a Netlify Function or Edge Function is initialized, the following environment variables are set to the appropriate values for the AI Gateway:

  1. OPENAI_API_KEY and OPENAI_BASE_URL - unless either is already set by you at the project or team level.
  2. ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL - unless either is already set by you.
  3. GEMINI_API_KEY and GOOGLE_GEMINI_BASE_URL - unless either is already set by you, or if GOOGLE_API_KEY or GOOGLE_VERTEX_BASE_URL is set.

This check is done separately for each provider. For example, if you have set only OPENAI_API_KEY to your own API key, Netlify will not override it (and will not set OPENAI_BASE_URL), but the values for Anthropic and Google will still be set.
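The per-provider rules above can be sketched as a small predicate. This is not Netlify's actual implementation, just a model of the documented precedence, useful for reasoning about which variables will be injected:

```javascript
// Returns true if Netlify would inject gateway variables for this provider,
// given the environment variables already set at the project or team level.
function willInject(provider, env) {
  switch (provider) {
    case "openai":
      return !("OPENAI_API_KEY" in env) && !("OPENAI_BASE_URL" in env);
    case "anthropic":
      return !("ANTHROPIC_API_KEY" in env) && !("ANTHROPIC_BASE_URL" in env);
    case "gemini":
      // Gemini injection is also skipped if Google Vertex-style keys exist.
      return (
        !("GEMINI_API_KEY" in env) &&
        !("GOOGLE_GEMINI_BASE_URL" in env) &&
        !("GOOGLE_API_KEY" in env) &&
        !("GOOGLE_VERTEX_BASE_URL" in env)
      );
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```

For example, setting only your own OPENAI_API_KEY suppresses injection for OpenAI but leaves Anthropic and Gemini injection active.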

The AI_GATEWAY_API_KEY and AI_GATEWAY_BASE_URL environment variables are always injected into AI Gateway-supported runtimes. If you want to mix your own keys with Netlify's, or you want to be explicit about using AI Gateway credentials in your calls, use these variables - they never collide with other environment variable values.

To prevent any variables from being automatically set, you can:

  1. Disable AI Features at the Team Settings level (this requires Team Owner permissions),
  2. Or, create an environment variable named AI_GATEWAY_INJECTION and set its value to false. You can define this variable at either the project or the team level.
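For example, if you use the Netlify CLI, the variable can be set for the linked project from the command line (you can also set it in the Netlify UI):

```shell
# Disable automatic injection of AI Gateway variables for the linked project.
netlify env:set AI_GATEWAY_INJECTION false
```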

The AI Gateway supports the following AI providers and models.

| AI Provider | Model |
| --- | --- |
| OpenAI | gpt-5 |
| OpenAI | gpt-5-codex |
| OpenAI | gpt-5-mini |
| OpenAI | gpt-5-nano |
| OpenAI | gpt-4.1 |
| OpenAI | gpt-4.1-mini |
| OpenAI | gpt-4.1-nano |
| OpenAI | gpt-4o |
| OpenAI | gpt-4o-mini |
| OpenAI | o4-mini |
| OpenAI | o3 |
| OpenAI | o3-mini |
| OpenAI | codex-mini-latest |
| Anthropic | claude-opus-4-1-20250805 |
| Anthropic | claude-opus-4-20250514 |
| Anthropic | claude-sonnet-4-5-20250929 |
| Anthropic | claude-sonnet-4-20250514 |
| Anthropic | claude-3-7-sonnet-20250219 |
| Anthropic | claude-3-7-sonnet-latest |
| Anthropic | claude-3-5-haiku-20241022 |
| Anthropic | claude-3-5-haiku-latest |
| Anthropic | claude-3-haiku-20240307 |
| Google | gemini-2.5-pro |
| Google | gemini-flash-latest |
| Google | gemini-2.5-flash |
| Google | gemini-2.5-flash-preview-09-2025 |
| Google | gemini-flash-lite-latest |
| Google | gemini-2.5-flash-lite |
| Google | gemini-2.5-flash-lite-preview-09-2025 |
| Google | gemini-2.5-flash-image-preview |
| Google | gemini-2.0-flash |
| Google | gemini-2.0-flash-lite |

You can also programmatically access the up-to-date list in JSON format via a public API endpoint: https://api.netlify.com/api/v1/ai-gateway/providers.
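A minimal sketch of querying this endpoint from Node.js 18+ (which provides a global fetch); the response shape is not reproduced here:

```javascript
// Public endpoint listing the AI Gateway's supported providers and models.
const PROVIDERS_URL = "https://api.netlify.com/api/v1/ai-gateway/providers";

async function listProviders() {
  const res = await fetch(PROVIDERS_URL);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json(); // JSON document describing providers and models
}
```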

To understand pricing for AI Gateway, check out our Pricing for AI features docs.

The AI Gateway has two types of limits: Requests Per Minute (RPM) and Tokens Per Minute (TPM). Limits are per model and differ by your account plan.

The rate limit is scoped to your account: requests made and tokens used by all projects in your account count together toward your limits.

Enterprise customers have extended limits - contact your Account Manager to learn more.

Requests Per Minute (RPM) limits:

| AI Provider | Model | Free plan | Personal plan | Pro plan |
| --- | --- | --- | --- | --- |
| OpenAI | gpt-5 | 6 | 30 | 60 |
| OpenAI | gpt-5-codex | 6 | 30 | 60 |
| OpenAI | gpt-5-mini | 10 | 50 | 80 |
| OpenAI | gpt-5-nano | 50 | 100 | 150 |
| OpenAI | gpt-4.1 | 6 | 30 | 60 |
| OpenAI | gpt-4.1-mini | 10 | 50 | 80 |
| OpenAI | gpt-4.1-nano | 50 | 100 | 150 |
| OpenAI | gpt-4o | 6 | 30 | 60 |
| OpenAI | gpt-4o-mini | 50 | 100 | 150 |
| OpenAI | o4-mini | 6 | 30 | 60 |
| OpenAI | o3 | 3 | 6 | 20 |
| OpenAI | o3-mini | 6 | 30 | 60 |
| OpenAI | codex-mini-latest | 6 | 30 | 60 |
| Anthropic | claude-opus-4-1-20250805 | 3 | 6 | 20 |
| Anthropic | claude-opus-4-20250514 | 3 | 6 | 20 |
| Anthropic | claude-sonnet-4-5-20250929 | 6 | 30 | 60 |
| Anthropic | claude-sonnet-4-20250514 | 6 | 30 | 60 |
| Anthropic | claude-3-7-sonnet-20250219 | 6 | 30 | 60 |
| Anthropic | claude-3-7-sonnet-latest | 6 | 30 | 60 |
| Anthropic | claude-3-5-haiku-20241022 | 10 | 50 | 80 |
| Anthropic | claude-3-5-haiku-latest | 10 | 50 | 80 |
| Anthropic | claude-3-haiku-20240307 | 50 | 100 | 150 |
| Google | gemini-2.5-pro | 6 | 30 | 60 |
| Google | gemini-flash-latest | 10 | 50 | 80 |
| Google | gemini-2.5-flash | 10 | 50 | 80 |
| Google | gemini-2.5-flash-preview-09-2025 | 10 | 50 | 80 |
| Google | gemini-flash-lite-latest | 50 | 100 | 150 |
| Google | gemini-2.5-flash-lite | 50 | 100 | 150 |
| Google | gemini-2.5-flash-lite-preview-09-2025 | 50 | 100 | 150 |
| Google | gemini-2.0-flash | 50 | 100 | 150 |
| Google | gemini-2.0-flash-lite | 50 | 100 | 150 |
| Google | gemini-2.5-flash-image-preview | 3 | 6 | 20 |

For TPM, both input and output tokens are counted towards the limit.

However, cached input tokens are excluded for Anthropic models, and included for other providers.

Tokens Per Minute (TPM) limits:

| AI Provider | Model | Free plan | Personal plan | Pro plan |
| --- | --- | --- | --- | --- |
| OpenAI | gpt-5 | 18,000 | 90,000 | 180,000 |
| OpenAI | gpt-5-codex | 18,000 | 90,000 | 180,000 |
| OpenAI | gpt-5-mini | 60,000 | 300,000 | 480,000 |
| OpenAI | gpt-5-nano | 300,000 | 600,000 | 900,000 |
| OpenAI | gpt-4.1 | 18,000 | 90,000 | 180,000 |
| OpenAI | gpt-4.1-mini | 50,000 | 250,000 | 400,000 |
| OpenAI | gpt-4.1-nano | 250,000 | 500,000 | 750,000 |
| OpenAI | gpt-4o | 18,000 | 90,000 | 180,000 |
| OpenAI | gpt-4o-mini | 250,000 | 500,000 | 750,000 |
| OpenAI | o4-mini | 30,000 | 150,000 | 300,000 |
| OpenAI | o3 | 90,000 | 180,000 | 600,000 |
| OpenAI | o3-mini | 30,000 | 150,000 | 300,000 |
| OpenAI | codex-mini-latest | 30,000 | 150,000 | 300,000 |
| Anthropic | claude-opus-4-1-20250805 | 1,800 | 3,600 | 12,000 |
| Anthropic | claude-opus-4-20250514 | 1,800 | 3,600 | 12,000 |
| Anthropic | claude-sonnet-4-5-20250929 | 18,000 | 90,000 | 180,000 |
| Anthropic | claude-sonnet-4-20250514 | 18,000 | 90,000 | 180,000 |
| Anthropic | claude-3-7-sonnet-20250219 | 18,000 | 90,000 | 180,000 |
| Anthropic | claude-3-7-sonnet-latest | 18,000 | 90,000 | 180,000 |
| Anthropic | claude-3-5-haiku-20241022 | 1,200 | 6,000 | 9,600 |
| Anthropic | claude-3-5-haiku-latest | 1,200 | 6,000 | 9,600 |
| Anthropic | claude-3-haiku-20240307 | 6,000 | 12,000 | 18,000 |
| Google | gemini-2.5-pro | 24,000 | 120,000 | 240,000 |
| Google | gemini-flash-latest | 8,000 | 40,000 | 64,000 |
| Google | gemini-2.5-flash | 8,000 | 40,000 | 64,000 |
| Google | gemini-2.5-flash-preview-09-2025 | 8,000 | 40,000 | 64,000 |
| Google | gemini-flash-lite-latest | 50,000 | 100,000 | 150,000 |
| Google | gemini-2.5-flash-lite | 50,000 | 100,000 | 150,000 |
| Google | gemini-2.5-flash-lite-preview-09-2025 | 50,000 | 100,000 | 150,000 |
| Google | gemini-2.0-flash | 50,000 | 100,000 | 150,000 |
| Google | gemini-2.0-flash-lite | 50,000 | 100,000 | 150,000 |
| Google | gemini-2.5-flash-image-preview | 3,000 | 6,000 | 20,000 |
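Because limits are scoped per model and per account, concurrent functions in the same account can hit them together. Below is a minimal sketch of retrying a rate-limited request with exponential backoff; the helper names are illustrative, not part of any Netlify API:

```javascript
// Compute an exponential backoff delay in milliseconds for a given attempt.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry a fetch-style call when the provider responds 429 (rate limited).
async function fetchWithRetry(url, options, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  throw new Error(`Still rate limited after ${maxAttempts} attempts`);
}
```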

The AI Gateway has the following limitations at this time:

  1. Built-in tool usage (a.k.a. server tools - tools that the provider manages and runs on their servers, such as web search) is not currently supported.
    • Note: custom tools (a.k.a. client tools), which you run in your own code, are supported.
  2. The context window (input prompt) is limited to 200k tokens.
  3. Prompt caching:
    • Anthropic Claude: only the default 5-minute ephemeral cache duration is supported.
    • OpenAI: the AI Gateway sets a per-account prompt_cache_key.
    • Google Gemini: explicit context caching is not supported.
  4. The AI Gateway does not pass through any request headers (and thus you cannot enable proprietary experimental features via headers).
  5. Batch inference is not supported.
  6. Priority processing (an OpenAI feature) is not supported.