
Request chain

Each request served by Netlify goes through a series of steps and transformations until a response is delivered back to the client.

That sequence of steps varies with the type of request, the site configuration, and any web frameworks being used, along with their own configuration.

The sections below describe what those steps are, the order they run in, and the conditions under which each one applies.

Each step links to the relevant documentation. The steps are divided into four categories:

Security

Compute

Caching

Static routing

Firewall Traffic Rules

When a request hits Netlify, it first goes through any Firewall Traffic Rules. These rules let you control who can access a site based on their IP address or geographic location, including granular targeting by country or region.

You can define rules for a specific site or for all sites on your team. Site-level rules take precedence.

If the request matches a block rule, a Netlify-branded 404 page is served and the request chain is stopped.

Otherwise, it continues down the request chain.

Web Application Firewall

The request then goes through the Web Application Firewall (WAF), another enterprise-grade security feature that protects your site from common attacks.

Netlify offers a managed ruleset so you can benefit from best-in-class protection with minimal setup. You can configure the behavior for each rule based on the specific traffic patterns of your site, to ensure no legitimate requests are unintentionally blocked.

If the request is not blocked, it continues on.

Rate Limiting

Whereas the last two stages let you specify requests to always block, the rate limiting stage lets you set a limit on certain requests that you generally allow.

You can specify highly customizable rate limiting rules to mitigate DDoS attacks, protect your backend, reduce bot traffic, prevent web scraping, optimize bandwidth usage, and more. These rules can be defined in the UI, in the netlify.toml file, or directly in the function or edge function files they protect, as in the sketch below.
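
As an illustration, a rule can live alongside the function it protects. The sketch below assumes Netlify's in-code rateLimit configuration block; the exact field names, values, and file path are illustrative, so treat the rate limiting documentation as authoritative.

```ts
// netlify/functions/api.mts — a minimal sketch of an inline rate limiting rule.
// The `rateLimit` fields below are assumptions about the in-code configuration;
// check the rate limiting docs for the current schema.
export default async () =>
  new Response(JSON.stringify({ ok: true }), {
    headers: { "content-type": "application/json" },
  });

export const config = {
  path: "/api/*",
  rateLimit: {
    action: "rate_limit",          // respond with 429 once the limit is hit
    aggregateBy: ["ip", "domain"], // count requests per client IP and per domain
    windowSize: 60,                // rolling window length, in seconds (assumed field)
    windowLimit: 100,              // requests allowed per window (assumed field)
  },
};
```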

If the request doesn't exceed the threshold of any rate limiting rule, it continues down the request chain.

Site Protection

Site Protection offers a turnkey solution for gating access to your site, so that only authorized users can visit it. You can set up basic password protection or more advanced team login protection, which includes support for single sign-on (SSO).

When enabled, access to the site will be validated at this point. Unauthorized requests will be blocked and the request chain will stop.

Otherwise, the request continues on.

Edge Functions

By default, Edge Functions are evaluated immediately after a request has passed the security checks and are matched against the request based on their path and method configuration. By acting as the entry point to a Netlify site, they allow developers to implement bespoke routing logic and advanced edge caching patterns.

If multiple edge functions match the incoming request, they will be invoked one after the other, according to the declaration order rules. If an edge function returns a Response object, that will be served to the client and the request chain is stopped. If an edge function returns undefined, the execution moves on to the next matching edge function (if applicable) or to the next element on the request chain.

At any point during its execution, an edge function can retrieve the contents of the next element on the request chain by calling context.next().
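
A minimal middleware-style sketch, assuming the standard edge function signature; the path and header name are illustrative:

```ts
// netlify/edge-functions/add-header.ts — a middleware-style edge function sketch.
import type { Config, Context } from "@netlify/edge-functions";

export default async (request: Request, context: Context) => {
  const accept = request.headers.get("accept") ?? "";

  if (!accept.includes("text/html")) {
    // Returning undefined hands off to the next matching edge function,
    // or to the next element on the request chain.
    return;
  }

  // Retrieve the response produced by the rest of the request chain…
  const response = await context.next();

  // …and decorate it before it is served to the client.
  response.headers.set("x-processed-by", "edge-function");
  return response;
};

export const config: Config = {
  path: "/blog/*",
};
```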

If no edge functions match the request, or if no edge functions have returned a response, the request continues on.

Edge Cache

The Netlify CDN is composed of many servers distributed across different regions around the globe. This infrastructure ensures that clients always connect to a server that is close to them, reducing network latency.

Each one of those servers has an HTTP cache, which is the next element on the request chain. If the request matches any response that has been previously stored in the cache, we serve it immediately and the request chain is stopped.

Leveraging our Edge Cache is a great way to achieve the best possible performance and reduce costs. Our caching infrastructure is framework-agnostic and works with standard cache control headers, so anyone can take full advantage of it regardless of their tech stack.
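
For example, a function-generated response can opt into the edge cache with the Netlify-CDN-Cache-Control header, which targets Netlify's edge cache specifically, while Cache-Control governs browsers and other downstream caches. The TTLs below are illustrative:

```ts
// netlify/functions/cached-page.mts — opting a response into the Edge Cache.
export default async () =>
  new Response("<h1>Hello</h1>", {
    headers: {
      "content-type": "text/html",
      // Browsers and other shared caches: always revalidate.
      "cache-control": "public, max-age=0, must-revalidate",
      // Netlify's edge cache: keep for an hour, refresh in the background.
      "netlify-cdn-cache-control": "public, s-maxage=3600, stale-while-revalidate=60",
    },
  });
```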

If the request doesn't match any response in the cache, it continues on.

Edge Functions

We have established how edge functions, by default, run immediately after a request has passed security checks and before any caching layer. This is ideal in many scenarios, but other use cases benefit from leveraging the Edge Cache and skipping the invocation of the edge function entirely when the request matches something that has been generated before.

You can change this behavior by configuring the edge function for caching, which under the hood changes the place in our request chain where the invocation takes place.
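
A minimal sketch of that configuration, assuming an edge function that generates a response and opts into caching through its config export; the path and TTL are illustrative:

```ts
// netlify/edge-functions/generated.ts — an edge function configured for caching,
// so it runs after the Edge Cache instead of before it.
import type { Config } from "@netlify/edge-functions";

export default async () =>
  new Response("Generated at the edge", {
    headers: {
      // Cache the generated response at the edge for five minutes.
      "cache-control": "public, s-maxage=300",
    },
  });

export const config: Config = {
  path: "/generated",
  cache: "manual",
};
```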

Typically, you will benefit from running an edge function before the cache if you want to run any middleware-type workload, like modifying a request on the way in or transforming a response on its way out. You might want to run an edge function after the cache if the edge function is responsible for generating the final response.

You can have both types of edge functions running for the same request.

If no edge functions are matched, the request continues on.

Durable Cache

Netlify Serverless Functions are the next primitive we try to match but, before actually invoking anything, there is one additional caching layer to check. As a rule of thumb, serving something from a cache is always faster than computing a fresh response, no matter how streamlined and optimized your code is.

If we're this far down on the request chain, it means the request didn't match anything in the Edge Cache. But we saw how that cache is local to each of the CDN servers, so the fact that we didn't match doesn't necessarily mean that we never cached a response from the function we're about to invoke — it could just mean that it was cached on a different server.

This is where the Durable Cache comes in. When you return a response with the durable directive in the cache control headers, we'll propagate that response across our CDN to ensure that we can serve it regardless of which server picks up the request.
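
A sketch of a response that opts into the Durable Cache, assuming a serverless function as the origin; the TTL is illustrative:

```ts
// netlify/functions/report.mts — opting a response into the Durable Cache.
export default async () =>
  new Response(JSON.stringify({ generatedAt: new Date().toISOString() }), {
    headers: {
      "content-type": "application/json",
      // `durable` persists the response across the CDN; `s-maxage` keeps it for a day.
      "netlify-cdn-cache-control": "public, durable, s-maxage=86400",
    },
  });
```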

If there is no entry in the durable cache that matches the request, we continue on.

Serverless Functions

At this point, we'll check whether the request matches a serverless function. Just like edge functions, you can configure serverless functions to match requests on a specific path or a range of paths.

If the request matches, there's one last check we'll do before invoking the function. If you want your function to act as a fallback or a "catch-all" handler, you can configure it so that any static file that also matches the request will be served, running the function only if no matching static file exists.

To opt in to this behavior, you must set preferStatic: true in your function configuration, as in the sketch below. Unless you enable this configuration, any matching function is always invoked.
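
A sketch of such a catch-all function; the response body is illustrative:

```ts
// netlify/functions/fallback.mts — a catch-all handler that yields to static files.
import type { Config } from "@netlify/functions";

export default async (req: Request) =>
  new Response(`No static file matched ${new URL(req.url).pathname}`, {
    status: 404,
  });

export const config: Config = {
  path: "/*",
  // Serve a matching static file instead of invoking this function, if one exists.
  preferStatic: true,
};
```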

If we didn't match any functions, the request continues on.

Redirects

Next up, we evaluate any redirects and rewrites. These are a simple yet powerful way of creating custom routing rules, based on different types of conditions, that point to any type of destination (the same site, a different Netlify site, or any site on the web).

When you target the same site, we employ some special rules that are important to know. By default, redirects behave as fallbacks, which means that a redirect rule from /foo to /bar actually means serve /bar if /foo doesn't match anything. This is typically known as shadowing and it's equivalent to the preferStatic configuration we saw earlier.

If you'd like to opt out of this behavior and always serve the redirect destination regardless of whether the source matches something else, you can set the force property to true for redirects declared in netlify.toml, or use an exclamation mark in the _redirects file.
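
For example, a forced rule in netlify.toml might look like this; the paths are illustrative:

```toml
# netlify.toml
[[redirects]]
  from = "/old-path"
  to = "/new-path"
  status = 301
  force = true # always apply, even if /old-path matches a static file or function
```

The equivalent line in a _redirects file would be /old-path /new-path 301!, where the trailing exclamation mark forces the rule.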

If the redirect matches a function, we'll follow the same evaluation logic covered in the previous step, including the check for preferStatic, so you can think of it as going back one step in the request chain. It's important to note that this is not a recursive operation, as we won't evaluate redirects again.

When no redirect rules match, the request continues on.

Static files

Now we try to match any static files that you or your framework have placed in the publish directory during the build process.

We match the request's URL path against filenames, so <your-site>.netlify.app/about matches files like about.html or about/index.html.

This is typically how static assets like HTML pages, images or fonts are served.

404 handler

If the request didn't match anything up until this point, it means we don't have anything to serve. This can happen if the client requests a path that doesn't exist.

You can provide your own 404 page. If you don't, we'll serve a generic one.
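
A 404.html file at the root of your publish directory is picked up automatically. You can also route unmatched requests to a page of your choosing with a redirect rule; a minimal netlify.toml sketch, with an illustrative destination path:

```toml
# netlify.toml — serve a custom page, with a 404 status, for unmatched requests
[[redirects]]
  from = "/*"
  to = "/custom-404.html"
  status = 404
```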