
Netlify Blobs

With Netlify Blobs, you can store and retrieve blobs and unstructured data. You can also use this feature as a simple key/value store or basic database.

Netlify Blobs is a highly-available data store optimized for frequent reads and infrequent writes.

For maximum flexibility, it offers a configurable consistency model. If multiple write calls to the same key are issued, the last write wins.

We automatically handle provisioning, configuration, and access control for you. This integrated zero-configuration solution helps you focus on building business value in your project rather than toiling over setting up and scaling a separate blob storage solution.

Each blob belongs to a single site. A site can have multiple namespaces for blobs. We call these stores. This allows you to, for example, have the key my-key exist as an object in a store for file-uploads and separately as an object in a store for json-uploads with different data. Every blob must be associated with a store, even if a site is not using multiple namespaces.
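
For illustration, here is a minimal sketch of that separation inside a Function, where store credentials are configured automatically. The values written are placeholders.

import { getStore } from "@netlify/blobs";

export default async (req: Request) => {
  // Two separate namespaces: the same key can hold different data in each store.
  const fileUploads = getStore("file-uploads");
  const jsonUploads = getStore("json-uploads");

  await fileUploads.set("my-key", "raw file contents");
  await jsonUploads.setJSON("my-key", { kind: "json upload" });

  return new Response("Stored my-key in both stores");
};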

You can perform CRUD operations for Netlify Blobs from Functions, Edge Functions, and Build Plugins. You can also manage blobs with the Netlify CLI and browse and download blobs in the Netlify UI.

Netlify Blobs is a platform primitive that developers and frameworks can use as a building block for many different purposes. Here are a few examples of powerful patterns that you can use:

  • Data store for functions. With Background Functions, you can trigger asynchronous serverless workflows for long-running operations like generating a site map, processing media assets, or sending emails in bulk. You can then use Netlify Blobs to persist the output of those computations.
  • Processing user uploads. If your application takes user submissions, like reviews on a product page or image files for a gallery, Netlify Blobs can store that data. When paired with Functions or Edge Functions, you can create an endpoint to receive an upload, validate the contents, and persist the validated data.

For more advanced use cases — such as those that require complex queries, concurrency control, or a relational data model — explore our integrations with the best-in-class database vendors.

To use the Netlify Blobs API, first install the @netlify/blobs module using the package manager of your choice:

npm install @netlify/blobs

Then use the methods below in your functions, edge functions, or build plugins.

Opens a site-wide store for reading and writing blobs. Data added to that store is persisted across new deploys, available in all deploy contexts, and accessible from Functions, Edge Functions, and Build Plugins.

const store = getStore(name, { siteID, token })
  • name: the name of the store; this can be any string that adheres to the store naming requirements
  • siteID (optional): the ID of the Netlify site associated with the store; this is set automatically when you use Blobs from Functions, Edge Functions or Build Plugins. You can also set the siteID to the ID of another site you own to access its blobs via the getStore method.
  • token (optional): a Netlify Personal Access Token that grants access to Blobs on the given site; this is set automatically when you use Blobs from Functions, Edge Functions or Build Plugins

An instance of a store on which you can get, set or delete blobs.
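
As a sketch, calling getStore outside of the Netlify runtime might look like the following. The site ID and token values are placeholders; use your own site's ID and a Personal Access Token you have generated.

import { getStore } from "@netlify/blobs";

// Placeholder credentials; in Functions, Edge Functions, and Build Plugins
// these are injected automatically and can be omitted.
const store = getStore("my-store", {
  siteID: "YOUR_SITE_ID",
  token: "YOUR_PERSONAL_ACCESS_TOKEN",
});

const value = await store.get("my-key");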

Opens a deploy-specific store for reading and writing blobs. Data added to that store is scoped to a specific deploy, available in all deploy contexts, and accessible from Functions, Edge Functions, and Build Plugins.

const store = getDeployStore(name, { deployID, region, siteID, token })
  • name: the name of the store; this can be any string that adheres to the store naming requirements
  • deployID (optional): the ID of the Netlify deploy associated with the store; this is set automatically when you use Blobs from Functions, Edge Functions or Build Plugins
  • region (optional): the region associated with the store; this is set automatically when you use Blobs from Functions, Edge Functions or Build Plugins
  • siteID (optional): the ID of the Netlify site associated with the store; this is set automatically when you use Blobs from Functions, Edge Functions or Build Plugins.
  • token (optional): a Netlify Personal Access Token that grants access to Blobs on the given site; this is set automatically when you use Blobs from Functions, Edge Functions or Build Plugins

An instance of a store on which you can get, set or delete blobs.
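
As a sketch, opening a deploy-specific store with explicit options might look like this. The store name, IDs, and token are placeholders; inside Functions, Edge Functions, and Build Plugins you can omit the options.

import { getDeployStore } from "@netlify/blobs";

// Placeholder values; these are injected automatically inside the Netlify runtime.
const store = getDeployStore("frameworks-cache", {
  deployID: "YOUR_DEPLOY_ID",
  siteID: "YOUR_SITE_ID",
  token: "YOUR_PERSONAL_ACCESS_TOKEN",
});

await store.set("my-key", "deploy-scoped value");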

Creates an object with the given key and value. If an entry with the given key already exists, its value is overwritten.

await store.set(key, value, { metadata, onlyIfMatch, onlyIfNew })
  • key: a string representing the object key
  • value: the value as an ArrayBuffer, Blob, or string
  • metadata (optional): a JSON object with arbitrary metadata to attach to the object
  • onlyIfMatch (optional): when set, the write will only succeed if the entry exists and has an ETag matching this value
  • onlyIfNew (optional): when set, the write will only succeed if the entry does not exist

A Promise that resolves with an object containing the following properties:

  • modified (boolean): Whether the operation has actually generated a new entry
  • etag (string): The ETag of the entry, if the operation has generated a new entry; if not, this property will be omitted

This example shows how you might use Netlify Blobs to persist user-generated uploads. For a more in-depth explanation, refer to this guide.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";
import { v4 as uuid } from "uuid";

export default async (req: Request, context: Context) => {
  // Accessing the request as `multipart/form-data`.
  const form = await req.formData();
  const file = form.get("file") as File;

  // Generating a unique key for the entry.
  const key = uuid();
  const uploads = getStore("file-uploads");

  await uploads.set(key, file, {
    metadata: { country: context.geo.country.name }
  });

  return new Response("Submission saved");
};

The example below shows how you can use the onlyIfMatch and onlyIfNew properties to do atomic, conditional writes.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  const emails = getStore("emails");

  const { modified } = await emails.set(
    "jane@netlify.com",
    "Jane Doe",
    { onlyIfNew: true }
  );

  if (modified) {
    return new Response("Submission saved");
  }

  return new Response("Email already exists", { status: 400 });
};

Convenience method for creating a JSON-serialized object. If an entry with the given key already exists, its value is overwritten.

await store.setJSON(key, value, { metadata, onlyIfMatch, onlyIfNew })
  • key: a string representing the object key
  • value: any value that is serializable to JSON
  • metadata (optional): a JSON object with arbitrary metadata to attach to the object
  • onlyIfMatch (optional): when set, the write will only succeed if the entry exists and has an ETag matching this value
  • onlyIfNew (optional): when set, the write will only succeed if the entry does not exist

A Promise that resolves with an object containing the following properties:

  • modified (boolean): Whether the operation has actually generated a new entry
  • etag (string): The ETag of the entry, if the operation has generated a new entry; if not, this property will be omitted

This example shows how you might use Netlify Blobs to persist user-generated data.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";
import { v4 as uuid } from "uuid";

export default async (req: Request, context: Context) => {
  // Expecting the request body to contain JSON.
  const data = await req.json();

  // Generating a unique key for the entry.
  const key = uuid();
  const uploads = getStore("json-uploads");

  await uploads.setJSON(key, data, {
    metadata: { country: context.geo.country.name }
  });

  return new Response("Submission saved");
};

Retrieves an object with the given key.

await store.get(key, { consistency, type })
  • key: a string representing the object key
  • consistency (optional): a string representing the consistency model for the operation
  • type (optional): the format in which the object should be returned — the default format is a string but you can specify one of the following values instead:
    • arrayBuffer: returns the entry as an ArrayBuffer
    • blob: returns the entry as a Blob
    • json: parses the entry as JSON and returns the resulting object
    • stream: returns the entry as a ReadableStream
    • text: default, returns the entry as a string of plain text

A Promise that resolves with the blob in the format specified by type: ArrayBuffer, Blob, Object, ReadableStream or a string.

If an object with the given key is not found, the Promise resolves with null.

This example shows how you might read user-generated data that has been previously uploaded to Netlify Blobs.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  // Extract key from URL.
  const { key } = context.params;
  const uploads = getStore("file-uploads");

  const entry = await uploads.get(key);
  if (entry === null) {
    return new Response(`Could not find entry with key ${key}`, {
      status: 404
    });
  }

  return new Response(entry);
};

Retrieves an object along with its metadata.

This method is useful for checking whether a blob exists without having to download a potentially large blob over the network.

await store.getWithMetadata(key, { consistency, etag, type })
  • key: a string representing the object key
  • consistency (optional): a string representing the consistency model for the operation
  • etag (optional): an opaque quoted string, possibly prefixed by a weakness indicator, representing the ETag value of any version of this blob you may have cached — this allows you to do conditional requests
  • type (optional): the format in which the object should be returned — the default format is a string but you can specify one of the following values instead:
    • arrayBuffer: returns the entry as an ArrayBuffer
    • blob: returns the entry as a Blob
    • json: parses the entry as JSON and returns the resulting object
    • stream: returns the entry as a ReadableStream
    • text: default, returns the entry as a string of plain text

A Promise that resolves with an object containing the following properties:

  • data: the blob contents in the format specified by the type parameter, or null if the etag property is the same as the etag parameter (meaning the cached object is still fresh)
  • etag: an opaque quoted string, possibly prefixed by a weakness indicator, representing the ETag value of the object
  • metadata: object with arbitrary metadata

If an object with the given key is not found, the Promise resolves with null.

This example shows how you might read metadata from user-generated submissions that have been previously uploaded to Netlify Blobs.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  // Extract key from URL.
  const { key } = context.params;
  const uploads = getStore("file-uploads");

  const entry = await uploads.getWithMetadata(key);
  if (entry === null) {
    return new Response(`Could not find entry with key ${key}`, {
      status: 404
    });
  }

  return new Response(entry.data, {
    headers: { "X-Country": String(entry.metadata.country) }
  });
};

You can use object metadata to create client-side expiration logic. To delete blobs you consider expired, do the following (a sketch follows this list):

  1. Set your objects with metadata that you can base the expiration logic on, such as a timestamp.
  2. Use getWithMetadata to check whether an object is expired.
  3. Delete the expired object.
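
Here is a minimal sketch of that pattern. The expiresAt metadata field and the surrounding route are illustrative choices, not part of the Netlify Blobs API.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  // Extract key from URL.
  const { key } = context.params;
  const uploads = getStore("file-uploads");

  const entry = await uploads.getWithMetadata(key);
  if (entry === null) {
    return new Response("Not found", { status: 404 });
  }

  // `expiresAt` is a timestamp we stored ourselves via the `metadata` option of `set`.
  const expiresAt = Number(entry.metadata.expiresAt);
  if (Number.isFinite(expiresAt) && expiresAt < Date.now()) {
    await uploads.delete(key);
    return new Response("Entry has expired", { status: 404 });
  }

  return new Response(entry.data);
};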

Retrieves the metadata for an object, if the object exists.

This method is useful for checking whether a blob exists without having to download a potentially large blob over the network.

await store.getMetadata(key, { consistency, etag })
  • key: a string representing the object key
  • consistency (optional): a string representing the consistency model for the operation
  • etag (optional): an opaque quoted string, possibly prefixed by a weakness indicator, representing the ETag value of any version of this blob you may have cached — this allows you to do conditional requests

A Promise that resolves with an object containing the following properties:

  • metadata: object with arbitrary metadata
  • etag: an opaque quoted string, possibly prefixed by a weakness indicator, representing the ETag value of the object

If an object with the given key is not found, the Promise resolves with null.

This example shows how you might read metadata from user-generated submissions that have been previously uploaded to Netlify Blobs.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  // Extracting key from URL.
  const { key } = context.params;
  const uploads = getStore("file-uploads");

  const entry = await uploads.getMetadata(key);
  if (entry === null) {
    return new Response("Blob does not exist");
  }

  return Response.json({
    etag: entry.etag,
    metadata: entry.metadata
  });
};

Returns a list of blobs in a given store.

await store.list({ directories, paginate, prefix })
  • directories (optional): a boolean that indicates whether keys with the / character should be treated as directories, returning a list of sub-directories at a given level rather than all the keys inside them
  • paginate (optional): a boolean that specifies whether you want to handle pagination manually — by default, it is handled automatically
  • prefix (optional): a string for filtering down the entries; when specified, only the entries whose key starts with that prefix are returned

A Promise that resolves with an object containing the following properties:

  • blobs: an array of blobs that match the query parameters, shown as objects with etag and key properties, which represent an object’s ETag value and key, respectively
  • directories: an array of strings representing any directories matching the query parameters

This example shows how you might list all blobs in a given store, logging the key and etag of each entry.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  const uploads = getStore("file-uploads");

  const { blobs } = await uploads.list();

  // [ { etag: "\"etag1\"", key: "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" }, { etag: "\"etag2\"", key: "1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed" } ]
  console.log(blobs);

  return new Response(`Found ${blobs.length} blobs`);
};

Optionally, you can group blobs together under a common prefix and then browse them hierarchically when listing a store. This is similar to grouping files in a directory. To browse hierarchically, do the following (a sketch follows this list):

  1. Group keys hierarchically with the / character in your key names.
  2. List entries hierarchically with the directories parameter.
  3. Drill down into a specific directory with the prefix parameter.
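
Here is a minimal sketch of those steps, assuming keys such as dogs/good-boy.jpg in a file-uploads store.

import { getStore } from "@netlify/blobs";

const uploads = getStore("file-uploads");

// Step 2: list entries hierarchically. Keys containing "/" are grouped
// into the `directories` array instead of being returned individually.
const topLevel = await uploads.list({ directories: true });
console.log(topLevel.directories);

// Step 3: drill down into one directory by combining `directories` and `prefix`.
const dogs = await uploads.list({ directories: true, prefix: "dogs/" });
console.log(dogs.blobs);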

For performance reasons, the server groups results into pages of up to 1,000 entries. By default, the list method automatically retrieves all pages, meaning you’ll always get the full list of results.

To handle pagination manually, set the paginate parameter to true. This makes list return an AsyncIterator, which lets you take full control over the pagination process. This means you can fetch only the data you need when you need it.
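
As a sketch, iterating over the pages manually might look like this. The store name and early-exit threshold are illustrative.

import { getStore } from "@netlify/blobs";

const uploads = getStore("file-uploads");

// With `paginate: true`, `list` returns an async iterator of pages,
// each containing up to 1,000 entries in its `blobs` array.
let count = 0;
for await (const page of uploads.list({ paginate: true })) {
  count += page.blobs.length;

  // Stop early once we have seen enough entries.
  if (count >= 2000) break;
}

console.log(`Scanned ${count} blobs`);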

Returns a list of stores for a site. Does not include deploy-specific stores.

await listStores({ paginate })
  • paginate (optional): a boolean that specifies whether you want to handle pagination manually — by default, it is handled automatically

A Promise that resolves with an object containing the following properties:

  • stores: an array of strings representing any stores matching the query parameters

This example shows how you might list all stores for a given site.

import { listStores } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  const { stores } = await listStores();

  // [ "file-uploads", "json-uploads" ]
  console.log(stores);

  return new Response(`Found ${stores.length} stores`);
};

For performance reasons, the server groups results into pages of up to 1,000 stores. By default, the listStores method automatically retrieves all pages, meaning you’ll always get the full list of results.

To handle pagination manually, set the paginate parameter to true. This makes listStores return an AsyncIterator, which lets you take full control over the pagination process. This means you can fetch only the data you need when you need it.

Deletes an object with the given key, if one exists.

await store.delete(key)
  • key: a string representing the object key

A Promise that resolves with undefined.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  // Extract key from URL.
  const { key } = context.params;
  const uploads = getStore("file-uploads");

  await uploads.delete(key);

  return new Response("Blob has been deleted");
};

With file-based uploads, you can write blobs to deploy-specific stores after the build completes and before the deploy starts. This can be useful for authors of frameworks and other tools integrating with Netlify as it does not require a build plugin.

To make file-based uploads, place blob files in .netlify/blobs/deploy in your site’s base directory. Netlify uploads these files to blob storage, maintaining their directory structure. Here is an example file tree:

.netlify/
├─ blobs/
│  ├─ deploy/
│  │  ├─ dogs/
│  │  │  └─ good-boy.jpg
│  │  ├─ cat.jpg
│  │  └─ mouse.jpg

This uploads the following blobs:

  • dogs/good-boy.jpg
  • cat.jpg
  • mouse.jpg

To attach metadata to a blob, include a JSON file named after the corresponding blob, with a $ prefix and a .json extension. For example:

.netlify/
├─ blobs/
│  ├─ deploy/
│  │  ├─ dogs/
│  │  │  ├─ good-boy.jpg
│  │  │  └─ $good-boy.jpg.json
│  │  ├─ cat.jpg
│  │  ├─ mouse.jpg
│  │  └─ $mouse.jpg.json

This uploads the following blobs:

  • dogs/good-boy.jpg with the metadata from dogs/$good-boy.jpg.json
  • cat.jpg without metadata
  • mouse.jpg with the metadata from $mouse.jpg.json

Metadata files must contain valid JSON or the deploy will fail. Here’s an example of valid JSON metadata:

{
  "name": "Jerry"
}
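
If you generate blob files programmatically, a minimal sketch of a build step that produces this layout might look like the following. The source image path is a placeholder.

import { mkdir, readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";

// Write a blob and its metadata file into the file-based upload directory.
const deployDir = join(".netlify", "blobs", "deploy", "dogs");
await mkdir(deployDir, { recursive: true });

// The blob itself...
const image = await readFile("assets/good-boy.jpg");
await writeFile(join(deployDir, "good-boy.jpg"), image);

// ...and its metadata, prefixed with "$" and suffixed with ".json".
await writeFile(join(deployDir, "$good-boy.jpg.json"), JSON.stringify({ name: "Jerry" }));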

By default, the Netlify Blobs API uses an eventual consistency model, where data is stored in a single region and cached at the edge for fast access across the globe. When a blob is added, it becomes globally available immediately. Updates and deletions are guaranteed to be propagated to all edge locations within 60 seconds.

You can configure this behavior and opt in to strong consistency with the Netlify Blobs API, either for an entire store or for individual read operations. Netlify CLI always uses strong consistency.

Choosing the right consistency model depends on your use case and each option comes with tradeoffs:

  • if it’s important for your application that updates and deletions become immediately available to all readers, you should consider using strong consistency, which comes with the cost of slower reads
  • if that is not a hard requirement and you’re optimizing for fast reads, you should consider using eventual consistency
The example below shows how you might opt in to strong consistency for an entire store.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  const store = getStore({ name: "animals", consistency: "strong" });
  await store.set("dog", "🐶");

  // This is a strongly-consistent read.
  const dog = await store.get("dog");

  return new Response(dog);
};
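
You can also request strong consistency for a single read by passing the consistency option to get, leaving the rest of the store on the default model. A minimal sketch:

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  // The store uses the default (eventual) consistency model...
  const store = getStore("animals");

  // ...but this individual read opts in to strong consistency.
  const dog = await store.get("dog", { consistency: "strong" });

  return new Response(dog);
};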

In addition to using the Netlify Blobs API to list and get blobs, you can use the Netlify UI to browse and download blobs.

To explore and retrieve your site’s blobs:

  1. In the Netlify UI, go to the Blobs page for your project.

  2. If your site has more than one store, select the store of interest.

  3. Then, drill into directories to explore the blobs in the store, or select Download on an individual blob to examine it.

You can store sensitive data with Netlify Blobs. To keep your data secure, we encrypt your blobs at rest and in transit.

Your blobs can only be accessed through your own site. You are responsible for making sure the code you use to access your blobs doesn’t allow data to leak. We recommend that you consider the following best practices:

  • Do not allow incoming requests for arbitrary keys if you have sensitive data. Treat user input as unsafe and scope your keys with something that callers cannot tamper with, as shown in the sketch after this list.
  • Review the code of any build plugin you install from the npm public registry to make sure it doesn’t have malicious blob interactions.
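
Here is a minimal sketch of key scoping. getAuthenticatedUserId is a hypothetical helper standing in for however your application verifies the caller’s identity; the important part is that the key prefix comes from trusted, server-side data rather than from the request.

import { getStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";

// Hypothetical helper; replace with your own authentication logic.
declare function getAuthenticatedUserId(req: Request): Promise<string | null>;

export default async (req: Request, context: Context) => {
  const userId = await getAuthenticatedUserId(req);
  if (!userId) {
    return new Response("Unauthorized", { status: 401 });
  }

  const uploads = getStore("file-uploads");

  // The caller may pick a file name, but never the user prefix.
  const { file } = context.params;
  const entry = await uploads.get(`${userId}/${file}`);

  if (entry === null) {
    return new Response("Not found", { status: 404 });
  }
  return new Response(entry);
};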

Visit our security checklist for general security measures we recommend you consider for your site.

The namespaces you make with getStore are shared across all deploys of your site. This is required when using Netlify CLI and desirable for most use cases with functions and edge functions because it means that a new production deploy can read previously written data without you having to replicate blobs for each new production deploy. This also means you can test your Deploy Previews with production data. This does, however, mean that you should be careful to avoid scenarios such as a branch deploy deleting blobs that your published deploy depends on.

As mentioned above, build plugins and file-based uploads must write to deploy-specific stores. This ensures that a failed deploy cannot overwrite production data.

To make a deploy-specific namespace with the Netlify Blobs API, use the getDeployStore method.

import { getDeployStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";
import { v4 as uuid } from "uuid";

export default async (req: Request, context: Context) => {
  // Generating a unique key for the entry.
  const key = uuid();
  const uploads = getDeployStore("file-uploads");

  await uploads.set(key, await req.text());

  return new Response(`Entry added with key ${key}`);
};

In general, blobs in deploy-specific stores are managed by Netlify like other atomic deploy assets. This means they’re kept in sync with their relative deploys if you do a rollback and that they’re cleaned up with automatic deploy deletion.

However, downloading a deploy does not download deploy-specific blobs, and locking a published deploy does not prevent you from writing to associated deploy-specific stores.

By default, deploy-specific stores are located in the same region that your functions have been configured to run in. For a list of available regions, check out these region docs.

You can also manually specify which region to connect to, regardless of your function’s region, by passing the region as an option when using the getDeployStore method.

import { getDeployStore } from "@netlify/blobs";
import type { Context } from "@netlify/functions";
import { v4 as uuid } from "uuid";

export default async (req: Request, context: Context) => {
  // Generating a unique key for the entry.
  const key = uuid();
  const uploads = getDeployStore({ name: "file-uploads", region: "ap-southeast-2" });

  await uploads.set(key, await req.text());

  return new Response(`Entry added with key ${key}`);
};

Keep the following requirements in mind while working with Netlify Blobs:

  • Netlify Blobs uses the web platform fetch() to make HTTP calls, so Fetch API support is required. This is included with Node.js 18. If for some reason you can’t use Node.js 18, you can provide your own Fetch API support by supplying a fetch property to the getStore or getDeployStore method, as shown in the sketch after this list.
  • File-based uploads require continuous deployment or CLI deploys.
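
As a sketch, supplying your own Fetch implementation might look like this, here using the node-fetch package as an example; any spec-compliant implementation should work.

import { getStore } from "@netlify/blobs";
import fetch from "node-fetch";

// Pass a custom `fetch` implementation for runtimes without a global Fetch API.
const store = getStore({
  name: "my-store",
  fetch,
});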

Keep the following rules in mind when creating namespaces and blobs:

  • Store names cannot include the / character.
  • Store names cannot include the : character.
  • Store names cannot exceed 64 bytes.
  • Empty keys are not supported.
  • Object keys can include any Unicode characters.
  • Object keys cannot start with the / character.
  • Object keys cannot exceed 600 bytes.
  • An individual object’s total size cannot exceed 5 GB.
  • An individual object’s metadata size cannot exceed 2 KB.

Keep the following limitations in mind when working with Netlify Blobs:

  • Functions written in Go cannot access Netlify Blobs.
  • Local development with Netlify Dev uses a sandboxed local store that does not support file-based uploads. You cannot read production data during local development.
  • Deploy deletion deletes deploy-specific stores only. For other stores, you can create custom expiration logic or delete objects manually as needed.
  • Netlify Blobs is not currently supported as part of our HIPAA-compliant hosting offering. For more information, visit our Trust Center and download our reference architecture for HIPAA-compliant composable sites on Netlify.
  • Last write wins. If two overlapping calls try to write the same object, the last write wins. Netlify Blobs does not include a concurrency control mechanism. To manage the potential for race conditions, you can build an object-locking mechanism into your application (see the sketch at the end of this page).

  • Store access depends on @netlify/blobs module version. If you wrote to site-wide stores with @netlify/blobs version 6.5.0 or earlier, and you then upgrade the module to a more recent version, you will no longer be able to access data in those stores. This is due to an internal change to namespacing logic. You can migrate affected stores by running the following command in the project directory using the latest version of the Netlify CLI.

    netlify recipes blobs-migrate YOUR_STORE_NAME

    This makes the migrated store accessible with @netlify/blobs module version 7.0.0 and later.
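
As mentioned in the limitations above, Netlify Blobs does not include a built-in locking mechanism. Here is a minimal sketch of an optimistic update you can build from the conditional-write options documented earlier: read the current ETag with getWithMetadata, then write with onlyIfMatch so the update only succeeds if no other writer modified the entry in the meantime. The counter store and key are illustrative.

import { getStore } from "@netlify/blobs";

const counters = getStore("counters");

// Read the current value and its ETag.
const current = await counters.getWithMetadata("page-views");
const nextValue = Number(current?.data ?? 0) + 1;

// Write back only if the entry is unchanged; for a brand-new entry,
// use `onlyIfNew: true` instead of `onlyIfMatch`.
const { modified } = await counters.set("page-views", String(nextValue), {
  onlyIfMatch: current?.etag,
});

if (!modified) {
  // Another writer updated the entry first; retry or report a conflict.
  console.log("Conflict detected; value not written");
}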