Log Drains

This feature may not be available on all plans.

The Log Drains feature allows you to connect site traffic logs and function logs from Netlify’s CDN to third-party monitoring services for analysis, alerting, and data persistence. Once you’ve configured a log drain for a site, Netlify batches the site’s log records and posts them to an endpoint in JSON/NDJSON format in near real-time. A configured external monitoring provider receives these records from the intake endpoint and makes them available for processing. The site traffic log output tracks visitor requests for assets and pages, while the function log output tracks information such as function invocations.
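In NDJSON format, each batch is a sequence of JSON log records, one per line. As a rough sketch of what consuming such a batch looks like (the field names and values below are hypothetical examples, not a guaranteed schema):

```python
import json

def parse_ndjson(batch: str) -> list:
    """Parse an NDJSON batch into a list of log records, skipping blank lines."""
    return [json.loads(line) for line in batch.splitlines() if line.strip()]

# Hypothetical two-record batch, one JSON object per line
batch = (
    '{"log_type": "traffic", "status_code": 200, "url": "/index.html"}\n'
    '{"log_type": "functions", "function_name": "hello", "level": "INFO"}\n'
)
records = parse_ndjson(batch)
print(len(records))  # 2
```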

# Configure a log drain

To set up a log drain, you must be a Netlify team Owner and have an account and API key provisioned with an external monitoring provider. Netlify supports integration with:

# Datadog

  1. For your selected site, go to Site settings > Log Drains and select Enable a log drain.

  2. Select Datadog as the Log drain service.

  3. Select the Log type. You can drain your site’s traffic logs, function logs, or both.

  4. Under Service settings, select the Region where your Datadog site is located.

  5. Enter the unique API key for your logging service provider account. Verify that you are entering your API key instead of the Datadog application key.

  6. Optional: Add key/value pairs under Tags to tag your logs with certain attributes. These become query parameters in requests to the logs intake endpoint.

    Available keys for Datadog:

    | Key | Tag description | Example value |
    | --- | --- | --- |
    | `ddtag` | tags associated with logs, grouped into a single list with the value `ddtags` | `env:prod` |
    | `service` | name of the application or service generating the log events | `mysubdomain` |
  7. Select Connect.
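The tags from step 6 are appended as query parameters on requests to the logs intake endpoint. As an illustrative sketch only (the base URL and `dd-api-key` parameter name here are placeholders, not the exact endpoint details Netlify uses internally):

```python
from urllib.parse import urlencode

def build_intake_url(base: str, api_key: str, tags: dict) -> str:
    # The API key and each tag key/value pair become query parameters
    return f"{base}?{urlencode({'dd-api-key': api_key, **tags})}"

url = build_intake_url(
    "https://LOGS_INTAKE_ENDPOINT",  # placeholder for the region-specific intake URL
    "YOUR_API_KEY",
    {"ddtags": "env:prod", "service": "mysubdomain"},
)
print(url)
```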

# New Relic

  1. For your selected site, go to Site settings > Log Drains and select Enable a log drain.

  2. Select New Relic as the Log drain service.

  3. Select the Log type. You can drain your site’s traffic logs, function logs, or both.

  4. Under Service settings, select the Region that applies to your New Relic account.

  5. Enter a License API key, also called INGEST-LICENSE, for your New Relic account. Verify that you’re entering your License API key and not your License API key ID or user key.

  6. Optional: To add a tag for your log drain, under Tags, enter the key and value. Then select Add tag. Any tags you add become query parameters in log drain requests to New Relic.

    Example tags for New Relic:

    | Key | Value | Tag description |
    | --- | --- | --- |
    | `environment` | `production` | environment type |
    | `service` | `mysubdomain` | name of the application or service generating the log events |

For guided help on optimizing your New Relic dashboard for your site’s logs, install the Netlify Logs quickstart on New Relic.

  7. Select Connect.

# Sumo Logic

To configure a log drain that sends logs to your Sumo Logic account, you need to:

  1. Configure your HTTP Logs and Metrics Source in Sumo Logic
  2. Set up the log drain in the Netlify UI

# Configure your HTTP Logs and Metrics Source in Sumo Logic

  1. If you haven’t already, create a hosted collector to collect your data in Sumo Logic.

  2. In the Sumo Logic web app, add and configure your HTTP Logs and Metrics Source using Sumo Logic’s docs.

  3. Ensure that you copy your HTTP Source Address to use in the Netlify UI.

# Set up the log drain in the Netlify UI

For security, Netlify hides the full HTTP Source Address in the Netlify UI. After you configure your log drain, only the base URL is visible in the Netlify UI.

  1. On Netlify, for your selected site, go to Site settings > Log Drains and select Enable a log drain.
  2. Select Sumo Logic as the Log drain service.
  3. Select the Log type. You can drain your site’s traffic logs, function logs, or both.
  4. In the Full URL field, enter the HTTP Source Address you copied from Sumo Logic.
  5. Select Connect.

# Logflare

  1. For your selected site, go to Site settings > Log Drains and select Enable a log drain.
  2. Select Logflare as the Log drain service.
  3. Select the Log type. You can drain your site’s traffic logs, function logs, or both.
  4. Under Service settings, enter your Logflare Ingest API key.
  5. To specify where you want your logs to go, enter the Logflare Source ID for your logs.
  6. Select Connect.

# Amazon S3

To configure a log drain that sends logs to your Amazon S3 account as Gzip-compressed files, you need to:

  1. Create an Amazon S3 bucket and set up a bucket policy
  2. Configure the log drain in the Netlify UI

# Create an Amazon S3 bucket and set up a bucket policy

  1. In the AWS Management Console, create an S3 bucket.
    • Object Ownership for the bucket should be set to either Bucket owner enforced or Bucket owner preferred.
    • Make note of the bucket name to use in your Netlify configuration.
  2. Go to your bucket’s Permissions and under Bucket policy select Edit.
  3. Copy and paste the following bucket policy, replacing YOUR_BUCKET_NAME with the name of your Amazon S3 bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NetlifyLogDrains",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::128866310339:role/log-shipper"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
```
  4. Select Save Changes.

**Bucket lifecycle**

For cost control reasons, we recommend deleting logs after a period of 90 days. You can configure your Amazon S3 bucket to delete logs automatically by setting lifecycle rules.
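For example, a lifecycle configuration that expires objects after 90 days might look like the following. This is a sketch for the AWS CLI's `put-bucket-lifecycle-configuration` command; the `logs/netlify/` prefix is an assumption matching the bucket path recommended below, so adjust it to your own setup:

```json
{
  "Rules": [
    {
      "ID": "ExpireNetlifyLogs",
      "Filter": { "Prefix": "logs/netlify/" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```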

# Configure the log drain in the Netlify UI

  1. For your selected site, go to Site settings > Log Drains and select Enable a log drain.
  2. Select Amazon AWS S3 as the Log drain service.
  3. Select the Log type. You can drain your site’s traffic logs, function logs, or both.
  4. Under Service settings, select the Bucket region where your bucket is located.
  5. Enter your S3 Bucket name.
  6. Optional: Enter your S3 Bucket path. We recommend using YOUR_BUCKET_NAME/logs/netlify/. If no path is specified, the logs will be written to the root of the S3 bucket.
  7. Select Verify bucket and connect.

  8. Use the provided path to navigate to your S3 bucket’s verification file.
  9. Copy and paste the contents of the file into the Verification token field, then select Verify.

# General HTTP endpoint

Netlify’s General HTTP endpoint support allows you to set up a custom log drain with any external monitoring provider that accepts log drain requests in JSON or NDJSON format.

To set up a custom log drain, you must provide a full URL for your external monitoring provider’s endpoint and include your third-party API key as a query parameter. For example, this URL includes an API key and a tag as query parameters:

https://YOUR_ENDPOINT_RESOURCE_PATH?api-key=YOUR_API_KEY&environment=production

For security, Netlify hides the full endpoint in the Netlify UI. After you configure your log drain, only the base URL is visible in the Netlify UI.

  1. For your selected site, go to Site settings > Log Drains and select Enable a log drain.
  2. Select General HTTP endpoint as the Log drain service.
  3. Select the Log type. You can drain your site’s traffic logs, function logs, or both.
  4. Enter the Full URL for your endpoint, including your API key and any optional tags as query parameters.
  5. Select the Log Drain Format that your endpoint accepts.
  6. Select Connect.

# Edit a log drain

If you need to adjust the settings for an existing log drain, under Site settings > Log Drains, select Edit settings. Configuration changes become active within approximately five minutes.

To stop sending specific log data to your external monitoring provider, clear the Log type you no longer need and select Verify bucket and save.

# Remove a log drain

To terminate an existing log drain configuration for a site, under Site settings > Log Drains, select Delete log drain. All log types associated with the site’s log drain will be removed. Saved logs are accessible in your logging service provider account.

# Traffic log output

Drained site traffic logs include the following fields parsed from our CDN logs:

  • account_id: ID of the account.
  • client_ip: IP address of the client.
  • content_type: Content-Type of the request (for example, text/html).
  • deploy_id: ID of the deploy (for example, 61153ae8b0f6a900088386e8).
  • duration: duration of the request in milliseconds.
  • log_type: indicates the type of log. The value is traffic.
  • method: request method.
  • referrer: referrer on the request.
  • request_id: Netlify request ID (for example, 01FDWR77JMF2DA1CHF5YA6H07C).
  • request_size: size of the request in bytes.
  • response_size: size of the response in bytes.
  • site_id: ID of the site.
  • status_code: status code of the HTTP response.
  • timestamp: timestamp of the request, formatted with RFC 3339 (for example, 2021-08-24T18:54:34.831Z).
  • url: URL of the request.
  • user_agent: user-agent that made the request.
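Putting the fields above together, a single drained traffic log record might look like the following. The values are illustrative (the IDs and timestamp reuse the examples from the field list; the rest are placeholders):

```json
{
  "account_id": "YOUR_ACCOUNT_ID",
  "client_ip": "192.0.2.1",
  "content_type": "text/html",
  "deploy_id": "61153ae8b0f6a900088386e8",
  "duration": 42,
  "log_type": "traffic",
  "method": "GET",
  "referrer": "https://example.com/",
  "request_id": "01FDWR77JMF2DA1CHF5YA6H07C",
  "request_size": 512,
  "response_size": 2048,
  "site_id": "YOUR_SITE_ID",
  "status_code": 200,
  "timestamp": "2021-08-24T18:54:34.831Z",
  "url": "https://www.example.com/index.html",
  "user_agent": "Mozilla/5.0"
}
```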

# Function log output

Drained function logs include the following fields parsed from our CDN logs:

  • account_id: ID of the account.
  • deploy_id: ID of the deploy (for example, 61153ae8b0f6a900088386e8).
  • duration: amount of time it took for AWS Lambda to execute the function.
  • function_name: name of the function.
  • function_type: type of function. This value will always be normal.
  • level: level of the log line (for example, INFO, ERROR, WARN, REPORT).
  • log_message: log message.
  • log_type: field indicating the type of log. The value is functions.
  • method: method of the request (for example, GET).
  • path: path of the request (for example, /.netlify/functions/your-awesome-function).
  • request_id: Netlify request ID (for example, 01FDWR77JMF2DA1CHF5YA6H07C).
  • site_id: ID of the site.
  • status_code: status code of the HTTP response.
  • timestamp: timestamp of the request, formatted with RFC 3339 (for example, 2021-08-24T18:54:34.831Z).

Note that if a function invocation’s log output exceeds 4 KB, only the last 4 KB of logs will be sent to the logging service and the log message will be truncated.