Operations
#django
#gunicorn

Django Logging Setup for Production

A default Django project does not give you a production-safe logging setup.

Problem statement


In real deployments, you need logs that help during incidents: application errors, unexpected warnings, failed requests, and server-side failures must be visible and preserved long enough to investigate. At the same time, debug-style logging in production can create noise, consume disk, and expose sensitive data such as tokens, cookies, or request payloads.

A useful production setup must also fit the rest of the stack. Django logs are only one part of the picture. Gunicorn or Uvicorn will produce process and worker logs, and Nginx or Caddy will produce access and proxy error logs. If these are mixed together or routed inconsistently, troubleshooting becomes slower.

The goal is a Django logging configuration production teams can actually operate: predictable destinations, useful formatting, controlled log levels, and safe handling of sensitive data.

Quick answer

For a practical production logging baseline in Django:

  • run with DEBUG=False
  • log Django application events at INFO or WARNING
  • make ERROR and 5xx events easy to find
  • send logs to:
    • stdout/stderr in Docker or supervised environments
    • journald with systemd on Linux servers
    • files only if you also configure rotation and permissions
  • keep Django logs separate from Gunicorn access/error logs and Nginx logs
  • do not log secrets, tokens, cookies, passwords, or full request bodies
  • verify the setup by emitting a test log and forcing a controlled exception

Step-by-step solution

1. Choose a log destination first

Pick one destination model before editing Django settings.

Use stdout/stderr

Best for:

  • Docker
  • Kubernetes
  • platform logging collectors
  • systemd-supervised processes when journald captures service output

Use files

Best for:

  • simple single-server deployments
  • teams already using /var/log/...
  • environments where rotation and retention are managed locally

Use journald or centralized logging

Best for:

  • systemd-based Linux servers
  • searchable service logs
  • correlation across app, worker, and proxy services

If you are on a VM with Gunicorn managed by systemd, journald is usually simpler than app-managed log files.

2. Set production-safe Django logging

In your production settings module, define LOGGING explicitly.

import os
import logging.handlers
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent.parent

DEBUG = False

LOG_TO_FILE = os.getenv("DJANGO_LOG_TO_FILE", "0") == "1"
LOG_LEVEL = os.getenv("DJANGO_LOG_LEVEL", "INFO")
LOG_DIR = os.getenv("DJANGO_LOG_DIR", "/var/log/myapp")

if LOG_TO_FILE:
    os.makedirs(LOG_DIR, exist_ok=True)

_DEFAULT_HANDLERS = ["file"] if LOG_TO_FILE else ["console"]

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s %(levelname)s %(name)s %(process)d %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
        },
        "file": {
            "class": "logging.handlers.WatchedFileHandler",
            "filename": f"{LOG_DIR}/django.log",
            "formatter": "standard",
            # Open the file lazily. Without this, dictConfig opens the
            # file at startup and fails when console-only logging is
            # selected and LOG_DIR does not exist.
            "delay": True,
        },
    },
    "root": {
        "handlers": _DEFAULT_HANDLERS,
        "level": LOG_LEVEL,
    },
    "loggers": {
        "django": {
            "handlers": _DEFAULT_HANDLERS,
            "level": "INFO",
            "propagate": False,
        },
        "django.request": {
            "handlers": _DEFAULT_HANDLERS,
            "level": "ERROR",
            "propagate": False,
        },
        "django.server": {
            "handlers": _DEFAULT_HANDLERS,
            "level": "WARNING",
            "propagate": False,
        },
        "myapp": {
            "handlers": _DEFAULT_HANDLERS,
            "level": LOG_LEVEL,
            "propagate": False,
        },
    },
}

This gives you:

  • consistent formatting
  • one destination selected by environment
  • visible request errors through django.request
  • a dedicated app logger (myapp) for your own code

Use your real project or app name instead of myapp.
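In application code, obtain the named logger once per module so records flow through the "myapp" logger configured above. A minimal sketch; the function and its arguments are illustrative, not from this article:

```python
import logging

# Module-level logger; the name matches the "myapp" logger in LOGGING.
logger = logging.getLogger("myapp")

def charge_customer(order_id, amount):
    # INFO for normal business events, WARNING for rejected input.
    logger.info("charging order %s for %s", order_id, amount)
    if amount <= 0:
        logger.warning("rejected non-positive amount for order %s", order_id)
        return False
    return True
```

Note the lazy %-style arguments: the message is only formatted if a handler actually emits the record.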

Verification check

Run Django shell in production and emit a test line:

python manage.py shell -c "import logging; logging.getLogger('myapp').warning('production log test')"

Then inspect the destination:

journalctl -u <your-gunicorn-service-name> -f

or

tail -f /var/log/myapp/django.log

If nothing appears, stop and fix the destination before deploying further changes.
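Independently of the destination, the LOGGING dict itself can be validated before deploy, because dictConfig raises on malformed configuration. A sketch using a console-only variant trimmed down from step 2:

```python
import logging.config

# Console-only variant of the dict from step 2, trimmed for the check.
# dictConfig raises ValueError on malformed configuration, so running
# it locally catches typos before a deploy.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING)
logging.getLogger("myapp").info("config loaded")
```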

3. Keep Gunicorn logs separate from Django logs

Gunicorn should manage its own access and error logging. Do not rely on Django to replace that.

Example Gunicorn command:

gunicorn config.wsgi:application \
  --bind 127.0.0.1:8000 \
  --workers 3 \
  --access-logfile - \
  --error-logfile - \
  --log-level info

With -, Gunicorn writes to stdout/stderr, which systemd or Docker can capture.
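The same settings can live in a gunicorn.conf.py file instead of the command line, which keeps the ExecStart line in systemd short. A minimal sketch mirroring the flags above:

```python
# gunicorn.conf.py -- equivalent of the command-line flags above.
bind = "127.0.0.1:8000"
workers = 3
accesslog = "-"   # "-" means stdout
errorlog = "-"    # "-" means stderr
loglevel = "info"
```

Gunicorn picks this file up from the working directory by default, or explicitly via gunicorn -c gunicorn.conf.py config.wsgi:application.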

If you use systemd:

[Unit]
Description=gunicorn for myapp
After=network.target

[Service]
User=myapp
Group=www-data
WorkingDirectory=/srv/myapp/current
Environment="DJANGO_SETTINGS_MODULE=config.settings.production"
Environment="DJANGO_LOG_TO_FILE=0"
ExecStart=/srv/myapp/venv/bin/gunicorn config.wsgi:application --bind 127.0.0.1:8000 --workers 3 --access-logfile - --error-logfile - --log-level info
Restart=always
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

Reload and restart safely:

sudo systemctl daemon-reload
sudo systemctl restart <your-gunicorn-service-name>
sudo systemctl status <your-gunicorn-service-name>

Verification check

Follow service logs:

journalctl -u <your-gunicorn-service-name> -f

You should see both Gunicorn process logs and Django console logs if Django is writing to stdout.

Rollback note

If the service fails after changing logging or ExecStart, restore the previous unit file and restart:

sudo systemctl daemon-reload
sudo systemctl restart <your-gunicorn-service-name>

Then confirm the service is healthy with:

sudo systemctl status <your-gunicorn-service-name>
journalctl -u <your-gunicorn-service-name> -b

4. If you use file logging, add rotation and permissions

If you write to files, do not leave them unmanaged.

Create the directory with restricted access:

sudo mkdir -p /var/log/myapp
sudo chown myapp:www-data /var/log/myapp
sudo chmod 750 /var/log/myapp

Create a logrotate rule:

/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    create 0640 myapp www-data
}

Save that as:

/etc/logrotate.d/myapp

Test the syntax:

sudo logrotate -d /etc/logrotate.d/myapp

If you rotate logs externally with logrotate, prefer logging.handlers.WatchedFileHandler on Linux. A plain FileHandler may keep writing to the old rotated file until the process restarts or reopens the file.
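This behavior can be demonstrated outside Django with the standard library alone. The sketch below rotates a file underneath a WatchedFileHandler, which reopens the original path on the next write; all paths are throwaway temp files:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "django.log")

logger = logging.getLogger("rotation-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
handler = logging.handlers.WatchedFileHandler(log_path)
logger.addHandler(handler)

logger.info("before rotation")

# Simulate what logrotate does: move the live file aside.
os.rename(log_path, log_path + ".1")

# On the next emit, WatchedFileHandler notices the inode change and
# reopens log_path; a plain FileHandler would keep appending to the
# renamed file until the process restarted.
logger.info("after rotation")
```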

Watch the app log:

tail -f /var/log/myapp/django.log

Verification check

Confirm the Django process can write to the file after deploy. Permission errors here are common.

If your first log file is created by the app process, check its mode as well:

ls -l /var/log/myapp/django.log
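A small pre-flight check run as the app user can surface permission problems before the first real log write. This sketch simply attempts the same append-mode open a file handler performs; the function name and paths are illustrative:

```python
import os
import tempfile

def can_write_log(path):
    # Try the same operation the logging file handler will perform:
    # open the file for append.
    try:
        with open(path, "a"):
            return True
    except OSError:
        return False

# Writable location vs. a directory that does not exist.
writable = can_write_log(os.path.join(tempfile.gettempdir(), "preflight.log"))
blocked = can_write_log("/nonexistent-dir/django.log")
```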

Rollback note

If file writes fail in production, switch back to console or journald by setting:

DJANGO_LOG_TO_FILE=0

Then restart your Gunicorn service.

5. Add security-focused logging without leaking data

You should log security-relevant events, but not sensitive content.

Good things to notice in logs:

  • repeated 403 or permission failures
  • suspicious host header errors
  • admin login failures
  • 5xx spikes
  • unexpected exception traces

Do not log:

  • Authorization headers
  • session cookies
  • passwords
  • reset tokens
  • full request bodies containing user data

A minimal redaction filter for custom logs looks like this:

import logging

class RedactSensitiveDataFilter(logging.Filter):
    SENSITIVE_KEYS = {"password", "token", "access_token", "refresh_token", "authorization", "cookie"}

    def filter(self, record):
        if isinstance(record.args, dict):
            record.args = {
                k: ("***" if str(k).lower() in self.SENSITIVE_KEYS else v)
                for k, v in record.args.items()
            }
        return True

Wire it into Django logging explicitly:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "filters": {
        "redact_sensitive": {
            "()": "myapp.logging_filters.RedactSensitiveDataFilter",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "filters": ["redact_sensitive"],
        },
    },
}

This example is intentionally limited. It only helps when your log call passes a dictionary in record.args. It does not automatically sanitize exception messages, arbitrary strings, headers, or request bodies. The main control is still to avoid logging sensitive fields in application code in the first place.
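The limitation is visible when you run the filter directly: dict-style arguments are redacted, while a secret interpolated into the message string passes through untouched. The logger and handler wiring below exists only for the demonstration:

```python
import io
import logging

class RedactSensitiveDataFilter(logging.Filter):
    SENSITIVE_KEYS = {"password", "token", "access_token",
                      "refresh_token", "authorization", "cookie"}

    def filter(self, record):
        if isinstance(record.args, dict):
            record.args = {
                k: ("***" if str(k).lower() in self.SENSITIVE_KEYS else v)
                for k, v in record.args.items()
            }
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.addFilter(RedactSensitiveDataFilter())
logger = logging.getLogger("redaction-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

# Caught: dict-style args are redacted before formatting.
logger.info("login user=%(user)s password=%(password)s",
            {"user": "alice", "password": "hunter2"})
# Not caught: the secret is already baked into the message string.
logger.info("raw token=%s" % "abc123")

output = stream.getvalue()
```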

6. Correlate Django, Gunicorn, and Nginx logs

Each layer should log different things:

Django should log

  • application warnings and errors
  • exception traces
  • important business-process failures
  • selected security-relevant events

Gunicorn or Uvicorn should log

  • worker starts and exits
  • process crashes
  • timeouts
  • access logs if enabled at the app server layer

Nginx should log

  • client access
  • upstream failures
  • 502/504 proxy issues
  • TLS and connection-level problems

Useful inspection commands:

journalctl -u <your-gunicorn-service-name> -f
tail -f /var/log/nginx/error.log
tail -f /var/log/nginx/access.log
grep -R "Traceback" /var/log/myapp /var/log/nginx 2>/dev/null

During an incident, correlate by timestamp, request path, client IP, and upstream status. If you later add request IDs, this becomes easier, but a basic timestamp-based setup is already useful.
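A minimal request-ID layer needs only the standard library: store an ID in a contextvars.ContextVar per request (for example from a small Django middleware, not shown here) and attach it to every record with a logging filter. A sketch with illustrative names:

```python
import contextvars
import io
import logging
import uuid

# Set once per request, e.g. from a small Django middleware (not shown).
request_id_var = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record):
        # Expose the current request ID to the formatter.
        record.request_id = request_id_var.get()
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.addFilter(RequestIdFilter())
handler.setFormatter(logging.Formatter("%(request_id)s %(levelname)s %(message)s"))

logger = logging.getLogger("reqid-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

# Simulate one request: assign an ID, log inside it, then reset.
token = request_id_var.set(uuid.uuid4().hex[:8])
logger.info("handling request")
request_id_var.reset(token)
logger.info("outside any request")
```

Every line sharing the same leading ID belongs to the same request, which makes cross-layer correlation much faster than timestamps alone.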

7. Verify logging in production

Do not assume logging works because the app starts.

Trigger a test app log

python manage.py shell -c "import logging; logging.getLogger('myapp').error('error-path verification')"

Trigger a controlled exception

Use a temporary internal-only test view or a dedicated management command in a maintenance window, then confirm the exception appears in the Django log destination.

Example temporary view:

from django.http import HttpResponse

def logging_test_error(request):
    raise RuntimeError("controlled production logging test")

Map it to an internal-only URL, request it once, confirm the exception is logged, then remove it.

If you do not want a temporary URL, use a management command that logs with logger.exception(...) inside a handled test block.
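The core of such a test block is ordinary Python: logger.exception logs at ERROR and appends the current traceback, so it exercises the same path as a real failure. The logger name and wiring are illustrative:

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("exc-demo")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.INFO)
logger.propagate = False

try:
    raise RuntimeError("controlled production logging test")
except RuntimeError:
    # exception() == error() plus the active traceback.
    logger.exception("logging verification failed as intended")

output = stream.getvalue()
```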

Confirm the expected destination

  • journald: journalctl -u <your-gunicorn-service-name> -f
  • file: tail -f /var/log/myapp/django.log
  • Docker: docker logs <container> --follow

Confirm rotation and permissions

For file logging:

  • log files belong to the app user
  • mode prevents world-read access
  • logrotate config is valid
  • disk usage is monitored

Explanation

A good Django production logging setup is mostly about operational clarity.

Sending logs to stdout/stderr works well in containers and supervised environments because the runtime already knows how to capture and forward streams. On Linux VMs with systemd, journald gives you centralized service logs without extra file-handling logic. File logging is still valid, but only if you also manage ownership, rotation, retention, and disk growth.

Keeping Django logs separate from Gunicorn and Nginx logs matters because each layer answers different questions. Django helps with application exceptions, Gunicorn with worker behavior and process failures, and Nginx with edge and proxy failures. Mixing them into one destination without structure makes incident response slower.

disable_existing_loggers=False is usually the safer default in Django production logging because it avoids unexpectedly silencing framework loggers. Setting propagate=False on specific loggers helps prevent duplicate lines when both a named logger and the root logger handle the same event.
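The duplicate-line effect is easy to reproduce with plain logging: with propagate left at its default of True, a record handled by a named logger is handed to the root logger's handlers as well. Names below are illustrative:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

child = logging.getLogger("prop-demo")
child.addHandler(handler)   # same handler reachable twice in the chain
child.setLevel(logging.INFO)

child.info("duplicated")    # handled by child, then again via root

child.propagate = False
child.info("logged once")   # root no longer sees the record

output = stream.getvalue()
```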

Edge cases or notes

  • If you deploy with Docker, prefer stdout/stderr only. Let the container runtime or platform handle rotation.
  • If logs suddenly spike, check for repeated 4xx/5xx loops, health-check noise, or an exception inside middleware.
  • Logging full request bodies is especially risky for forms, authentication flows, and API endpoints.
  • Static and media requests usually belong in reverse-proxy access logs, not Django application logs.
  • If Gunicorn workers are timing out, look at both Gunicorn error logs and Nginx upstream errors; Django alone may not show the full failure path.
  • Forwarded headers and proxy configuration can affect what request context appears in logs. If request scheme, host, or client IP looks wrong, verify your reverse-proxy and Django proxy header settings.
  • Email-based admin alerts can exist as a secondary path, but they are not a replacement for searchable persistent logs.

When manual logging setup becomes repetitive

Once you manage multiple Django services, this setup becomes repetitive fast. The parts worth standardizing first are the LOGGING dict defaults, systemd logging behavior, log directory creation, and logrotate rules. That is a good point to move the manual baseline into reusable templates or deployment scripts so each project starts from the same hardened defaults.

For the broader production baseline, see Django Production Settings Checklist (DEBUG, ALLOWED_HOSTS, CSRF).

If you are building the common Linux stack around this, read Deploy Django with Gunicorn and Nginx and How to configure systemd for Django and Gunicorn.

If you are deciding between deployment interfaces first, see Django WSGI vs ASGI: Which One Should You Deploy?.

If you are already debugging upstream failures, continue with How to troubleshoot 502 Bad Gateway in Django with Nginx and Gunicorn.

FAQ

Should Django production logs go to files or stdout?

Use stdout/stderr for Docker, Kubernetes, and most supervised environments. Use journald on systemd-based Linux servers if you want service-level log collection without managing files directly. Use files only when you also configure permissions, rotation, and retention.

What log level should I use for Django in production?

Start with INFO for the main Django or app logger and ensure django.request captures ERROR. Avoid DEBUG in production unless you are diagnosing a short-lived issue and can control volume and data exposure.

How do I stop Django from logging sensitive data?

Do not log secrets in application code. Avoid logging headers, cookies, passwords, tokens, and full request bodies. If needed, add redaction filters for known sensitive keys, but treat that as a safety layer, not the primary control.

Do I need separate logs for Django, Gunicorn, and Nginx?

Yes. Django logs application behavior, Gunicorn logs worker and process behavior, and Nginx logs edge traffic and proxy failures. Keeping them separate makes troubleshooting much faster.

How do I rotate Django logs safely on Linux?

If Django writes to files, use logging.handlers.WatchedFileHandler with logrotate, verify ownership and file creation mode, and test the config with logrotate -d. If possible, prefer journald or container logging instead of app-managed files on newer deployments.

2026 · django-deployment.com - Django Deployment knowledge base