#django
#nginx
#gunicorn

Django 502 Bad Gateway: Causes and Fixes

A 502 Bad Gateway in a Django deployment usually means your reverse proxy is reachable, but it cannot get a valid response from the upstream app server.

Problem statement

In a typical production stack, that means:

  • Nginx or Caddy is accepting the request
  • but Gunicorn or Uvicorn is stopped, unreachable, misconfigured, or crashing
  • or the proxy is pointing to the wrong socket or port
  • or the upstream process starts but fails before serving requests

This guide focuses on real production setups:

  • Linux servers
  • Nginx or Caddy in front of Django
  • Gunicorn or Uvicorn as the app server
  • Docker and non-Docker deployments

If your Django app returns 502 after a deploy, after a restart, or only in production, the fastest fix is to identify which layer failed before restarting everything.

Quick answer

For a fast Django 502 Bad Gateway fix, use this sequence:

  1. Confirm the reverse proxy is running.
  2. Confirm Gunicorn or Uvicorn is running.
  3. Test the upstream directly on its local port or unix socket.
  4. Read proxy logs and app logs before restarting services.
  5. Verify that the proxy target matches the app bind address.
  6. Check for startup failures:
    • missing env vars
    • missing SECRET_KEY
    • import or dependency errors
    • failed migrations
    • bad settings module
  7. Restart only the failing service.
  8. If the 502 started immediately after a release, roll back to the last known good version only after checking schema compatibility.

Useful commands:

sudo systemctl status nginx
sudo systemctl status gunicorn
sudo journalctl -u gunicorn -n 100 --no-pager
sudo journalctl -u nginx -n 100 --no-pager
curl -I http://127.0.0.1:8000/health/
curl --unix-socket /run/gunicorn/gunicorn.sock http://localhost/health/

Step-by-step solution

Step 1 — Confirm which layer is failing

Start by separating proxy problems from app server problems.

Check whether Nginx or Caddy is healthy

For Nginx:

sudo systemctl status nginx
sudo nginx -t

Verification checks:

  • service should be active (running)
  • nginx -t should report syntax is OK

If config validation fails, do not reload Nginx yet. Fix syntax first.

Check whether Gunicorn or Uvicorn is running

For Gunicorn:

sudo systemctl status gunicorn
sudo journalctl -u gunicorn -n 100 --no-pager

For Uvicorn:

sudo systemctl status uvicorn
sudo journalctl -u uvicorn -n 100 --no-pager

Also check listeners:

ss -ltnp
ls -l /run/gunicorn/gunicorn.sock

You are looking for one of these states:

  • app server is not running at all
  • app server is running but bound to the wrong port or socket
  • socket file does not exist
  • service is crash-looping

Distinguish the failure type

A Django 502 usually falls into one of these groups:

  • stopped upstream service
  • wrong upstream target
  • unix socket permission issue
  • app crash on boot
  • timeout or resource exhaustion

Do not restart both proxy and app blindly. That can hide the original error and lengthen the outage.

Step 2 — Read the right logs first

Reverse proxy logs

For Nginx:

sudo journalctl -u nginx -n 100 --no-pager
sudo tail -n 100 /var/log/nginx/error.log

Common patterns:

  • connect() failed (2: No such file or directory)
    Nginx expects a unix socket that does not exist.
  • connect() failed (13: Permission denied)
    Nginx cannot access the socket or parent directory.
  • connect() failed (111: Connection refused)
    Nothing is listening on the target host and port.
  • upstream prematurely closed connection
    The app accepted the connection but crashed or exited before completing the response.
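These patterns are regular enough to triage mechanically. As an illustrative sketch (the mapping simply mirrors the list above; this is not an official Nginx tool), a small shell helper can classify an error.log fragment:

```shell
# classify_502: map a common Nginx upstream error fragment to a likely
# cause. Patterns mirror the list above; this is a triage aid, not a
# complete taxonomy.
classify_502() {
  case "$1" in
    *"(2: No such file or directory)"*) echo "missing unix socket" ;;
    *"(13: Permission denied)"*)        echo "socket or directory permission problem" ;;
    *"(111: Connection refused)"*)      echo "nothing listening on the upstream target" ;;
    *"upstream prematurely closed"*)    echo "app crashed or exited mid-request" ;;
    *)                                  echo "unknown - read the full error.log entry" ;;
  esac
}

classify_502 "connect() failed (111: Connection refused) while connecting to upstream"
# prints: nothing listening on the upstream target
```

During an incident, feed it the most recent error.log lines so the first decision, proxy problem versus app problem, is made from evidence rather than guesswork.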

App server logs

Typical Gunicorn or Uvicorn startup failures include:

  • bad DJANGO_SETTINGS_MODULE
  • missing .env or environment variable
  • missing SECRET_KEY
  • import error after code deploy
  • missing Python package in the virtualenv or image
  • database connection failure during startup
  • migration-related code path failing on boot

Example:

sudo journalctl -u gunicorn -n 200 --no-pager

If the service exits immediately after ExecStart, focus on the app server unit or the release contents, not Nginx.

systemd journal or container logs

For non-Docker:

sudo journalctl -xe --no-pager

For Docker or Compose:

docker ps
docker logs <container>
docker compose ps
docker compose logs web

Look for:

  • restart loops
  • ExecStart failures
  • OOM kills
  • missing files inside the container
  • healthcheck failures

Step 3 — Fix the most common Django 502 causes

Upstream process is not running

If Gunicorn or Uvicorn is stopped, inspect the unit file and logs before restarting.

Example Gunicorn unit:

[Service]
User=www-data
Group=www-data
WorkingDirectory=/srv/app/current
EnvironmentFile=/srv/app/shared/.env
RuntimeDirectory=gunicorn
ExecStart=/srv/app/venv/bin/gunicorn --workers 3 --bind unix:/run/gunicorn/gunicorn.sock project.wsgi:application
Restart=on-failure

RuntimeDirectory=gunicorn matters if you bind under /run, because /run is a tmpfs that is emptied at every boot; without it, /run/gunicorn will not exist after a reboot and the socket bind will fail.

Common problems:

  • wrong virtualenv path
  • wrong WorkingDirectory
  • missing EnvironmentFile
  • bad module path such as project.wsgi:application

After fixing the unit:

sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo systemctl status gunicorn

Rollback note: if this broke after a deployment, restore the previous release symlink or previous unit file before restarting repeatedly.

Wrong socket or upstream port in reverse proxy config

Your proxy and app bind target must match.

Nginx with TCP:

location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Nginx with unix socket:

location / {
    proxy_pass http://unix:/run/gunicorn/gunicorn.sock:/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

If Gunicorn binds to 127.0.0.1:8000, Nginx cannot reach it through /run/gunicorn/gunicorn.sock, and vice versa.

Validate and reload only after checking:

sudo nginx -t
sudo systemctl reload nginx

Unix socket permission problems

Check the socket and every parent directory:

ls -l /run/gunicorn/gunicorn.sock
namei -l /run/gunicorn/gunicorn.sock
stat /run/gunicorn/gunicorn.sock

Typical issues:

  • socket is owned by a user or group Nginx cannot access
  • /run/gunicorn permissions block traversal

Fix ownership narrowly. Do not make sockets world-writable.
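One narrow fix is to have Gunicorn create the socket group-accessible to the proxy user in the first place, instead of loosening permissions afterwards. A sketch of the relevant unit change, assuming Nginx runs as www-data (the Debian-family default) and the unit shown earlier in Step 3:

```ini
[Service]
# umask 007 yields srwxrwx--- on the socket: owner and group only.
# The proxy user must be in the socket's group (here via Group=www-data).
Group=www-data
ExecStart=/srv/app/venv/bin/gunicorn --umask 007 --workers 3 \
    --bind unix:/run/gunicorn/gunicorn.sock project.wsgi:application
```

This keeps world access off the socket while still letting the proxy connect.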

Django app fails on startup

A bad deploy often causes 502 symptoms right after release.

Check:

  • DJANGO_SETTINGS_MODULE
  • secret keys and env vars
  • dependency installation
  • migration state
  • release path correctness

Useful verification:

readlink -f /srv/app/current
sudo -u www-data /srv/app/venv/bin/python /srv/app/current/manage.py check --deploy

If your app touches the database or Redis during import time, AppConfig startup, or early request initialization, verify those dependencies too.

If Django is behind a proxy and you enforce HTTPS redirects, make sure your proxy headers and Django settings are aligned. This is more likely to cause redirect or CSRF problems than a 502, but it is still worth verifying during proxy-related deploy changes.
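A sketch of that alignment on the Django side (these are standard Django settings; the header value must match what your proxy actually sends, as in the proxy_set_header lines shown earlier, and the domain below is a placeholder):

```python
# settings.py fragment (sketch). Only trust these headers if the proxy
# strips and resets them on every request; otherwise clients can forge them.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
USE_X_FORWARDED_HOST = True  # only if the proxy forwards the original Host
CSRF_TRUSTED_ORIGINS = ["https://example.com"]  # placeholder domain
```

With SECURE_PROXY_SSL_HEADER set, request.is_secure() reflects the original client connection rather than the plain-HTTP hop between proxy and app server.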

Timeout or resource exhaustion

If logs show workers dying or requests hanging:

  • startup may be too slow after deploy
  • worker count may be too low
  • memory pressure may be killing workers
  • DB or Redis may be blocking startup

Check for OOM or worker exits in the journal. If this is resource-related, increasing workers alone may not help if memory is already exhausted.

Bad release or incomplete deploy

Common release issues:

  • code updated but virtualenv not updated
  • new image not actually running
  • symlink points to partial release
  • migrations ran in the wrong order
  • stale socket left behind from an old process

If the 502 began immediately after release, rolling back is usually safer than trying to patch production live.

Step 4 — Verify upstream connectivity directly

Test a TCP upstream locally

curl -I http://127.0.0.1:8000/health/

If this fails locally, Nginx is not the primary problem.

Test a unix socket-backed upstream

curl --unix-socket /run/gunicorn/gunicorn.sock http://localhost/health/

Also confirm the socket exists:

ls -l /run/gunicorn/gunicorn.sock

Check a cheap health endpoint

Use a lightweight URL such as /health/ that avoids expensive queries. A successful local response confirms the app server can serve traffic before involving the proxy.

Verification checks:

  • local curl returns 200 or expected status
  • headers return quickly
  • app does not crash during the request

Step 5 — Restart safely and avoid making the outage worse

Restart only the failing unit first

If Gunicorn failed, restart Gunicorn first:

sudo systemctl restart gunicorn
sudo systemctl status gunicorn

Do not restart Nginx unless Nginx config changed or Nginx itself failed.

Reload reverse proxy only after config test passes

sudo nginx -t
sudo systemctl reload nginx

Confirm migrations and static files before retrying full traffic

A restarted app that still depends on missing schema or static assets can fail again.

Check your release process:

  • were migrations applied?
  • was collectstatic run if needed?
  • does the current release contain the expected code?

Roll back if the 502 started immediately after a release

A safe rollback path is critical.

Example checks:

readlink -f /srv/app/current
sudo systemctl restart gunicorn
curl -I http://127.0.0.1:8000/health/

Rollback sequence:

  1. verify whether the failed release included schema changes
  2. repoint to the previous release only if the previous code is still compatible with the current database schema
  3. restart the app service
  4. verify local upstream health
  5. test through Nginx
  6. only then resume normal traffic

Do not roll back application code blindly if the failed release included non-backward-compatible migrations; verify schema compatibility first or restore both code and database from a known-good recovery point.
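The repoint step can be sketched as a small helper; the /srv/app release layout is an assumption carried over from the earlier examples:

```shell
# repoint_release: atomically swap the "current" symlink to a previous
# release directory, then print where it now points.
repoint_release() {
  prev="$1"   # e.g. /srv/app/releases/<previous>
  link="$2"   # e.g. /srv/app/current
  ln -sfn "$prev" "$link"   # -n replaces the symlink itself, not its target
  readlink -f "$link"
}
```

After repointing, restart the app service and re-run the local health check before sending traffic back through the proxy.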

Step 6 — Deployment-specific fixes

Non-Docker Django deployments

Common causes:

  • virtualenv path mismatch in ExecStart
  • incorrect WorkingDirectory
  • stale symlink target in /srv/app/current
  • env file missing on the server

These problems often appear after manual deploy changes.

Docker and Compose deployments

Check container state first:

docker compose ps
docker compose logs web

Common causes:

  • container exits immediately
  • service name in proxy config no longer matches
  • app binds to the wrong port
  • socket path mounted in the wrong location
  • healthcheck keeps the service out of rotation

If your proxy runs on the host and app runs in Docker, verify the host can actually reach the container port or mounted socket.
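A container-level health check keeps a broken app out of rotation and makes "container exits immediately" visible in docker compose ps. A sketch, assuming the service is named web, the image contains curl, and the app serves /health/ on port 8000 inside the container:

```yaml
services:
  web:
    # Marks the container unhealthy if the app cannot serve /health/.
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:8000/health/"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s   # grace period for slow startups after a deploy
```

If the proxy or orchestrator respects health state, a crash-looping release never receives traffic in the first place.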

Explanation

A 502 is not a Django exception by itself. It is usually a proxy-to-upstream failure.

That is why the fastest fix workflow is:

  1. identify the failing layer
  2. verify the upstream directly
  3. inspect logs before restart
  4. repair the bind target, process, or startup issue
  5. only then reload the proxy

Unix sockets and localhost TCP both work in production. Use the one you can operate reliably. Unix sockets are common on single-host deployments. Localhost TCP is often simpler to debug.

When this process becomes repetitive across multiple apps, it is a good candidate for a reusable script or template. Common items to automate first are config validation, local upstream health checks, service status collection, and rollback when the health check fails.
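A sketch of such a script, following the same order as the workflow above. The service names, the health URL, and the assumption that the proxy is Nginx are all illustrative:

```shell
# deploy_check: verify each layer in order and stop at the first failure,
# so the output names the failing layer instead of restarting everything.
deploy_check() {
  proxy="$1"; app="$2"; health_url="$3"
  systemctl is-active --quiet "$proxy" || { echo "FAIL: $proxy not running"; return 1; }
  nginx -t >/dev/null 2>&1             || { echo "FAIL: proxy config invalid"; return 1; }
  systemctl is-active --quiet "$app"   || { echo "FAIL: $app not running"; return 1; }
  curl -fsS --max-time 5 -o /dev/null "$health_url" \
                                       || { echo "FAIL: upstream unhealthy"; return 1; }
  echo "OK: all layers healthy"
}

# Example invocation:
# deploy_check nginx gunicorn http://127.0.0.1:8000/health/
```

Run it after every deploy; a non-zero exit status also makes it usable as a gate in CI or a rollback trigger.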

Notes and edge cases

502 vs 504 vs 500 in Django deployments

  • 502: proxy could not get a valid upstream response
  • 504: upstream took too long
  • 500: the proxy delivered Django's response, but Django itself hit an application error

502 only on admin or large requests

This can indicate:

  • app worker crashes under heavier queries
  • upstream timeout behavior
  • memory pressure
  • file upload or body size configuration mismatch

502 after reboot but not after manual restart

This often points to startup ordering, missing runtime directories, or a systemd unit that depends on paths or env files not present at boot.

If you bind Gunicorn to a unix socket under /run, make sure the runtime directory is recreated at boot with RuntimeDirectory= or an equivalent socket-based setup.
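One equivalent socket-based setup is systemd socket activation, where systemd owns the socket and creates its directory at boot before Gunicorn starts. A sketch of a gunicorn.socket unit matching the paths used in this guide (Gunicorn supports socket activation; the www-data owner is an assumption):

```ini
# /etc/systemd/system/gunicorn.socket (sketch)
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn/gunicorn.sock
SocketUser=www-data
SocketMode=0660

[Install]
WantedBy=sockets.target
```

Enable it with systemctl enable --now gunicorn.socket; the matching gunicorn.service then inherits the listening socket instead of binding it itself, which also survives reboots cleanly.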

502 during zero-downtime deploys

Check that the new instance becomes healthy before traffic shifts. A rolling deploy without a real app health check can put broken instances behind the proxy.

For background, see how Django requests flow through Nginx, Gunicorn, and the app server.

If you need a known-good baseline, use deploy Django with Gunicorn and Nginx and configure systemd for Django Gunicorn services.

If this outage started after a release, follow how to roll back a Django deployment safely.

For related production configuration issues that are not usually 502s, see How to Fix DisallowedHost in Django Production, Django Static Files Not Loading in Production: Fix Guide, and CSRF Verification Failed in Django Production: How to Fix It.

FAQ

Why does Django return 502 Bad Gateway after a deploy?

Usually because the new release did not start correctly. Common causes are missing env vars, missing SECRET_KEY, dependency mismatches, failed imports, wrong socket paths, or migrations that were skipped.

How do I tell whether Nginx or Gunicorn is causing the 502?

Test the upstream directly. If curl http://127.0.0.1:8000/health/ or curl --unix-socket /run/gunicorn/gunicorn.sock http://localhost/health/ fails, the app side is the problem. If local upstream works, inspect Nginx config and logs.

What does connect() to unix:/run/gunicorn/gunicorn.sock failed mean?

Nginx tried to connect to a unix socket and could not. The socket may be missing, the app may not be running, the socket path may be wrong, or permissions may block access.

Should I use a unix socket or localhost port for Django in production?

Either is valid. Unix sockets are common for single-server setups. Localhost TCP is often easier to test and debug. Choose the option your team can verify and maintain consistently.

Why does restarting Nginx not fix a Django 502 error?

Because Nginx is often not the failing component. If Gunicorn or Uvicorn is crashed, misconfigured, or unreachable, restarting the proxy alone will not restore the upstream.

2026 · django-deployment.com - Django Deployment knowledge base