How to Build a Django Health Check Endpoint for Deployments

Problem statement

A production Django deployment needs a reliable way to answer a simple question: is this instance safe to keep running and safe to receive traffic?

That is what a Django health check endpoint is for. Deployment systems, load balancers, reverse proxies, container runtimes, and uptime monitors use health endpoints to decide whether a process is alive and whether a release is ready.

Using your homepage or /admin/ as a health check is a poor substitute. Those routes may depend on templates, sessions, authentication, database-heavy queries, redirects, or middleware behavior that has nothing to do with basic process health. During a release, that can cause false failures or hide real ones.

A better pattern is:

  • a liveness endpoint such as /healthz that only confirms the Django process is running
  • a readiness endpoint such as /readyz that confirms the app can serve real traffic, usually including critical dependencies like the primary database

This gives deployment tooling a stable signal without exposing secrets or adding unnecessary load.

Quick answer

Create two endpoints:

  • /healthz for a fast liveness probe
  • /readyz for a slightly deeper readiness check

Keep /healthz dependency-light and return HTTP 200 if the app process is healthy. Use /readyz to test only the dependencies required to serve requests, such as the default database. Return simple status codes:

  • 200 OK when healthy
  • 503 Service Unavailable when not ready

Then verify the endpoints through Django directly, through Gunicorn or Uvicorn, and through your reverse proxy.
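The status-code convention above can be captured in a small helper. This is an illustrative sketch (the function name readiness_status is not part of Django): it maps a dict of dependency check results to the status a readiness probe should return.

```python
def readiness_status(checks: dict[str, str]) -> int:
    """Map dependency check results to a probe status code.

    Returns 200 only when every checked dependency reported "ok",
    and 503 Service Unavailable otherwise.
    """
    return 200 if all(state == "ok" for state in checks.values()) else 503
```

For example, readiness_status({"database": "ok"}) yields 200, while readiness_status({"database": "ok", "cache": "unavailable"}) yields 503.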

Step-by-step solution

1. Define what each endpoint means

Before writing code, decide what each path should report.

  • Liveness: “Is the Django app process up and responding?”
  • Readiness: “Can this instance safely receive production traffic right now?”

In most Django deployments:

  • /healthz should not depend on the database, Redis, or third-party APIs
  • /readyz should check the database if your app cannot serve requests without it, but database reachability alone does not prove a new release is fully safe after schema or migration changes
  • optional services like Redis should only be included if traffic truly depends on them

Do not turn a health check into a full system audit. Health probes run often and should stay fast.

2. Add URL routes

Create explicit routes in your project URL configuration.

# project/urls.py
from django.contrib import admin
from django.urls import path

from .views import healthz, readyz

urlpatterns = [
    path("admin/", admin.site.urls),
    path("healthz", healthz, name="healthz"),
    path("readyz", readyz, name="readyz"),
]

These paths should stay stable because infrastructure will depend on them.

A practical note on slashes: Django will match these routes as /healthz and /readyz. Keep your probe path exactly consistent across Django, Nginx, Docker, and any load balancer. If your project relies on APPEND_SLASH, do not assume a probe will follow redirects the way a browser does.

3. Create a lightweight liveness view

The liveness endpoint should be cheap and predictable.

# project/views.py
from django.http import HttpResponseNotAllowed, JsonResponse

def healthz(request):
    if request.method not in ("GET", "HEAD"):
        return HttpResponseNotAllowed(["GET", "HEAD"])
    return JsonResponse({"status": "ok"}, status=200)

This is enough for a basic Django liveness probe. It confirms that:

  • the app server is running
  • Django can route requests
  • the worker can generate a response

Verify it locally

python manage.py runserver 127.0.0.1:8000
curl -i http://127.0.0.1:8000/healthz

Expected result:

  • HTTP status 200
  • small JSON body such as {"status": "ok"}

If this fails, fix app startup or URL configuration before adding deeper checks.
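If curl is not available on the machine you are checking from, a probe can be scripted with only the Python standard library. This sketch returns the HTTP status code for a probe URL, or None when the server is unreachable; pass it whatever URL your probe targets.

```python
import urllib.error
import urllib.request


def probe(url: str, timeout: float = 5.0):
    """Return the HTTP status code for a health probe URL, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status
    except urllib.error.HTTPError as exc:
        # Non-2xx responses still carry a meaningful code, e.g. 503 from /readyz
        return exc.code
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout
        return None
```

A deploy script can then treat `probe("http://127.0.0.1:8000/healthz") == 200` as the pass condition.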

4. Add a readiness endpoint with a database probe

A readiness check should fail when the app cannot serve normal traffic. For most Django applications, that means testing database connectivity.

# project/views.py
from django.db import connections
from django.db.utils import DatabaseError
from django.http import HttpResponseNotAllowed, JsonResponse

def healthz(request):
    if request.method not in ("GET", "HEAD"):
        return HttpResponseNotAllowed(["GET", "HEAD"])
    return JsonResponse({"status": "ok"}, status=200)

def readyz(request):
    if request.method not in ("GET", "HEAD"):
        return HttpResponseNotAllowed(["GET", "HEAD"])

    try:
        with connections["default"].cursor() as cursor:
            cursor.execute("SELECT 1")
            cursor.fetchone()
    except DatabaseError:
        return JsonResponse(
            {"status": "error", "checks": {"database": "unavailable"}},
            status=503,
        )

    return JsonResponse({"status": "ok", "checks": {"database": "ok"}}, status=200)

This uses Django’s configured default database connection and performs a minimal query.

Verify readiness

curl -i http://127.0.0.1:8000/readyz

Expected result:

  • 200 OK when the database is reachable
  • 503 Service Unavailable when it is not

Rollback note

If a new release returns 503 from /readyz after deployment, do not shift traffic to it. Roll back to the previous known-good version or restore the broken dependency before retrying.

If the failure started after a schema change, check whether migrations were skipped, partially applied, or incompatible with the new code. Database reachability is only one part of release safety.

5. Optionally check Redis or cache

Only add cache or Redis to readiness if your app truly depends on it for request handling. Prefer a low-cost connectivity check, and avoid unnecessary writes on every probe.

Example using Django’s cache backend:

# project/views.py
from django.core.cache import cache
from django.db import connections
from django.db.utils import DatabaseError
from django.http import HttpResponseNotAllowed, JsonResponse

def readyz(request):
    if request.method not in ("GET", "HEAD"):
        return HttpResponseNotAllowed(["GET", "HEAD"])

    checks = {}

    try:
        with connections["default"].cursor() as cursor:
            cursor.execute("SELECT 1")
            cursor.fetchone()
        checks["database"] = "ok"
    except DatabaseError:
        checks["database"] = "unavailable"
        return JsonResponse({"status": "error", "checks": checks}, status=503)

    try:
        cache.set("healthcheck", "ok", timeout=5)
        if cache.get("healthcheck") != "ok":
            raise RuntimeError("cache readback failed")
        checks["cache"] = "ok"
    except Exception:
        checks["cache"] = "unavailable"
        return JsonResponse({"status": "error", "checks": checks}, status=503)

    return JsonResponse({"status": "ok", "checks": checks}, status=200)

Note: this write/read example is simple but can add avoidable probe traffic. Use it only when cache availability is critical and probe frequency is controlled.
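One way to reduce probe interference when many instances share one cache: use a per-host key so concurrent probes on different machines do not overwrite each other's readback value. The probe_cache_key helper below is an illustrative sketch, not a Django API.

```python
import socket


def probe_cache_key(prefix: str = "healthcheck") -> str:
    """Build a cache key that is unique per host, so concurrent readiness
    probes on different instances do not race on the same key."""
    return f"{prefix}:{socket.gethostname()}"
```

In the readiness view above, `cache.set(probe_cache_key(), "ok", timeout=5)` would replace the fixed "healthcheck" key.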

Do not call external APIs from readiness probes.

6. Keep the response minimal and safe

A health endpoint should not expose:

  • Django version
  • environment names
  • hostnames
  • secrets
  • stack traces
  • internal exception messages

Good response bodies are small and boring. The status code matters more than detailed output.

If you need to restrict access, prefer doing it at the network or proxy layer instead of adding authentication that can break load balancer probes.

Example Nginx restriction:

location = /readyz {
    allow 10.0.0.0/8;
    allow 127.0.0.1;
    deny all;

    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

If a public uptime monitor needs access, either allow its source ranges or expose only the shallow /healthz endpoint publicly.

7. Wire it into deployment infrastructure

Docker health check

If you run Django in a container, use the liveness endpoint for the container health check.

HEALTHCHECK --interval=30s --timeout=5s --start-period=20s --retries=3 \
  CMD curl --fail http://127.0.0.1:8000/healthz || exit 1

This keeps the container probe shallow and avoids tying container restarts to database availability.

This example requires curl in the image. Many slim images do not include it, so either install it explicitly or use a different probe command that exists in the container.
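One alternative that avoids installing curl: use the Python interpreter the app already runs on as the probe command. This is a sketch; it assumes python is on the container's PATH and that the app listens on 127.0.0.1:8000.

```dockerfile
# Probe /healthz with the Python standard library instead of curl.
# urlopen raises on connection errors and non-2xx responses, so the
# command exits non-zero exactly when the probe fails.
HEALTHCHECK --interval=30s --timeout=5s --start-period=20s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/healthz', timeout=4)" || exit 1
```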

Reverse proxy pass-through

A normal Nginx proxy block is enough:

location = /healthz {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location = /readyz {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

If your app server listens on a Unix socket instead of 127.0.0.1:8000, point proxy_pass at the actual upstream you use in production. Do not copy a TCP localhost example unchanged if your Gunicorn or Uvicorn setup is socket-based.
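For a socket-based setup, the probe location might look like the sketch below. The socket path /run/gunicorn.sock is an example and must match Gunicorn's --bind value; the upstream block belongs at the http level of your Nginx configuration, and the location inside your server block.

```nginx
# Example socket-based upstream; the path must match Gunicorn's --bind
upstream django_app {
    server unix:/run/gunicorn.sock;
}

location = /readyz {
    proxy_pass http://django_app;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```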

Then test through Nginx:

curl -i https://example.com/healthz
curl -i https://example.com/readyz

Load balancers and deployment systems

Use /readyz for traffic cutover when possible. A load balancer should consider the target healthy only when it gets HTTP 200 from /readyz.

That helps catch:

  • broken database credentials
  • unreachable database hosts
  • startup states where the process is running but not actually usable

But do not treat /readyz as full release verification. It can show green while a newly deployed build still has application-level issues, pending migrations, or incompatible schema assumptions.

8. Verify during deployment

Run checks at multiple layers.

App-level verification

Check Django or Gunicorn/Uvicorn directly first:

curl -i http://127.0.0.1:8000/healthz
curl -i http://127.0.0.1:8000/readyz

Proxy-level verification

Then check through the public hostname and reverse proxy:

curl -i https://example.com/healthz
curl -i https://example.com/readyz

Then inspect logs:

sudo journalctl -u gunicorn -n 100 --no-pager
sudo tail -n 100 /var/log/nginx/access.log
sudo tail -n 100 /var/log/nginx/error.log

Validate failure behavior too. If the database is intentionally made unavailable, /readyz should fail with 503 while /healthz may still return 200. That distinction is useful: it tells you the app process is alive but not ready for traffic.

If the endpoints work on 127.0.0.1 but fail through the public hostname, check your reverse proxy, TLS termination, firewall rules, upstream configuration, and ALLOWED_HOSTS. A host missing from ALLOWED_HOSTS makes Django reject the request with a DisallowedHost error, which surfaces as a 400 response.
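A minimal settings sketch that admits both public traffic and local probes; the hostnames here are examples to replace with your own.

```python
# settings.py (sketch; replace the hostnames with your own)
ALLOWED_HOSTS = [
    "example.com",   # public hostname served through Nginx
    "127.0.0.1",     # local probes such as curl or a Docker HEALTHCHECK
]
```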

Explanation

This setup works because it separates two different operational signals.

  • Liveness protects against dead or wedged processes.
  • Readiness protects users from being routed to an instance that cannot actually serve requests.

That separation matters in Django deployments with Gunicorn, Uvicorn, Nginx, containers, or load balancers. A process can be alive while migrations are incomplete, database credentials are broken, or the primary database is down.

Not every health endpoint should hit the database. If your orchestrator restarts containers based on failed liveness checks, a database-backed liveness endpoint can create restart loops during a database outage. That is why the database belongs in readiness for most deployments, not liveness.

Also remember that readiness is not the same as full release correctness. A database SELECT 1 proves connectivity, not that your new code is compatible with the current schema, that background workers are healthy, or that static assets were deployed correctly.

When to turn this into a reusable template

If you add the same /healthz and /readyz pattern to every Django project, this is a good candidate for a shared module or deployment template. The repeated parts are the URL include, the standard JSON responses, the proxy rules, and the post-deploy curl verification step. Automating those reduces release mistakes without changing the health-check design.

Notes and edge cases

  • Background workers: Celery or RQ worker health usually needs separate checks. Do not assume the web app health endpoint covers worker availability.
  • Subpath deployments: If your app lives under /api/, expose /api/healthz and /api/readyz, but keep the same semantics.
  • Migrations: If a release requires a schema change, readiness may fail until migrations complete. It can also return 200 even when the app is not truly safe after a bad migration sequence. Plan deployment ordering carefully and keep a rollback path.
  • Timeouts: Health endpoints must stay fast. Avoid expensive ORM queries, external HTTP requests, or checks that can block worker threads.
  • Probe frequency: Readiness checks can be called often. Keep them cheap so they do not create avoidable database or cache load across many instances.
  • Static and media files: A health check does not verify static or media delivery. Test those separately if your release changes proxy or storage configuration.
  • Proxy confusion: If localhost:8000/readyz works but https://example.com/readyz fails, the issue is usually Nginx, TLS termination, firewall rules, or upstream routing rather than Django itself.
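For the subpath case in the notes above, the routing might look like this sketch: same views, same semantics, with the /api/ prefix baked into the paths (the prefix and module layout are examples).

```python
# project/urls.py (sketch for an app served under /api/)
from django.urls import path

from .views import healthz, readyz

urlpatterns = [
    path("api/healthz", healthz, name="healthz"),
    path("api/readyz", readyz, name="readyz"),
]
```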

For the concepts behind this pattern, see What Is the Difference Between Liveness and Readiness Checks in Django Deployments.

For related implementation guidance, see How to Deploy Django with Gunicorn and Nginx and How to Run Django Migrations Safely During Deployment.

If your release succeeds but traffic still does not shift, see Why Your Django Deployment Passes Build but Fails Health Checks.

FAQ

What is the difference between a Django liveness endpoint and a readiness endpoint?

A liveness endpoint shows that the Django process is up and responding. A readiness endpoint shows that the instance can safely serve production traffic, usually including critical dependency checks like the primary database.

Should a Django health check endpoint query the database every time?

Usually only the readiness endpoint should. The liveness endpoint should stay independent of the database so a temporary database failure does not make the process look dead.

What status code should a readiness endpoint return?

Use 200 OK when ready and 503 Service Unavailable when not ready. Those are simple and widely understood by deployment systems, reverse proxies, and load balancers.

Should health check endpoints be public?

Not always. If possible, restrict access at the network or reverse-proxy layer. If an endpoint must be public, keep it minimal and avoid exposing unnecessary internal details.

Can the same endpoint be used for Docker and a load balancer?

You can, but it is usually better not to. Use /healthz for container liveness and /readyz for traffic routing decisions. That separation avoids restart loops and makes failures easier to diagnose.

2026 · django-deployment.com - Django Deployment knowledge base