How to Dockerize a Django App for Production
A Dockerized Django app that works in development is not automatically safe for production.
Problem statement
Typical development setups run python manage.py runserver, mount local source code into the container, keep DEBUG=True, and rely on ad hoc environment variables. That is fine for local work, but it breaks down in production where you need predictable builds, secret handling, static file strategy, proper process startup, controlled migrations, logs to stdout/stderr, and a rollback path.
A production-ready Django Docker container should handle:
- deterministic image builds
- runtime configuration through environment variables
- non-root execution
- Gunicorn or Uvicorn process startup
- static file collection
- health checks that verify request handling
- controlled database migrations
- external PostgreSQL, Redis, and media storage
- proxy-aware HTTPS settings
Quick answer
The safest way to dockerize a Django app for production is:
- build a minimal image from a slim Python base
- install dependencies predictably from a pinned requirements file
- run Django with Gunicorn
- inject secrets at runtime, not at build time
- collect static files during build or as a controlled release step
- keep the database, Redis, and uploaded media outside the container
- put Nginx, Caddy, or a load balancer in front when you need TLS termination, buffering, and static/media routing
- tag every image version so you can roll back quickly
Step-by-step solution
1. Choose a production container architecture
What should go inside the Django container:
- Django application code
- Python dependencies
- Gunicorn
- optional startup script
What should stay outside:
- PostgreSQL
- Redis
- uploaded media storage
- TLS termination and reverse proxy layer
If your app only serves web requests, one web container may be enough. If you use Celery or scheduled jobs, run separate worker and beat containers from the same image with different commands.
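Sketched as a Compose file, this looks like the snippet below. The service names, the image tag, and the Celery app path (projectname) are placeholders for your project:

```yaml
# docker-compose.yml sketch: one image, three processes.
services:
  web:
    image: myapp:1.0.0
    command: gunicorn --bind 0.0.0.0:8000 projectname.wsgi:application
    env_file: .env
  worker:
    image: myapp:1.0.0
    command: celery -A projectname worker --loglevel=info
    env_file: .env
  beat:
    image: myapp:1.0.0
    command: celery -A projectname beat --loglevel=info
    env_file: .env
```

Because every process uses the same image, a release updates all of them together; only the command differs.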
2. Prepare Django settings for containerized production
Use environment variables for all production-specific values.
# settings.py
import os
from django.core.exceptions import ImproperlyConfigured
def env(name, default=None):
    value = os.getenv(name, default)
    if value is None:
        raise ImproperlyConfigured(f"Missing required environment variable: {name}")
    return value
DEBUG = False
SECRET_KEY = env("DJANGO_SECRET_KEY")
ALLOWED_HOSTS = [h.strip() for h in env("DJANGO_ALLOWED_HOSTS").split(",") if h.strip()]
csrf_origins = os.getenv("DJANGO_CSRF_TRUSTED_ORIGINS", "")
CSRF_TRUSTED_ORIGINS = [x.strip() for x in csrf_origins.split(",") if x.strip()]
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
USE_X_FORWARDED_HOST = True
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": env("POSTGRES_DB"),
        "USER": env("POSTGRES_USER"),
        "PASSWORD": env("POSTGRES_PASSWORD"),
        "HOST": env("POSTGRES_HOST"),
        "PORT": os.getenv("POSTGRES_PORT", "5432"),
        "CONN_MAX_AGE": 60,
    }
}
STATIC_URL = "/static/"
STATIC_ROOT = "/app/staticfiles"
MEDIA_URL = "/media/"
MEDIA_ROOT = "/app/media"
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "root": {
        "handlers": ["console"],
        "level": os.getenv("DJANGO_LOG_LEVEL", "INFO"),
    },
}
If your site is HTTPS-only and you control the full domain setup, add HSTS as well:
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
For CSRF_TRUSTED_ORIGINS, use full origins with scheme, such as https://example.com. Do not use plain hostnames in production.
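A small import-time guard can fail fast on a bare hostname instead of letting Django silently skip it. The helper name validate_origins is illustrative, not a Django API:

```python
from urllib.parse import urlparse


def validate_origins(origins):
    """Raise if any CSRF trusted origin lacks an explicit http(s) scheme."""
    for origin in origins:
        parsed = urlparse(origin)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            raise ValueError(f"CSRF origin must include a scheme: {origin!r}")
    return origins


# In settings.py you could wrap the parsed list:
# CSRF_TRUSTED_ORIGINS = validate_origins(CSRF_TRUSTED_ORIGINS)
```

With this in place, a misconfigured DJANGO_CSRF_TRUSTED_ORIGINS value breaks startup loudly rather than causing CSRF failures at request time.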
Add a startup validation check in your release workflow:
python manage.py check --deploy
Verification:
- confirm DEBUG=False
- confirm required variables are enforced
- confirm logs appear in container output
- confirm HTTPS settings match your proxy setup
Rollback note: if a settings change breaks startup, redeploy the previous image tag with the previous environment file.
3. Add a .dockerignore
Do not send local secrets and build junk into the image context.
.git
.gitignore
.env
.env.*
venv
.venv
__pycache__
*.pyc
.pytest_cache
.mypy_cache
node_modules
dist
build
media
staticfiles
4. Write a production Django Dockerfile
This example uses a slim Python base, installs only required system packages, creates a non-root user, and runs Gunicorn.
FROM python:3.12-slim
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
libpq5 \
netcat-openbsd \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
RUN addgroup --system django && adduser --system --ingroup django django
COPY . /app
RUN chmod +x /app/entrypoint.sh \
&& mkdir -p /app/staticfiles /app/media \
&& chown -R django:django /app
USER django
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 \
CMD python -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/healthz', timeout=3)" || exit 1
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "--timeout", "60", "projectname.wsgi:application"]
Replace projectname.wsgi:application with your real WSGI path.
The Docker health check should call a real HTTP endpoint, not manage.py check --deploy. A settings check can pass while Gunicorn is not serving requests.
Add a simple health endpoint in Django, for example:
# urls.py
from django.http import HttpResponse
from django.urls import path
def healthz(request):
    return HttpResponse("ok", content_type="text/plain")

urlpatterns = [
    path("healthz", healthz),
]
Notes:
- libpq5 is needed at runtime for PostgreSQL client libraries in many setups.
- If your dependencies need compilation, you may also need build packages like build-essential and libpq-dev. In that case, prefer a multi-stage build.
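A multi-stage build might look like the sketch below: compile wheels with the build toolchain in one stage, then copy only the built packages into the slim runtime image. Package and path names are illustrative, and the user, health check, and entrypoint steps from the main Dockerfile are omitted for brevity:

```dockerfile
FROM python:3.12-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
# Build wheels once, with the compiler available.
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

FROM python:3.12-slim
# Runtime stage needs only the shared library, not the compiler.
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*
COPY . /app
```

The final image never contains build-essential, which keeps it smaller and reduces its attack surface.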
5. Add an entrypoint for startup tasks
Use the entrypoint for controlled startup behavior, not for hidden deployment logic.
#!/bin/sh
set -e
if [ -n "$POSTGRES_HOST" ]; then
    echo "Waiting for PostgreSQL at $POSTGRES_HOST:${POSTGRES_PORT:-5432}..."
    while ! nc -z "$POSTGRES_HOST" "${POSTGRES_PORT:-5432}"; do
        sleep 1
    done
fi
python manage.py collectstatic --noinput
if [ "$RUN_MIGRATIONS" = "1" ]; then
python manage.py migrate --noinput
fi
exec "$@"
This gives you explicit control:
- set RUN_MIGRATIONS=1 only for a release where migrations should run
- leave it unset for normal restarts
Avoid automatic migrations on every container start in multi-instance deployments. Two containers starting at once can race to apply the same migrations, turning an ordinary restart into a release step you no longer control.
6. Build and run the container locally in production mode
Build the image:
docker build -t myapp:1.0.0 .
Run it with production-like variables:
docker run --rm -p 8000:8000 \
-e DJANGO_SECRET_KEY='replace-me' \
-e DJANGO_ALLOWED_HOSTS='localhost,127.0.0.1' \
-e DJANGO_CSRF_TRUSTED_ORIGINS='http://localhost,http://127.0.0.1' \
-e POSTGRES_DB='appdb' \
-e POSTGRES_USER='appuser' \
-e POSTGRES_PASSWORD='apppassword' \
-e POSTGRES_HOST='host.docker.internal' \
-e POSTGRES_PORT='5432' \
myapp:1.0.0
In production, CSRF_TRUSTED_ORIGINS entries must include the scheme, for example https://example.com.
For local testing this is fine, but in real deployments prefer --env-file, orchestrator secrets, or a secret manager over putting secrets directly in shell history or process arguments.
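A minimal env file for --env-file might look like this; every value is a placeholder, and the real file should stay out of version control and be readable only by the deploy user:

```
DJANGO_SECRET_KEY=replace-me
DJANGO_ALLOWED_HOSTS=example.com,www.example.com
DJANGO_CSRF_TRUSTED_ORIGINS=https://example.com,https://www.example.com
POSTGRES_DB=appdb
POSTGRES_USER=appuser
POSTGRES_PASSWORD=replace-me
POSTGRES_HOST=10.0.0.5
POSTGRES_PORT=5432
DJANGO_LOG_LEVEL=INFO
```

Note that docker run --env-file parses KEY=value lines literally: it does not expand variables or strip quotes the way a shell does, so quoted values end up with the quotes included.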
Verification:
docker ps
docker logs <container_id>
curl -I http://localhost:8000/healthz
curl -I http://localhost:8000/
Then test:
- homepage loads
- admin login works
- static assets return 200
- database-backed pages work
- DEBUG pages are not exposed
- /healthz returns 200
7. Connect the container to production services
For PostgreSQL and Redis, pass connection details as environment variables from your host, orchestrator, or secret manager. Do not bake them into the image.
For reverse proxy integration, keep Gunicorn bound to an internal port and let Nginx or Caddy handle public traffic.
If Nginx serves static and media from the host, you must mount those paths from the container or sync them during release. A host-level Nginx config is incomplete unless the files actually exist on the host.
Example Nginx upstream:
upstream django_app {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name example.com;

    location /static/ {
        alias /srv/myapp/staticfiles/;
    }

    location /media/ {
        alias /srv/myapp/media/;
    }

    location / {
        proxy_pass http://django_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
If the proxy serves static and media directly, mount or sync those paths outside the app container. Do not rely on the container filesystem for persistent uploads.
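If you would rather sync static files out of the image than share a volume, one sketch (container name and host path are illustrative) is to copy them from a temporary container during release:

```shell
# Create a stopped container from the image, copy the collected static
# files to the host path Nginx serves, then remove the container.
docker create --name myapp-static registry.example.com/myapp:gitsha123
docker cp myapp-static:/app/staticfiles/. /srv/myapp/staticfiles/
docker rm myapp-static
```

This only works if collectstatic ran at build time: docker create never starts the container, so an entrypoint that collects static files at startup will not have run.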
8. Use a controlled release workflow
A practical release flow looks like this:
- build image in CI
- tag it with commit SHA or version
- push it to a registry
- pull it on the server
- run checks
- run migrations as a controlled release step
- start or replace the web container
- verify health and logs
Example tagging:
docker build -t registry.example.com/myapp:gitsha123 .
docker push registry.example.com/myapp:gitsha123
Server-side deploy:
docker pull registry.example.com/myapp:gitsha123
docker run --rm \
--env-file /srv/myapp/.env \
-v /srv/myapp/staticfiles:/app/staticfiles \
-v /srv/myapp/media:/app/media \
registry.example.com/myapp:gitsha123 \
python manage.py check --deploy
docker run --rm \
--env-file /srv/myapp/.env \
-v /srv/myapp/staticfiles:/app/staticfiles \
-v /srv/myapp/media:/app/media \
registry.example.com/myapp:gitsha123 \
python manage.py migrate --noinput
docker run --rm \
--env-file /srv/myapp/.env \
-v /srv/myapp/staticfiles:/app/staticfiles \
-v /srv/myapp/media:/app/media \
registry.example.com/myapp:gitsha123 \
python manage.py collectstatic --noinput
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp --restart unless-stopped \
--env-file /srv/myapp/.env \
-v /srv/myapp/staticfiles:/app/staticfiles \
-v /srv/myapp/media:/app/media \
-p 127.0.0.1:8000:8000 \
registry.example.com/myapp:gitsha123
Rollback is the same process with the previous known-good tag:
docker pull registry.example.com/myapp:previoussha
docker stop myapp && docker rm myapp
docker run -d --name myapp --restart unless-stopped \
--env-file /srv/myapp/.env \
-v /srv/myapp/staticfiles:/app/staticfiles \
-v /srv/myapp/media:/app/media \
-p 127.0.0.1:8000:8000 \
registry.example.com/myapp:previoussha
Verification after deploy:
docker logs myapp --tail 50
curl -I http://127.0.0.1:8000/healthz
If the failed release included an incompatible migration, application rollback may not be enough. Test backward-compatible migrations where possible, and keep a separate database recovery plan.
When to automate this
Once you are repeating the same build, tag, push, validate, migrate, collect static files, deploy, and health-check steps across environments, convert them into scripts or CI templates. The best first automation targets are image tagging, environment validation, migration execution, post-deploy health checks, and rollback to a previous image tag.
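The release steps above can be collapsed into a small script. This is a sketch, not a complete pipeline: the registry, paths, and container name match the earlier examples, the collectstatic step is omitted for brevity, and error handling is minimal:

```shell
#!/bin/sh
# deploy.sh TAG -- pull, validate, migrate, and swap the web container.
set -e
TAG="$1"
IMAGE="registry.example.com/myapp:$TAG"

docker pull "$IMAGE"
docker run --rm --env-file /srv/myapp/.env "$IMAGE" \
    python manage.py check --deploy
docker run --rm --env-file /srv/myapp/.env "$IMAGE" \
    python manage.py migrate --noinput
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp --restart unless-stopped \
    --env-file /srv/myapp/.env \
    -v /srv/myapp/staticfiles:/app/staticfiles \
    -v /srv/myapp/media:/app/media \
    -p 127.0.0.1:8000:8000 \
    "$IMAGE"
sleep 5
curl -fsS http://127.0.0.1:8000/healthz
```

Running the same script with the previous known-good tag is your rollback path, which is one reason to keep the tag a parameter rather than hardcoding it.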
Explanation
This setup works because it separates image contents from runtime state.
The image contains code and dependencies. The environment provides secrets and service endpoints. PostgreSQL, Redis, TLS, and persistent media remain external so the container can be replaced safely at any time.
Containers improve repeatability, but they do not replace production architecture. You still need a reverse proxy strategy, persistent storage decisions, migration discipline, and monitoring.
Secrets should never be baked into images because anyone with image access can inspect them later. Media should not live in the container filesystem because container replacement deletes local writable state unless you add external volumes or object storage.
Rollback depends on two things:
- immutable image tags you can redeploy
- migration changes that do not trap you in a broken schema state
Edge cases or notes
- Celery workers and beat: run them as separate containers from the same image with different commands.
- ASGI apps: if you use Django ASGI features, replace Gunicorn WSGI startup with Gunicorn plus a Uvicorn worker, or run Uvicorn directly where appropriate.
- Static files: collectstatic can happen during image build if static output is environment-independent. If storage credentials, manifest generation, or host-mounted files are part of the release process, run it during deployment instead.
- Health checks: a real HTTP health endpoint is better than a framework configuration check because it proves the app server is actually responding.
- Trusted proxy headers: only enable forwarded-host and forwarded-proto handling when your reverse proxy is under your control and configured correctly.
- Rootless environments: some platforms enforce non-root containers already. Keep ownership and writable paths explicit.
- Private Python packages: use your CI secret mechanism or build-time secret support carefully. Do not hardcode repository credentials in the Dockerfile.
Internal links
For the settings hardening behind this container setup, see Django Deployment Checklist for Production.
If you want the app server and reverse proxy layer built out fully, see Deploy Django with Gunicorn and Nginx on Ubuntu.
If you are deploying an ASGI stack, see Deploy Django ASGI with Uvicorn and Nginx.
For an alternative reverse proxy with automatic TLS, see Deploy Django with Caddy and Automatic HTTPS.
FAQ
Should collectstatic happen during build or deploy?
If static assets are fully deterministic and do not depend on runtime secrets, build time is fine. If static collection depends on environment-specific settings, external storage credentials, or host-mounted output paths, run it during the release step instead. Keep the choice explicit and documented.
Should migrations run inside the container entrypoint?
Usually not on every startup. It is safer to run migrations as a controlled release step, especially when you run multiple replicas. If you do allow entrypoint migrations, gate them behind an environment variable like RUN_MIGRATIONS=1.
Do I need Nginx or Caddy in front of a Django Docker container?
Often yes. A reverse proxy is useful for TLS termination, buffering, request limits, forwarded headers, and static/media routing. Some managed platforms provide this layer for you, but plain Docker on a server usually needs it.
Can I store uploaded media inside the container?
Do not rely on that in production. Container filesystems are disposable. Use a mounted persistent volume or object storage instead.
How do I roll back a bad Dockerized Django release?
Redeploy the previous image tag and restart the container. This is why every release should be tagged immutably. If the release also changed the database schema, image rollback alone may not be enough, so plan database recovery separately.