Deploy Django on Google Cloud Run
If you want to deploy Django on Google Cloud Run, the basic path is straightforward: build a container and run it. Making that container production-safe is where most deployments fail.
Problem statement
A real Django Cloud Run deployment needs more than a working container. You need production settings, secret management, database connectivity, static file handling, health checks, a migration workflow, and a rollback path. Cloud Run is also stateless, so anything that depends on local persistent storage will break.
This guide shows a practical way to deploy Django to Google Cloud Run with:
- Cloud Run for app hosting
- Artifact Registry for images
- Secret Manager for secrets
- Cloud SQL for PostgreSQL
- Gunicorn as the app server
- WhiteNoise for static files inside the container
This guide does not cover Terraform or a full CI/CD pipeline. It focuses on a manual deployment path you can verify first, then automate later.
Quick answer
To deploy Django on Google Cloud Run, use this path:
- Prepare Django production settings
- Add a health endpoint
- Build a Docker image that runs Gunicorn on $PORT
- Push the image to Artifact Registry
- Store secrets in Secret Manager
- Connect Cloud Run to Cloud SQL
- Deploy the service with environment variables and secrets
- Run migrations as a separate step
- Verify health, logs, static files, and database access
- Keep the previous revision available for rollback
Cloud Run fits well for stateless Django apps, APIs, admin panels, and moderate web workloads. It is a weaker fit if you require persistent local disk, long-running in-request jobs, or filesystem-based media storage.
Step-by-step solution
1) Choose the production architecture for Django on Cloud Run
Recommended stack:
- Cloud Run: runs the Django container
- Artifact Registry: stores container images
- Secret Manager: stores SECRET_KEY, database credentials, and similar secrets
- Cloud SQL for PostgreSQL: production database
- Cloud Logging: startup, request, and application logs
Cloud Run starts your container, sends traffic to the port in the PORT environment variable, and expects the app to be stateless. The container filesystem is ephemeral. Do not store uploads or generated files there permanently.
Constraints to design around:
- no persistent local media storage
- cold starts are possible
- requests have a timeout limit
- scaling is automatic and can create multiple app instances
- startup should be fast and deterministic
2) Prepare Django settings for Cloud Run production
Use environment-based settings. A minimal production pattern looks like this:
# config/settings/production.py
import os
from pathlib import Path
import dj_database_url
BASE_DIR = Path(__file__).resolve().parent.parent.parent
DEBUG = False
ALLOWED_HOSTS = [h.strip() for h in os.environ.get("ALLOWED_HOSTS", "").split(",") if h.strip()]
CSRF_TRUSTED_ORIGINS = [o.strip() for o in os.environ.get("CSRF_TRUSTED_ORIGINS", "").split(",") if o.strip()]
SECRET_KEY = os.environ["SECRET_KEY"]
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
DATABASES = {
    "default": dj_database_url.parse(os.environ["DATABASE_URL"], conn_max_age=600)
}
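The comma-separated parsing used for ALLOWED_HOSTS and CSRF_TRUSTED_ORIGINS above can be factored into a small helper; a minimal sketch (the `env_list` name is my own, not part of the settings module):

```python
import os

def env_list(name: str, default: str = "") -> list[str]:
    """Split a comma-separated environment variable into a clean list."""
    raw = os.environ.get(name, default)
    return [item.strip() for item in raw.split(",") if item.strip()]

# Mirrors the ALLOWED_HOSTS parsing above; tolerates stray spaces and commas.
os.environ["ALLOWED_HOSTS"] = "example.com, api.example.com, "
print(env_list("ALLOWED_HOSTS"))  # → ['example.com', 'api.example.com']
```

An unset variable simply yields an empty list, which keeps the settings module import-safe.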
For static files with WhiteNoise:
INSTALLED_APPS = [
    "django.contrib.staticfiles",
    # ...
]
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ...
]
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"
STORAGES = {
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}
Media files should not use the Cloud Run filesystem. Use object storage such as Google Cloud Storage.
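One common pattern (assuming the third-party django-storages package, which this guide does not otherwise cover) is to point the default storage backend at a GCS bucket while keeping WhiteNoise for static files:

```python
# Settings fragment. Assumes `django-storages[google]` is installed;
# GS_BUCKET_NAME is a placeholder. This extends the STORAGES dict shown above.
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.gcloud.GoogleCloudStorage",
    },
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}
GS_BUCKET_NAME = "your-media-bucket"
```

The runtime service account then also needs write access to that bucket.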
Add a lightweight health endpoint:
# urls.py
from django.http import JsonResponse
from django.urls import path, include
def health(request):
    return JsonResponse({"status": "ok"})

urlpatterns = [
    path("health/", health),
    path("", include("your_app.urls")),
]
Verification check:
- DEBUG=False
- ALLOWED_HOSTS includes the actual Cloud Run service hostname and any custom domain
- CSRF_TRUSTED_ORIGINS includes full https://... origins
- python manage.py check --deploy passes or shows only intentional warnings
Note on HSTS: includeSubDomains and preload are strict settings. Use them only if your domain setup is ready for that policy across all relevant subdomains.
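If you prefer those strict HSTS flags to be opt-in rather than hard-coded, one pattern is to gate them on an environment variable (the ENABLE_HSTS name is an assumption, not part of the settings shown above):

```python
import os

# ENABLE_HSTS is an illustrative variable name; set it to "1" only once
# every relevant subdomain is served over HTTPS.
ENABLE_HSTS = os.environ.get("ENABLE_HSTS", "0") == "1"

SECURE_HSTS_SECONDS = 31536000 if ENABLE_HSTS else 0
SECURE_HSTS_INCLUDE_SUBDOMAINS = ENABLE_HSTS
SECURE_HSTS_PRELOAD = ENABLE_HSTS
```

This lets you enable the policy per environment instead of editing settings code.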
3) Create the Docker image for Django
Use a production Dockerfile that installs dependencies, collects static files, and runs Gunicorn:
FROM python:3.12-slim
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DJANGO_SETTINGS_MODULE=config.settings.production
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Placeholder env values: the production settings read SECRET_KEY and
# DATABASE_URL eagerly at import time, but collectstatic never uses them
RUN SECRET_KEY=build-placeholder DATABASE_URL=postgres://u:p@localhost/db \
    python manage.py collectstatic --noinput
CMD exec gunicorn config.wsgi:application --bind 0.0.0.0:${PORT:-8080} --workers 2 --threads 4 --timeout 120
Add a .dockerignore:
.git
__pycache__/
*.pyc
.env
venv/
node_modules/
Key requirement: Gunicorn must bind to $PORT. Cloud Run injects that port at runtime.
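The `--workers 2 --threads 4` values above are a reasonable default for a small instance. A common sizing heuristic (a rule of thumb, not a Cloud Run requirement) is `2 * CPUs + 1`, capped so memory stays within the instance limit; a sketch:

```python
def gunicorn_workers(cpu_count: int, cap: int = 4) -> int:
    """Classic '2 * CPUs + 1' Gunicorn heuristic, capped for small containers."""
    return min(2 * cpu_count + 1, cap)

print(gunicorn_workers(1))  # → 3
print(gunicorn_workers(4))  # → 4 (capped)
```

On a 1-vCPU Cloud Run instance this suggests 2-3 workers; more workers mostly costs memory without helping throughput.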
Verify locally before pushing:
docker build -t django-cloudrun-test .
docker run --rm -p 8080:8080 \
-e PORT=8080 \
-e SECRET_KEY=test-secret \
-e ALLOWED_HOSTS=localhost,127.0.0.1 \
-e DATABASE_URL=postgres://USER:PASSWORD@HOST:5432/DBNAME \
django-cloudrun-test
If your app can answer /health/ without a live database connection, local startup may still succeed with a placeholder database setting. If startup imports or checks the database eagerly, use a real reachable database for this test.
If the container does not start locally, fix that before using Cloud Run.
4) Build and push the image to Artifact Registry
Set the project and enable required APIs:
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
gcloud services enable \
run.googleapis.com \
artifactregistry.googleapis.com \
secretmanager.googleapis.com \
sqladmin.googleapis.com
Create the repository:
gcloud artifacts repositories create django-repo \
--repository-format=docker \
--location=REGION
Configure Docker auth:
gcloud auth configure-docker REGION-docker.pkg.dev
Build and push:
docker build -t REGION-docker.pkg.dev/YOUR_PROJECT_ID/django-repo/django-app:REVISION_TAG .
docker push REGION-docker.pkg.dev/YOUR_PROJECT_ID/django-repo/django-app:REVISION_TAG
Verify the image exists:
gcloud artifacts docker images list REGION-docker.pkg.dev/YOUR_PROJECT_ID/django-repo
Use release-specific tags, not only latest. That makes rollback simpler.
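One way to generate those release-specific tags deterministically (a sketch; the date-plus-short-SHA scheme is just one convention):

```python
import datetime
import subprocess

def format_tag(date: str, sha: str) -> str:
    """Compose a tag like 2024-05-01-ab12cd3."""
    return f"{date}-{sha}"

def release_tag() -> str:
    """Tag from today's date plus the short git commit hash of HEAD."""
    sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    return format_tag(datetime.date.today().isoformat(), sha)

print(format_tag("2024-05-01", "ab12cd3"))  # → 2024-05-01-ab12cd3
```

Tags built this way sort chronologically and map straight back to a commit during rollback.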
5) Provision the production database and secrets
Create a Cloud SQL PostgreSQL instance using the Google Cloud console or CLI, then create the database and application user.
Store secrets in Secret Manager:
echo -n "your-django-secret-key" | gcloud secrets create django-secret-key --data-file=-
echo -n "postgres://DB_USER:DB_PASSWORD@/DB_NAME?host=/cloudsql/PROJECT:REGION:INSTANCE" | gcloud secrets create django-database-url --data-file=-
That DATABASE_URL format uses the Cloud SQL Unix socket path, which is the normal pattern for Cloud Run + Cloud SQL.
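Composing that socket-style URL by hand is error-prone; a small helper makes the pieces explicit (the `cloudsql_url` name is my own, and credentials are URL-quoted so special characters survive):

```python
from urllib.parse import quote

def cloudsql_url(user: str, password: str, db: str,
                 project: str, region: str, instance: str) -> str:
    """Build a socket-based DATABASE_URL for Cloud Run + Cloud SQL."""
    socket = f"/cloudsql/{project}:{region}:{instance}"
    return f"postgres://{quote(user)}:{quote(password)}@/{db}?host={socket}"

print(cloudsql_url("app", "s3cret", "appdb", "my-proj", "us-central1", "pg-main"))
# → postgres://app:s3cret@/appdb?host=/cloudsql/my-proj:us-central1:pg-main
```

Pipe the result into the `gcloud secrets create` command instead of typing the URL inline.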
Use a dedicated service account for the Cloud Run service and grant only the permissions it needs. At minimum, the runtime service account typically needs:
- roles/secretmanager.secretAccessor
- roles/cloudsql.client
Verification check:
- the secret names exist in Secret Manager
- the database user can connect
- the Cloud SQL instance connection name matches PROJECT:REGION:INSTANCE
6) Deploy Django to Cloud Run
First deploy the service with secrets, Cloud SQL attachment, and the settings module. Do not guess the Cloud Run hostname for ALLOWED_HOSTS; get the actual generated URL after deploy.
gcloud run deploy django-app \
--image REGION-docker.pkg.dev/YOUR_PROJECT_ID/django-repo/django-app:REVISION_TAG \
--region REGION \
--platform managed \
--allow-unauthenticated \
--service-account django-cloudrun@YOUR_PROJECT_ID.iam.gserviceaccount.com \
--add-cloudsql-instances PROJECT:REGION:INSTANCE \
--set-env-vars DJANGO_SETTINGS_MODULE=config.settings.production \
--set-secrets SECRET_KEY=django-secret-key:latest,DATABASE_URL=django-database-url:latest
Note that repeating --set-secrets or --set-env-vars replaces the earlier value rather than appending, so related pairs belong in a single flag.
Capture the service URL and update allowed hosts and CSRF origins:
SERVICE_URL=$(gcloud run services describe django-app --region REGION --format='value(status.url)')
SERVICE_HOST=$(echo "$SERVICE_URL" | sed 's#https://##')
gcloud run services update django-app \
--region REGION \
--set-env-vars "^@^ALLOWED_HOSTS=${SERVICE_HOST},yourdomain.com@CSRF_TRUSTED_ORIGINS=${SERVICE_URL},https://yourdomain.com"
The leading ^@^ switches the gcloud list delimiter from comma to @ (see gcloud topic escaping); this is needed because both values contain commas, and because a repeated --set-env-vars flag would overwrite the first one.
You can also set memory, concurrency, and minimum instances:
gcloud run services update django-app \
--region REGION \
--memory 512Mi \
--concurrency 40 \
--min-instances 1
Use min-instances=1 if cold starts are a problem and the extra baseline cost is acceptable.
Verification check:
- service deploys successfully
- a new revision becomes ready
- /health/ responds with 200
- logs show no import, settings, secret, or database errors
If the service should not be public, do not use --allow-unauthenticated.
7) Run database migrations safely
Do not run migrations automatically on every container startup. Cloud Run can start multiple instances, and startup-time migrations create race conditions and fragile releases.
Use a separate controlled step. A practical Cloud Run-compatible option is a Cloud Run Job using the same image, settings, secrets, and Cloud SQL connection.
Create the job:
gcloud run jobs create django-migrate \
--image REGION-docker.pkg.dev/YOUR_PROJECT_ID/django-repo/django-app:REVISION_TAG \
--region REGION \
--service-account django-cloudrun@YOUR_PROJECT_ID.iam.gserviceaccount.com \
--add-cloudsql-instances PROJECT:REGION:INSTANCE \
--set-env-vars DJANGO_SETTINGS_MODULE=config.settings.production \
--set-secrets SECRET_KEY=django-secret-key:latest,DATABASE_URL=django-database-url:latest \
--command python \
--args manage.py,migrate,--noinput
Run the job:
gcloud run jobs execute django-migrate --region REGION --wait
For later releases, update the job image before running it again:
gcloud run jobs update django-migrate \
--image REGION-docker.pkg.dev/YOUR_PROJECT_ID/django-repo/django-app:REVISION_TAG \
--region REGION
Before sending production traffic to a new release, verify:
- migrations completed successfully
- schema is compatible with the new code
- database-backed pages load correctly
Keep migrations backward-compatible where possible so rollback remains possible.
8) Verify the deployment
Check service and revision state:
gcloud run services describe django-app --region REGION
gcloud run revisions list --service django-app --region REGION
Check logs:
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=django-app" --limit 50
Test the app:
curl "${SERVICE_URL}/health/"
curl -I "${SERVICE_URL}/static/YOUR_FILE.css"
Smoke test these paths:
- home page
- admin login if used
- one database-backed page
- one static asset URL
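Those smoke tests can be scripted with the standard library alone; a sketch (the `fetch` parameter is there so the logic is testable without a live service, and the paths are illustrative):

```python
import urllib.request

def smoke_test(base_url: str, paths=("/health/", "/"), fetch=None):
    """Return the HTTP status per path; any non-2xx needs investigation."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status
    return {path: fetch(base_url.rstrip("/") + path) for path in paths}

# Example with a stubbed fetcher; in real use, pass only the Cloud Run URL.
print(smoke_test("https://django-app-example.run.app", fetch=lambda url: 200))
# → {'/health/': 200, '/': 200}
```

Extend `paths` with one database-backed page and one static asset to cover the checklist above.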
If static files return 404, confirm collectstatic ran during the image build and that WhiteNoise is enabled in the production settings module.
9) Configure custom domain, TLS, and basic hardening
Map your custom domain in Cloud Run, then add DNS records as instructed by Google Cloud. Cloud Run handles TLS termination for mapped domains.
After domain mapping:
- add the custom host to ALLOWED_HOSTS
- add the full HTTPS origin to CSRF_TRUSTED_ORIGINS
- verify redirects and secure cookies work correctly
Review ingress settings and service account permissions. If the service should only be reachable internally or through another layer, adjust ingress and authentication accordingly.
10) Rollback and recovery
List revisions:
gcloud run revisions list --service django-app --region REGION
Shift traffic back to a previous good revision:
gcloud run services update-traffic django-app \
--to-revisions REVISION_NAME=100 \
--region REGION
After rollback, verify:
- traffic allocation points to the intended revision
- /health/ returns 200
- a database-backed page loads
- logs no longer show the release error
If the release failed because of a migration, a traffic rollback alone may not be enough. You may also need a database recovery step. This is why releases should use:
- immutable image tags
- backward-compatible migrations
- staged verification before full traffic cutover
Explanation
This setup works because it matches Cloud Run’s execution model. Django runs as a stateless HTTP app behind a managed HTTPS proxy. Gunicorn binds to the injected port, WhiteNoise serves static assets bundled into the image, Cloud SQL provides persistent relational storage, and secrets stay out of the image and source code.
Cloud Run is a good fit for Django applications that can stay stateless at the web layer. It is less suitable when the app depends on local persistent files, long-running request work, or in-process background jobs. In those cases, combine Cloud Run with object storage, task queues, and worker services, or choose a different hosting model.
Edge cases / notes
Static files returning 404
Usually caused by one of these:
- collectstatic did not run during build
- WhiteNoise middleware is missing
- STATIC_ROOT is wrong
- the wrong settings module is being used
Media uploads do not persist
Cloud Run container storage is ephemeral. Store uploads in object storage, not /app/media.
Database connection failures
Common causes:
- wrong Cloud SQL instance connection name
- service or job not attached with --add-cloudsql-instances
- invalid socket-based DATABASE_URL
- missing roles/cloudsql.client on the runtime service account
Secret injection failures
Common causes:
- wrong secret name or version reference
- missing roles/secretmanager.secretAccessor on the runtime service account
- deploying with a different service account than expected
Request timeout or slow startup
Slow imports, expensive startup logic, or too many worker processes can cause failures. Keep startup fast and move one-time tasks out of container boot.
Cold starts and scaling tradeoffs
Cloud Run scales well, but cold starts can affect latency. min-instances=1 reduces that at the cost of always-on baseline usage.
When manual setup becomes repetitive
Once you repeat this process across environments or services, script the build, push, deploy, secret injection, migration, and smoke test steps. Good template candidates are the Dockerfile, Django production settings, Cloud Run deploy command, and migration job definition.
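As a first step toward that automation, the gcloud invocations from this guide can be assembled in one place (a sketch: the service and job names match the examples above, and a real script would add error handling and the smoke tests from step 8):

```python
import subprocess

def release_commands(image: str, region: str) -> list[list[str]]:
    """Ordered gcloud steps for one release: migrate first, then deploy."""
    return [
        ["gcloud", "run", "jobs", "update", "django-migrate",
         "--image", image, "--region", region],
        ["gcloud", "run", "jobs", "execute", "django-migrate",
         "--region", region, "--wait"],
        ["gcloud", "run", "deploy", "django-app",
         "--image", image, "--region", region],
    ]

def release(image: str, region: str) -> None:
    for cmd in release_commands(image, region):
        subprocess.run(cmd, check=True)  # stop on the first failing step
```

Separating command construction from execution keeps the release order reviewable and testable before anything touches production.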
Internal links
For the settings side, see Django Environment Variables for Production.
For a more traditional VM deployment path, compare Deploy Django with Gunicorn and Nginx on Ubuntu.
For database hardening details, see Configure PostgreSQL for Django Production.
For release recovery procedures, use Django Deployment Rollback Checklist.
FAQ
Do I need Docker to deploy Django on Google Cloud Run?
Yes. Cloud Run runs containers, so your Django app must be packaged as a container image.
How should I serve static files for Django on Cloud Run?
For many apps, WhiteNoise inside the container is the simplest option. For larger assets or CDN-heavy setups, use object storage and a CDN.
Can Django media uploads be stored on the Cloud Run filesystem?
No. The filesystem is ephemeral. Use persistent object storage for uploads.
How do I run Django migrations safely on Cloud Run?
Run migrations as a separate controlled step, not automatically in container startup. A Cloud Run Job using the same image, secrets, and Cloud SQL connection is a practical approach.
What is the fastest way to roll back a failed Cloud Run deployment?
Shift traffic back to the previous healthy revision with gcloud run services update-traffic. If the failure involved a database migration, review database recovery separately.