Deploy Django with Ansible: Step-by-Step
Problem statement
Manual Django deployment usually starts simple and becomes unreliable fast. You SSH into a server, pull code, install packages, update environment variables, run migrations, restart Gunicorn, reload Nginx, and hope you did not miss a step.
That breaks down in production for three reasons:
- servers drift over time
- releases become hard to repeat exactly
- rollback gets risky when changes are applied manually
If you want to deploy Django with Ansible safely, the goal is not just to copy files to a server. The goal is to automate a complete production path: install dependencies, place secrets securely, deploy code, run Django management commands in the right order, configure Gunicorn and Nginx, and verify the app is healthy before you consider the release done.
This guide shows a practical single-server pattern for Ubuntu or Debian using:
- Django
- PostgreSQL
- Gunicorn
- Nginx
- systemd
- Ansible from a separate control machine
Quick answer
A solid Django deployment with Ansible usually looks like this:
- define inventory and host variables
- bootstrap the server with packages, users, and directories
- deploy code from Git to a versioned release directory
- create a virtualenv and install requirements
- render a protected environment file from encrypted vars
- run `migrate`, `collectstatic`, and `check --deploy` with the production environment loaded
- switch the `current` symlink only after release checks succeed
- configure Gunicorn with systemd
- configure Nginx as a reverse proxy
- validate the app with `curl`, `systemctl`, logs, and static asset checks
- keep at least one previous release available for rollback
Step-by-step solution
Choose the target architecture before writing the playbook
Use a simple production stack first:
- Ubuntu server
- PostgreSQL local or managed
- Gunicorn bound to `127.0.0.1:8000`
- Nginx listening on 80 and 443
- Django code deployed under `/srv/myapp`
- systemd managing Gunicorn
- Ansible running from your laptop or CI runner
This guide assumes one application server. The same pattern extends to multiple hosts later by splitting groups like web, db, and worker.
Prepare your Ansible project structure
A clean layout helps keep bootstrap and deploy work separate:
ansible/
├── inventory/
│   └── production.ini
├── group_vars/
│   ├── production.yml
│   └── vault.yml
├── playbooks/
│   ├── bootstrap.yml
│   └── deploy.yml
└── templates/
    ├── gunicorn.service.j2
    ├── nginx-site.conf.j2
    └── env.j2
Define inventory and host variables
inventory/production.ini
[web]
app1.example.com ansible_user=deploy
This assumes the deploy SSH user already exists on the server and has sudo access. If not, create it out of band or use your initial provisioner account for bootstrap.
group_vars/production.yml
app_name: myapp
app_user: myapp
app_group: myapp
app_domain: app1.example.com
deploy_root: /srv/myapp
releases_dir: "{{ deploy_root }}/releases"
shared_dir: "{{ deploy_root }}/shared"
current_dir: "{{ deploy_root }}/current"
media_dir: "{{ shared_dir }}/media"
repo_url: "git@github.com:yourorg/myapp.git"
deploy_version: "main"
venv_path: "{{ shared_dir }}/venv"
env_file: "{{ shared_dir }}/.env"
django_project_dir: "{{ current_dir }}"
django_wsgi_module: "config.wsgi:application"
gunicorn_bind: "127.0.0.1:8000"
python_executable: python3
Store secrets safely
Do not commit Django secrets in plaintext. Use Ansible Vault or another secret backend.
Create the vault file:
ansible-vault create group_vars/vault.yml
Inside it:
django_secret_key: "replace-me"
db_name: "myapp"
db_user: "myapp"
db_password: "replace-me"
db_host: "127.0.0.1"
db_port: "5432"
csrf_trusted_origins:
  - "https://app1.example.com"
Your deployed environment file should be readable only by the app user.
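You can verify the mode quickly. The sketch below demonstrates the check against a temporary file standing in for `/srv/myapp/shared/.env`; on the server you would point `stat` at the real path instead:

```shell
# Create a stand-in for the deployed env file and lock it down.
ENV_FILE="$(mktemp)"      # substitute /srv/myapp/shared/.env on a real host
chmod 600 "$ENV_FILE"     # owner read/write only

# Print the octal mode; on the server also check ownership with
# stat -c '%a %U:%G' /srv/myapp/shared/.env
MODE="$(stat -c '%a' "$ENV_FILE")"
echo "$MODE"              # → 600
```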
Bootstrap the server with Ansible
playbooks/bootstrap.yml
- hosts: web
  become: true
  vars_files:
    - ../group_vars/production.yml
    - ../group_vars/vault.yml
  tasks:
    - name: Install system packages
      apt:
        name:
          - python3
          - python3-venv
          - python3-pip
          - python3-dev
          - build-essential
          - libpq-dev
          - git
          - nginx
          - ufw
        state: present
        update_cache: true

    - name: Create app group
      group:
        name: "{{ app_group }}"
        state: present

    - name: Create app user
      user:
        name: "{{ app_user }}"
        group: "{{ app_group }}"
        system: true
        shell: /usr/sbin/nologin
        create_home: false

    - name: Create deployment directories
      file:
        path: "{{ item }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: "0755"
      loop:
        - "{{ deploy_root }}"
        - "{{ releases_dir }}"
        - "{{ shared_dir }}"
        - "{{ media_dir }}"
Run it:
ansible-playbook -i inventory/production.ini playbooks/bootstrap.yml --ask-vault-pass
Verification:
ssh deploy@app1.example.com 'ls -ld /srv/myapp /srv/myapp/releases /srv/myapp/shared /srv/myapp/shared/media'
For firewall basics, allow only SSH, HTTP, and HTTPS if you use UFW. Keep SSH restrictions aligned with your access method.
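A minimal UFW baseline for this stack might look like the following. This is a sketch, not runnable in isolation: it assumes UFW and Nginx were installed by the bootstrap play and that the `Nginx Full` application profile (ports 80 and 443) is registered; adjust the SSH rule to match how you actually connect before enabling the firewall.

```shell
# Allow SSH first so enabling the firewall cannot lock you out.
ufw allow OpenSSH

# 'Nginx Full' opens both 80 and 443 via the profile the Nginx
# apt package registers with UFW.
ufw allow 'Nginx Full'

# Enable non-interactively, then show the resulting rules.
ufw --force enable
ufw status verbose
```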
Deploy the Django application code
playbooks/deploy.yml
- hosts: web
  become: true
  vars_files:
    - ../group_vars/production.yml
    - ../group_vars/vault.yml
  vars:
    release_name: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
    release_path: "{{ releases_dir }}/{{ release_name }}"
  tasks:
    - name: Create release directory
      file:
        path: "{{ release_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: "0755"

    - name: Checkout application code
      git:
        repo: "{{ repo_url }}"
        dest: "{{ release_path }}"
        version: "{{ deploy_version }}"
      become_user: "{{ app_user }}"

    - name: Create virtualenv
      command: "{{ python_executable }} -m venv {{ venv_path }}"
      args:
        creates: "{{ venv_path }}/bin/activate"

    - name: Install Python dependencies
      pip:
        requirements: "{{ release_path }}/requirements.txt"
        virtualenv: "{{ venv_path }}"

    - name: Render environment file
      template:
        src: ../templates/env.j2
        dest: "{{ env_file }}"
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: "0600"
Prefer preloading the Git host key in known_hosts instead of relying on trust-on-first-use behavior in production.
templates/env.j2
DEBUG=False
SECRET_KEY="{{ django_secret_key }}"
ALLOWED_HOSTS="{{ app_domain }}"
CSRF_TRUSTED_ORIGINS="{{ csrf_trusted_origins | join(',') }}"
DATABASE_NAME="{{ db_name }}"
DATABASE_USER="{{ db_user }}"
DATABASE_PASSWORD="{{ db_password }}"
DATABASE_HOST="{{ db_host }}"
DATABASE_PORT="{{ db_port }}"
For reproducibility, deploy a tag or commit SHA instead of always using main.
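The deploy tasks that follow load this file with a `set -a` sourcing pattern. The sketch below shows why that works in isolation: `set -a` marks every variable assigned while it is active for export, so a child process such as `manage.py` inherits the values. A temporary file stands in for the deployed `.env`.

```shell
# Stand-in for /srv/myapp/shared/.env
ENV_FILE="$(mktemp)"
cat > "$ENV_FILE" <<'EOF'
DEBUG=False
ALLOWED_HOSTS="app1.example.com"
EOF

set -a          # auto-export every assignment that follows
. "$ENV_FILE"   # source the file; its variables become environment variables
set +a          # stop auto-exporting

# A child process now sees the variables, just like manage.py would:
CHILD_SEES="$(sh -c 'printf %s "$ALLOWED_HOSTS"')"
echo "$CHILD_SEES"   # → app1.example.com
rm -f "$ENV_FILE"
```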
Run Django production tasks safely
Run these tasks before switching traffic to the new release where possible. If you switch the symlink first, a failed migration or check can leave current pointing at a bad release.
Add these tasks to playbooks/deploy.yml after the environment file is rendered:
- name: Run database migrations
  shell: "set -a && . {{ env_file }} && set +a && {{ venv_path }}/bin/python manage.py migrate --noinput"
  args:
    chdir: "{{ release_path }}"
  become_user: "{{ app_user }}"
  environment:
    DJANGO_SETTINGS_MODULE: "config.settings"

- name: Collect static files
  shell: "set -a && . {{ env_file }} && set +a && {{ venv_path }}/bin/python manage.py collectstatic --noinput"
  args:
    chdir: "{{ release_path }}"
  become_user: "{{ app_user }}"
  environment:
    DJANGO_SETTINGS_MODULE: "config.settings"

- name: Run Django deployment checks
  shell: "set -a && . {{ env_file }} && set +a && {{ venv_path }}/bin/python manage.py check --deploy"
  args:
    chdir: "{{ release_path }}"
  become_user: "{{ app_user }}"
  environment:
    DJANGO_SETTINGS_MODULE: "config.settings"

- name: Point current symlink to release
  file:
    src: "{{ release_path }}"
    dest: "{{ current_dir }}"
    state: link
    force: true
Verification after this stage:
ansible web -i inventory/production.ini -b --become-user myapp -m shell -a "set -a && . /srv/myapp/shared/.env && set +a && /srv/myapp/shared/venv/bin/python /srv/myapp/current/manage.py showmigrations"
If migrations are destructive or hard to reverse, take a database backup before deploy. That matters more than the automation tool.
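One way to take that backup is a pre-deploy `pg_dump` step. This is a sketch with assumed paths and credentials, not runnable outside a host with a live PostgreSQL instance; `-Fc` writes a custom-format dump that `pg_restore` can restore selectively.

```shell
# Assumed layout: backups live under the shared directory so they
# survive release switches. PGPASSWORD is read from the environment.
BACKUP_DIR=/srv/myapp/shared/backups
mkdir -p "$BACKUP_DIR"

PGPASSWORD="$DATABASE_PASSWORD" pg_dump -Fc \
  -h 127.0.0.1 -p 5432 -U myapp myapp \
  -f "$BACKUP_DIR/pre-deploy-$(date +%Y%m%d%H%M%S).dump"
```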
Configure Gunicorn with systemd
templates/gunicorn.service.j2
[Unit]
Description=Gunicorn for {{ app_name }}
After=network.target

[Service]
User={{ app_user }}
Group={{ app_group }}
WorkingDirectory={{ current_dir }}
EnvironmentFile={{ env_file }}
ExecStart={{ venv_path }}/bin/gunicorn --workers 3 --bind {{ gunicorn_bind }} {{ django_wsgi_module }}
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Add this task, plus the handlers at play level (a sibling of tasks:):
- name: Install Gunicorn systemd unit
  template:
    src: ../templates/gunicorn.service.j2
    dest: "/etc/systemd/system/{{ app_name }}-gunicorn.service"
    mode: "0644"
  notify:
    - reload systemd
    - restart gunicorn

handlers:
  - name: reload systemd
    systemd:
      daemon_reload: true

  - name: restart gunicorn
    systemd:
      name: "{{ app_name }}-gunicorn"
      state: restarted
      enabled: true
Verification:
ssh deploy@app1.example.com 'systemctl status myapp-gunicorn --no-pager'
ssh deploy@app1.example.com 'journalctl -u myapp-gunicorn -n 50 --no-pager'
ssh deploy@app1.example.com 'curl -I -H "Host: app1.example.com" http://127.0.0.1:8000/'
Gunicorn binds to localhost, so the curl check must run on the server, and the Host header must match ALLOWED_HOSTS.
Configure Nginx as the reverse proxy
Use HTTP only for initial local validation, but treat TLS as part of the production setup. At minimum, terminate HTTPS in Nginx or Caddy before serving public traffic.
templates/nginx-site.conf.j2
server {
    listen 80;
    server_name {{ app_domain }};
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name {{ app_domain }};

    ssl_certificate /etc/letsencrypt/live/{{ app_domain }}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{{ app_domain }}/privkey.pem;

    location /static/ {
        alias {{ current_dir }}/staticfiles/;
    }

    location /media/ {
        alias {{ media_dir }}/;
    }

    location / {
        proxy_pass http://{{ gunicorn_bind }};
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
    }
}
Tasks:
- name: Install Nginx site config
  template:
    src: ../templates/nginx-site.conf.j2
    dest: "/etc/nginx/sites-available/{{ app_name }}.conf"
    mode: "0644"

- name: Enable Nginx site
  file:
    src: "/etc/nginx/sites-available/{{ app_name }}.conf"
    dest: "/etc/nginx/sites-enabled/{{ app_name }}.conf"
    state: link
    force: true

- name: Disable default Nginx site
  file:
    path: /etc/nginx/sites-enabled/default
    state: absent

- name: Validate Nginx config
  command: nginx -t
  changed_when: false

- name: Reload Nginx
  systemd:
    name: nginx
    state: reloaded
    enabled: true
Verification:
curl -I http://app1.example.com
curl -Ik https://app1.example.com
ssh deploy@app1.example.com 'systemctl status nginx --no-pager'
If Django is behind a TLS-terminating proxy, configure SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https"), set CSRF_TRUSTED_ORIGINS, and enable secure cookies as appropriate. In many deployments that also means setting SESSION_COOKIE_SECURE = True, CSRF_COOKIE_SECURE = True, and reviewing whether SECURE_SSL_REDIRECT should be enabled in Django.
Build a repeatable release workflow with Ansible
Keep bootstrap and deploy separate. Bootstrap is one-time or infrequent. Deploy runs every release.
A practical release sequence is:
- create a new release directory
- checkout the target Git ref
- install dependencies into the shared virtualenv
- render the environment file
- run migrations and collect static against the new release
- run `check --deploy`
- switch `current` to the new release
- restart Gunicorn
- reload Nginx after config validation
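The switch step in that sequence is worth seeing in isolation. `ln -sfn` repoints the symlink in one command, so traffic only ever sees a fully prepared release. This sketch uses a temporary directory in place of `/srv/myapp`:

```shell
# Temp dirs stand in for /srv/myapp and two timestamped releases.
DEPLOY_ROOT="$(mktemp -d)"
mkdir -p "$DEPLOY_ROOT/releases/20240101000000" \
         "$DEPLOY_ROOT/releases/20240102000000"

# First deploy: point current at the initial release.
ln -sfn "$DEPLOY_ROOT/releases/20240101000000" "$DEPLOY_ROOT/current"

# New release passed migrate/collectstatic/check: repoint current.
# -n treats an existing symlink-to-directory as the link itself
# instead of descending into it.
ln -sfn "$DEPLOY_ROOT/releases/20240102000000" "$DEPLOY_ROOT/current"

readlink "$DEPLOY_ROOT/current"
```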
Re-running the playbook should not break the host or restart services unnecessarily.
When to automate this further
Once you manage more than one app or environment, this playbook structure is a good candidate for reusable roles and templates. The first pieces worth standardizing are environment file rendering, systemd unit templates, Nginx site templates, pre-deploy checks, release retention, and symlink switching.
Verification after deployment
Check the release from outside and on the server:
curl -I http://app1.example.com
curl -Ik https://app1.example.com
curl -Ik https://app1.example.com/admin/login/
ssh deploy@app1.example.com 'systemctl is-active myapp-gunicorn nginx'
ssh deploy@app1.example.com 'journalctl -u myapp-gunicorn -n 50 --no-pager'
ssh deploy@app1.example.com 'readlink -f /srv/myapp/current'
Confirm:
- `DEBUG=False`
- `ALLOWED_HOSTS` matches your domain
- `CSRF_TRUSTED_ORIGINS` includes your HTTPS origin
- static files load over HTTPS
- Gunicorn stays active after restart
- Nginx proxies to Gunicorn successfully
- no import or settings errors appear in logs
- the active release matches the version you intended to deploy
Rollback and recovery notes
If a release fails, point current back to a known-good release and restart Gunicorn:
ln -sfn /srv/myapp/releases/<previous-release-name> /srv/myapp/current
systemctl restart myapp-gunicorn
Keep at least one known-good release directory so rollback is actually possible.
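If you name releases with the timestamp pattern from the deploy play, the previous release can be selected automatically because the names sort lexically. A sketch, with temporary directories standing in for `/srv/myapp`:

```shell
# Stand-in layout: two releases, current pointing at the newest.
DEPLOY_ROOT="$(mktemp -d)"
mkdir -p "$DEPLOY_ROOT/releases/20240101000000" \
         "$DEPLOY_ROOT/releases/20240102000000"
ln -sfn "$DEPLOY_ROOT/releases/20240102000000" "$DEPLOY_ROOT/current"

# Timestamped names sort lexically, so the second-to-last entry is
# the previous release.
PREVIOUS="$(ls -1 "$DEPLOY_ROOT/releases" | sort | tail -n 2 | head -n 1)"
ln -sfn "$DEPLOY_ROOT/releases/$PREVIOUS" "$DEPLOY_ROOT/current"

readlink "$DEPLOY_ROOT/current"
```

On a real host, follow the switch with `systemctl restart myapp-gunicorn`.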
If the problem is code-related, redeploy a previous tag:
deploy_version: "v1.2.3"
Failed migrations are different. If a schema change is irreversible, rolling back code alone may not restore the app. For risky migrations, take a backup first and define whether rollback means restoring the database or shipping a forward fix.
For bad Nginx changes, always run `nginx -t` before reloading. The same principle applies to systemd units: write the file, reload the daemon, and inspect service status immediately.
Explanation
This setup works because Ansible handles both server state and release steps in one repeatable workflow. You can use Ansible to deploy a Django app without turning deployment into a collection of shell notes.
The release-directory pattern gives you a clean rollback path if you retain previous releases. systemd keeps Gunicorn managed correctly. Nginx handles client traffic and static files. Django management commands run from the deployed code with the production environment loaded. That is a good default for production Django deployment with Ansible on a single Ubuntu host.
Alternatives exist:
- deploy from a built artifact instead of Git if you want stricter reproducibility
- use a Unix socket instead of TCP for Gunicorn-to-Nginx communication
- split static and media to object storage or CDN for larger deployments
- move database and Redis off-host as the app grows
Edge cases / notes
- If user uploads are stored locally, do not keep them inside release directories. Use a shared media path.
- If `collectstatic` output is inside the release, old releases may contain stale assets. A shared static path or CDN-backed strategy may be better later.
- Large uploads may require Nginx `client_max_body_size`.
- Long-running requests may require Gunicorn and Nginx timeout tuning.
- If private Git access is required, make sure the deploy user has the right SSH key or use an artifact upload workflow instead.
- If your Django settings read environment variables with a library like `django-environ`, make sure the variable names in `.env` match your settings module exactly.
- Before enabling the HTTPS Nginx server block, make sure certificates exist at the configured paths or adjust the template for your TLS method.
Internal links
For the broader production model, see How Django Deployment Works in Production.
If you want the app-server and reverse-proxy details in isolation, read Deploy Django with Gunicorn and Nginx on Ubuntu.
If you are deploying an ASGI stack instead, read Deploy Django ASGI with Uvicorn and Nginx.
If you want a simpler HTTPS edge setup, read Deploy Django with Caddy and Automatic HTTPS.
For a final production review, use Django Deployment Checklist for Production.
FAQ
How do I store Django secrets securely in an Ansible deployment?
Use Ansible Vault or another external secret backend. Render secrets into an environment file on the server with 0600 permissions and ownership set to the app user. Do not commit plaintext secrets to Git.
Should Ansible run Django migrations automatically during deploy?
Usually yes, but only if you understand the migration risk. For simple additive schema changes, automated migrations are common. For destructive or high-impact changes, add a backup step and a deliberate release procedure.
Is it better to deploy Django from Git or from a built artifact?
Git is simpler for small teams and single-server deployments. Built artifacts are often better when you need stricter reproducibility, supply-chain controls, or identical releases across many servers.
How do I roll back if the new Django release fails after restart?
Switch the current symlink back to the previous known-good release and restart Gunicorn. If the failure involved a database migration, code rollback alone may not be enough, so your release process should define backup and restore expectations.
Can I use the same Ansible structure for staging and production?
Yes. Keep separate inventory and variable files for each environment. That is usually the first step toward a reusable Ansible playbook for Django deployment across multiple servers.