Backup and Restore PostgreSQL for Django Apps
A Django app is only recoverable if its PostgreSQL data is recoverable. In production, that means more than occasionally running pg_dump.
Problem statement
You need a repeatable backup method, secure storage, a tested restore path, and a rollback plan for bad deploys, failed migrations, accidental deletes, or database host loss.
A common failure pattern is: the team has backups, but no one has verified that a restore works with the current Django app, migrations, extensions, and credentials. A backup that has never been restored is not a recovery plan.
Quick answer
For most Django apps, use pg_dump to create consistent logical backups, store them securely off the database host, and regularly restore them into a separate database or staging environment to verify they work. Take a backup before risky deploys or migrations, and document the restore steps before you need them in an incident.
Also be clear about limits: a logical backup restores the database to the time the dump was taken. It does not preserve writes made after that point, so it is not a substitute for point-in-time recovery when low data loss is required.
Step-by-step solution
1. Choose the right backup method
For most small to medium Django deployments, logical backups are the practical default.
Use pg_dump for routine Django backups
Logical backups are a good fit when you need:
- pre-deploy backups
- daily scheduled backups
- portable dumps you can restore elsewhere
- schema and data export for one database
Plain SQL dumps are easy to inspect. Custom-format dumps are usually better for production restores because pg_restore can restore them more flexibly.
Know when logical backups are not enough
If you need point-in-time recovery, very low data-loss tolerance, or you run a large PostgreSQL database, you will usually need physical backups plus WAL archiving. That is a different setup than pg_dump. This page focuses on logical backup and restore, which is still the right baseline for many Django teams.
2. Collect database connection details safely
Do not hardcode credentials into commands or scripts committed to git.
A typical Django deployment exposes database settings through environment variables:
export DB_NAME="myapp"
export DB_USER="myapp_user"
export DB_PASSWORD="replace-me"
export DB_HOST="127.0.0.1"
export DB_PORT="5432"
If you load values from a .env file, do it in a controlled shell session on the server or backup runner:
set -a
. /srv/myapp/.env
set +a
Only do this if the .env file is controlled, trusted, and not writable by other users on the system.
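That caution can be enforced mechanically. The load_env helper below is a hypothetical sketch, not a standard tool: it refuses to source a .env file that is writable by anyone other than its owner, then uses the same set -a pattern shown above.

```shell
# Source a .env file only if no one but the owner can modify it.
# load_env is an illustrative helper; the path you pass is your own.
load_env() {
  local f="$1"
  # -perm /022 matches files writable by group or others
  if [ -n "$(find "$f" -maxdepth 0 -perm /022 2>/dev/null)" ]; then
    echo "refusing to source $f: writable by group or others" >&2
    return 1
  fi
  set -a   # export every variable the file assigns
  . "$f"
  set +a
}
```

Usage: load_env /srv/myapp/.env before running the backup commands in the same shell.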
Be careful with shell history and CI logs. If possible, use a .pgpass file instead of putting passwords inline.
Example format for ~/.pgpass:
127.0.0.1:5432:myapp:myapp_user:replace-me
Restrict permissions:
chmod 600 ~/.pgpass
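This matters because libpq ignores a .pgpass file whose permissions are looser than 0600, which from a backup job's point of view looks like an unexplained authentication failure. A hypothetical pre-flight check (assuming GNU stat) can catch this before the job runs:

```shell
# libpq silently ignores ~/.pgpass when its mode is looser than 0600.
# check_pgpass_mode is an illustrative helper, not a standard tool.
check_pgpass_mode() {
  local f="${1:-$HOME/.pgpass}"
  local mode
  mode=$(stat -c '%a' "$f") || return 1
  case "$mode" in
    600|400) return 0 ;;  # owner-only access is fine
    *) echo "insecure mode $mode on $f" >&2; return 1 ;;
  esac
}
```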
3. Create a PostgreSQL backup before a risky Django deploy
Take a backup before schema migrations, data migrations, or major releases.
Plain SQL backup
Use this when you want a simple, portable SQL file:
backup_file="myapp-production-$(date +%Y%m%d-%H%M%S).sql"
pg_dump \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
> "$backup_file"
Custom-format backup
This is usually the better production choice:
backup_file="myapp-production-$(date +%Y%m%d-%H%M%S).dump"
pg_dump \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
-F c \
-f "$backup_file"
Why use -F c:
- works with pg_restore
- supports more selective restore operations
- usually better for operational restore workflows
Verify the backup file exists and is readable
Immediately check that the dump was created and is not empty:
ls -lh "$backup_file"
test -s "$backup_file"
For custom dumps, also confirm PostgreSQL can read the archive:
pg_restore --list "$backup_file" | head
If you want an integrity record, create a checksum tied to the same filename:
sha256sum "$backup_file" > "$backup_file.sha256"
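These checks can be bundled into one pre-flight helper that you run before every restore. verify_dump is an illustrative sketch, assuming GNU coreutils and that the .sha256 file sits next to the dump and records a relative filename, as created above:

```shell
# Re-verify a dump before relying on it: non-empty file + matching checksum.
# verify_dump is a hypothetical helper, not part of PostgreSQL.
verify_dump() {
  local f="$1"
  test -s "$f" || { echo "missing or empty: $f" >&2; return 1; }
  # sha256sum -c re-reads the filename recorded inside the .sha256 file,
  # so run this from the directory where the checksum was generated
  sha256sum -c --quiet "$f.sha256" || { echo "checksum mismatch: $f" >&2; return 1; }
}
```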
Include globals only when needed
Most Django app backups should focus on the application database, not the whole PostgreSQL cluster. pg_dump of a single database does not include roles, grants, or other cluster-level objects; those can be exported separately with pg_dumpall --globals-only. Do so only if your recovery plan depends on recreating them outside managed provisioning.
4. Protect and store backups off-host
A dump file on the same disk as the database is not enough.
Restrict local permissions
chmod 600 "$backup_file" "$backup_file.sha256"
Copy the backup to another system
Use SSH-based transfer to a separate backup host:
scp "$backup_file" "$backup_file.sha256" backup@example-backup-host:/backups/myapp/
Or with rsync:
rsync -av --progress "$backup_file" "$backup_file.sha256" backup@example-backup-host:/backups/myapp/
Verification matters here too. Confirm the file arrived and has the expected size.
Basic backup security rules
- do not leave dump files world-readable
- do not rely only on local storage on the database host
- use encrypted transport for remote copy
- encrypt backup storage at rest, especially on backup hosts or object storage
- apply a retention policy so disks do not fill up
If you use object storage, prefer encrypted buckets and restricted access policies.
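The retention rule can be a simple age-based sweep on the backup host. prune_backups is a hypothetical helper assuming GNU find; the '*.dump*' pattern also matches the .sha256 files created alongside each dump, so adjust it to your own naming scheme.

```shell
# Delete backup artifacts older than a given number of days.
# prune_backups is illustrative, not a standard tool.
prune_backups() {
  local dir="$1" days="$2"
  # -mtime +N matches files older than N whole days; -print logs deletions
  find "$dir" -type f -name '*.dump*' -mtime +"$days" -print -delete
}
```

A nightly run such as prune_backups /backups/myapp 30 keeps a rolling 30-day window.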
5. Restore safely into a new database first
Never test a restore by writing directly into production.
Create a fresh restore target
createdb \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
restored_myapp
If the role does not have permission to create databases, use an admin role for this step.
Restore a plain SQL dump
psql \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d restored_myapp \
< myapp-production-20260424-120000.sql
Restore a custom dump
pg_restore \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d restored_myapp \
--clean --if-exists \
myapp-production-20260424-120000.dump
--clean --if-exists is useful when the target already contains objects and you intend a full replacement. Use it carefully. For a brand new empty database, it is usually harmless but not strictly required.
This restores database objects and data into the target database. It does not recreate cluster-wide roles, grants, or other global objects unless you back those up separately.
Verification checks after restore
Confirm tables exist:
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d restored_myapp -c "\dt"
Check migration history:
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d restored_myapp -c "SELECT app, name FROM django_migrations ORDER BY app, name;"
Check that the restore command exited successfully. Note that $? reflects only the most recent command, so inspect it immediately after pg_restore or psql finishes, before running anything else:
echo $?
A successful restore returns 0.
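Because every command overwrites $?, the robust pattern is to capture the status at the moment the restore finishes. A minimal demonstration, with a stand-in function in place of the real pg_restore invocation:

```shell
# run_restore stands in for your real pg_restore/psql command.
run_restore() { false; }

if run_restore; then
  echo "restore ok"
else
  status=$?   # capture immediately; any later command overwrites $?
  echo "restore failed with status $status"
fi
```

With the failing stand-in, this prints "restore failed with status 1"; in a script you would exit non-zero instead of just logging.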
6. Validate the restored database with Django
A PostgreSQL restore is not complete until the Django app can use it.
Point your Django app or staging environment at the restored database, then run:
python manage.py check
python manage.py showmigrations
Also verify:
- admin login works
- expected users and recent records exist
- background job dependencies are present
- critical read and write paths behave correctly
If your app depends on PostgreSQL extensions such as pgcrypto, uuid-ossp, or full-text search features, verify those exist in the restored database too.
7. Plan rollback and incident recovery
Backups are most useful when tied to a deployment process.
Before migrations or risky releases
Document a pre-deploy sequence:
- take backup
- verify backup file exists and is readable
- deploy code
- run migrations
- run smoke checks
If the deploy fails
Rollback often requires both:
- restoring the previous application version
- restoring the database to a compatible state
Be careful with backward-incompatible migrations. If a migration drops columns or rewrites data, restoring only the app code may not be enough. You may need a full database restore from the pre-deploy backup.
Remember the limit noted earlier: a logical backup only captures data as of the moment the dump was taken, so any writes made after that point are lost when you restore it. It is not a substitute for point-in-time recovery when low data loss is required.
For high-risk restores or cutovers, consider quiescing writes or putting the app into maintenance mode so you do not create new data while switching database state.
Define RPO and RTO
For operations planning, decide:
- RPO: how much data loss is acceptable
- RTO: how quickly the service must recover
Those targets determine whether daily pg_dump is enough or whether you need continuous archiving and point-in-time recovery.
A practical baseline for many small teams is:
- scheduled daily backups at minimum
- an extra pre-deploy backup before risky schema or data changes
- regular restore drills, such as monthly or after major database changes
8. Automate recurring backups
For a production Django app, recurring backups should not depend on manual shell access.
Use:
- cron
- systemd timers
- a CI job running in a trusted environment
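For cron, a minimal entry might look like the following; the script path, user, and log file are placeholders for your own:

```shell
# /etc/cron.d/myapp-backup: nightly dump at 02:30 as the backup user
30 2 * * * backup /usr/local/bin/myapp-backup.sh >> /var/log/myapp-backup.log 2>&1
```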
A simple backup script should:
- load environment variables
- create a timestamped dump
- verify file creation
- verify the archive is readable
- generate a checksum
- upload off-host
- prune old backups
- exit non-zero on failure
When to convert this into a reusable script
If you are taking pre-deploy backups repeatedly, copying dumps off-host manually, or testing restores with the same sequence each time, convert the process into a script or template. The first automation targets should usually be timestamped backup creation, remote upload, retention cleanup, and restore verification into a temporary database.
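The create-verify-checksum sequence is the natural first target. backup_and_verify below is a sketch, not a standard tool: it takes an output path plus the dump command to run, so the same wrapper works for pg_dump in production and for a harmless command when testing the script itself.

```shell
set -eu  # abort on any failed command or unset variable

# Run a dump command, confirm the output file is non-empty, and record a
# checksum next to it. backup_and_verify is a hypothetical wrapper.
backup_and_verify() {
  local out="$1"; shift
  "$@" > "$out"                     # run the dump command given as arguments
  test -s "$out"                    # fail the script if the dump is empty
  sha256sum "$out" > "$out.sha256"  # integrity record tied to the filename
}

# Production use would look like:
#   backup_and_verify "myapp-$(date +%Y%m%d-%H%M%S).dump" \
#     pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -F c
```

Remote upload and retention cleanup can then be added as further steps in the same script, each one failing loudly under set -eu.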
Explanation
This setup works because it covers the full recovery chain, not just dump creation. pg_dump gives you a consistent logical backup of the Django application database. Off-host storage protects against server loss. Restoring into a separate database verifies that the dump is usable without risking live data. Running Django checks after restore confirms that the application, not just PostgreSQL, can operate on the recovered dataset.
Use plain SQL dumps when portability and inspection matter most. Use custom-format dumps when you want a cleaner operational restore path with pg_restore. If your database is large or your recovery requirements are strict, move beyond logical backups to physical backups and WAL archiving for point-in-time recovery.
Edge cases / notes
Docker-based Django deployments
If PostgreSQL runs in Docker, you can run pg_dump from:
- the database container
- a separate client container on the same network
- the app container, if PostgreSQL client tools are installed
Do not assume container-local storage is durable backup storage. Copy dumps to persistent external storage.
Managed PostgreSQL services
Provider snapshots are useful, but application teams often still want logical dumps because they are portable and easier to test in another environment. Snapshots alone may not give you the workflow you need for app-level recovery drills.
Backups do not include Django media files
PostgreSQL backups do not cover uploaded media. If users upload files, your disaster recovery plan must include media storage as well. Static files are usually rebuildable during deploy; media files usually are not.
Version compatibility
Use PostgreSQL client tools that are compatible with your server version. Test backup and restore with the same major PostgreSQL version family you plan to use in production, or validate the upgrade path separately before an incident. Mismatched versions can cause restore problems or unsupported dump behavior.
Internal links
To go deeper, see:
- Logical vs Physical PostgreSQL Backups for Django Deployments
- How to Deploy Django with PostgreSQL and Gunicorn
- How to Run Django Migrations Safely in Production
- How to Roll Back a Failed Django Deploy
FAQ
How often should I back up a PostgreSQL database for a Django app?
That depends on acceptable data loss. Many apps need at least daily backups plus a pre-deploy backup before migrations or risky releases. If losing even a few minutes of data is unacceptable, scheduled pg_dump alone is usually not enough.
Should I use pg_dump or snapshots for Django production backups?
pg_dump is a strong default for application-level, portable backups. Infrastructure snapshots can help with host recovery, but they are not always as portable or easy to test at the application level. Many teams use both.
Can I restore a PostgreSQL backup to a different server or environment?
Yes. That is one of the main advantages of logical backups. In fact, restoring to a separate database or staging server is the safest way to verify that your backups are actually usable.
What should I verify after restoring a Django PostgreSQL database?
At minimum, verify:
- expected tables exist
- django_migrations looks correct
- required extensions are present
- Django can connect successfully
- admin login and critical app flows work
Do PostgreSQL backups cover uploaded media files in Django?
No. Database backups do not include uploaded files stored on disk or object storage. Your recovery plan must include both PostgreSQL data and Django media storage.