Fix Nginx 413 Request Entity Too Large for Django Uploads
Problem statement
If a Django file upload fails with 413 Request Entity Too Large, the rejection usually happens before Django sees the request. In a production stack like:
browser → Nginx → Gunicorn/Uvicorn → Django
Nginx often enforces the first upload size limit. That means:
- small files succeed
- larger files fail immediately
- Django logs show nothing for the failed request
- Nginx error logs show request body rejection
This is a common production issue when upload requirements change but the reverse proxy limit stays at its default or an older value. The fix is usually to set client_max_body_size correctly in Nginx, reload safely, and then verify the rest of the stack can also handle the intended file size.
Quick answer
To fix Nginx 413 Request Entity Too Large for Django uploads, do this:
- Confirm Nginx is returning the 413.
- Set client_max_body_size in the active Nginx config for the affected site or upload endpoint.
- Validate with nginx -t.
- Reload Nginx with no connection drop.
- Test uploads below, near, and above the configured limit.
- Check Django, app server, container, ingress, and disk space limits so the full path supports the file size safely.
Do not remove upload limits entirely unless you have a specific reason and compensating controls.
Step-by-step solution
1. Confirm that Nginx is the component returning 413
Check the browser response first. If the response page looks like a plain Nginx error page, that is already a strong signal.
Then inspect logs on the server:
sudo tail -f /var/log/nginx/error.log
Or if your distro logs through systemd:
sudo journalctl -u nginx -f
You may see messages like:
client intended to send too large body: 31457280 bytes
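The byte count in that log message can be sanity-checked against the limit you intend to set. A quick conversion, using the 31457280-byte figure from the example line above:

```python
# Convert the byte count from the Nginx error log into megabytes
# so it can be compared directly with client_max_body_size.
def bytes_to_mb(n_bytes: int) -> float:
    return n_bytes / (1024 * 1024)

rejected = 31457280  # value from the example log line above
print(f"{bytes_to_mb(rejected):.1f} MB")  # 30.0 MB
```

Here the client tried to send a 30 MB body, so any limit at or below 30M would reject it.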
Also review access logs for the failing request:
sudo tail -f /var/log/nginx/access.log
If Nginx returns the 413 directly, Django and Gunicorn often will not log the request at all.
If the server hosts multiple Django sites, confirm which virtual host answered the request. nginx -T will show the loaded server_name and matching config, which helps avoid changing the wrong site.
Verification check:
- failing upload returns HTTP 413
- Nginx error log shows body-size rejection
- Django app logs do not show the request reaching the application
- you know which Nginx server block handled the request
2. Find the active Nginx config file
You need to edit the file that is actually loaded for the affected site.
Inspect the full active config:
sudo nginx -T
List common site directories:
ls -lah /etc/nginx/sites-enabled/
ls -lah /etc/nginx/sites-available/
Typical Ubuntu-style setups load a site file from sites-enabled via symlink. In other layouts, the server block may be in conf.d/*.conf or directly inside nginx.conf.
client_max_body_size is commonly configured at the http, server, or location level. Use the narrowest scope that matches your requirement. Avoid defining conflicting values in multiple places unless you are doing it deliberately.
If your site serves uploads over HTTPS, make sure you update the active server block handling listen 443 ssl; as well, not just the port 80 redirect vhost.
Verification check:
- you know which server block handles the domain
- you know whether a more specific location block overrides a broader setting
- you have identified the active HTTP or HTTPS vhost that receives uploads
3. Apply the Nginx 413 Django fix with client_max_body_size
Before editing, back up the current config:
sudo cp /etc/nginx/sites-available/example.conf /etc/nginx/sites-available/example.conf.bak
Option A: set the limit for one Django site
Example server block:
server {
    listen 80;
    server_name example.com www.example.com;
    client_max_body_size 25M;

    location /static/ {
        alias /srv/app/current/staticfiles/;
    }

    location /media/ {
        alias /srv/app/current/media/;
    }

    location / {
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Option B: set a different limit for a specific upload endpoint
If only /api/uploads/ needs a larger limit, scope it there:
server {
    listen 80;
    server_name example.com;

    location /api/uploads/ {
        client_max_body_size 50M;
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        client_max_body_size 10M;
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Choose a value based on actual application requirements. If users upload documents up to 20 MB, set a limit slightly above that, not 500 MB by default.
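One way to make that sizing decision repeatable is a small helper that adds modest headroom above the largest expected file; multipart encoding adds some overhead on top of the raw file size, so a little slack avoids rejecting legitimate uploads. This is a hypothetical sketch, not part of any Nginx or Django tooling:

```python
# Hypothetical helper: suggest a client_max_body_size with headroom
# above the largest expected upload, rounded up to a whole number of MB.
import math

def suggest_limit_mb(max_file_mb: float, headroom: float = 1.25) -> int:
    return math.ceil(max_file_mb * headroom)

print(suggest_limit_mb(20))  # 25 -> client_max_body_size 25M;
```

For the 20 MB document example above, this yields a 25M limit rather than an arbitrarily large one.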
Verification check:
- configured limit matches real upload requirements
- no broader or narrower block unintentionally overrides it
- the active TLS vhost has the same intended limit if uploads terminate on 443
4. Test and reload Nginx safely
Validate syntax before reload:
sudo nginx -t
If valid, reload without dropping active connections:
sudo systemctl reload nginx
If the config test fails, do not reload. Restore the backup and test again:
sudo cp /etc/nginx/sites-available/example.conf.bak /etc/nginx/sites-available/example.conf
sudo nginx -t
sudo systemctl reload nginx
If uploads start failing in production after the change, revert to the previous known-good config, reload Nginx, and re-test with a small file. A size-limit change is low risk, but rollback should still be explicit and fast.
Verification check:
- nginx -t reports syntax is ok and test is successful
- reload completes without service failure
- systemctl status nginx remains healthy
- you can restore the previous config quickly if needed
5. Check the full Django upload path
Fixing Nginx is only part of the path.
Django settings that affect uploads
Review upload-related settings if you handle larger files:
FILE_UPLOAD_MAX_MEMORY_SIZE = 2621440 # files above this size are streamed to a temp file instead of kept entirely in memory
DATA_UPLOAD_MAX_MEMORY_SIZE = 10485760 # limits how much non-file request data Django will read into memory
These do not replace the Nginx limit, and they are not a direct substitute for an end-to-end upload size policy. They affect how Django handles request data after the request reaches the app.
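Raw byte values like those above are easy to misread during review. An equivalent, more auditable way to write the same settings:

```python
# settings.py: the same byte limits written as readable expressions
MB = 1024 * 1024

FILE_UPLOAD_MAX_MEMORY_SIZE = int(2.5 * MB)  # 2621440 bytes
DATA_UPLOAD_MAX_MEMORY_SIZE = 10 * MB        # 10485760 bytes
```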
Also verify:
- the upload form and view allow the file type and size
- media storage is configured correctly
- the destination filesystem or object storage is writable
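App-level size validation is worth keeping even with a correct proxy limit, so users get a clear error rather than a bare 413. A minimal sketch of the check logic; in a real Django view this would run against request.FILES["file"].size, and the constant name here is an assumption:

```python
# Hypothetical app-level check: keep MAX_UPLOAD_BYTES in sync with the
# client_max_body_size configured in Nginx for this endpoint.
MAX_UPLOAD_BYTES = 25 * 1024 * 1024

def validate_upload_size(size_bytes: int) -> None:
    if size_bytes > MAX_UPLOAD_BYTES:
        raise ValueError(
            f"File too large: {size_bytes} bytes exceeds {MAX_UPLOAD_BYTES}"
        )

validate_upload_size(20 * 1024 * 1024)  # under the limit, no error
```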
Gunicorn or Uvicorn considerations
Gunicorn is usually not the component generating the 413 in this setup, but larger uploads may expose:
- worker timeouts
- slow request handling
- temporary file pressure
Review your systemd unit or process manager config if uploads are slow or fail later:
sudo systemctl cat gunicorn
Container, ingress, load balancer, or platform limits
If you run behind another proxy layer, Nginx may no longer be the only limit. Check for:
- Kubernetes ingress body-size settings
- cloud load balancer request size restrictions
- CDN or WAF upload limits
- containerized Nginx config generated from environment variables
Temporary file storage and disk space
Large request bodies may be buffered to disk by Nginx before they reach Django. That means accepted uploads can still fail if temp storage is full or I/O is under pressure.
Verify free space:
df -h
If you use a custom temp path, confirm it exists and has capacity. Low disk space can turn a 413 fix into a different upload failure later in the request path.
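The same check can be scripted for monitoring. This sketch uses Python's standard library; the path is an assumption, since the client body temp location varies by build and distribution:

```python
# Report free space on the filesystem that holds Nginx's temp buffers.
# Replace "/" with your actual client_body_temp path if it is on a
# separate filesystem.
import shutil

usage = shutil.disk_usage("/")
free_gb = usage.free / (1024 ** 3)
print(f"free: {free_gb:.1f} GiB of {usage.total / (1024 ** 3):.1f} GiB")
```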
Verification check:
- Django accepts the request after Nginx
- upload storage works
- disk space and temp areas are adequate
- no upstream or platform layer still enforces a lower limit
6. Verify the fix end to end
Test with three file sizes:
- a small file that should always pass
- a file near the configured limit that should pass
- a file above the limit that should fail predictably
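Test files of exactly those sizes can be generated without filling them with data. This sketch assumes the 25M limit from the earlier example; adjust the sizes to your configured value:

```python
# Generate three sparse test files around an assumed 25M limit:
# clearly under, just under, and just over.
import os

MB = 1024 * 1024
sizes = {
    "test-small.bin": 1 * MB,        # should always pass
    "test-near-limit.bin": 24 * MB,  # should pass
    "test-over-limit.bin": 26 * MB,  # should fail with 413
}

for name, size in sizes.items():
    with open(name, "wb") as f:
        f.truncate(size)  # sparse file of the exact size, no data written
```

Each file can then be posted with curl against the upload endpoint.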
You can test through the application UI or with curl for a multipart endpoint:
curl -i -X POST \
-F "file=@./test-upload-20mb.bin" \
https://example.com/api/uploads/
After testing, review logs again:
sudo tail -n 50 /var/log/nginx/error.log
sudo tail -n 50 /var/log/nginx/access.log
Also confirm Django actually received and stored the file successfully.
Document the chosen max upload size in deployment notes or the repo’s ops documentation.
Explanation
The client_max_body_size directive controls how much request body data Nginx will accept. If the upload exceeds that value, Nginx returns 413 before proxying the request upstream. That is why Django often appears uninvolved.
Using a site-level limit is simplest when the whole app has one upload policy. A location-level limit is better when only a few endpoints need larger files. This keeps the rest of the app on stricter defaults.
Do not set client_max_body_size 0 unless you fully understand the consequences. Unlimited request bodies increase the risk of:
- denial-of-service through oversized uploads
- disk exhaustion from temp files
- long-running requests tying up workers and network resources
Restrict upload endpoints where possible:
- require authentication
- validate file types and sizes in Django
- add request logging and alerting for repeated 413s or unusually large requests
- review request timeout settings if uploads happen over slow links
When to turn this into a reusable template
If you manage more than one Django service, upload-size changes become repetitive and error-prone. This is a good candidate for a version-controlled Nginx template plus a small deploy script that backs up config, renders environment-specific limits, runs nginx -t, reloads only on success, and performs an upload smoke test.
Edge cases / notes
- Another proxy still blocks uploads: A CDN, ingress, or cloud edge proxy may still return 413 even after Nginx is updated.
- Wrong file edited: Editing sites-available without the matching symlink, or editing an unused config file, will have no effect.
- Nginx not reloaded: The new setting does nothing until reload succeeds.
- Failure moves downstream: After fixing Nginx, uploads may then fail in Django validation, storage backend writes, or app server timeouts.
- Accepted uploads may still use temp disk: Increasing the limit can shift pressure to Nginx temp storage and disk I/O.
- Static and media are separate concerns: This fix affects incoming request bodies, not serving static files or media URLs directly.
- TLS and proxy headers: Keep your existing proxy_set_header values intact when editing location blocks so Django still receives correct host and scheme information.
Internal links
- For background, see How Django file uploads work in production behind Nginx and Gunicorn.
- If your base stack is not stable yet, start with Deploy Django with Gunicorn and Nginx on Ubuntu.
- If uploaded files are accepted but not served correctly, review Configure Django media file handling in production.
- If requests fail at the proxy layer in other ways, use Debug Django 502 Bad Gateway errors with Nginx and Gunicorn.
FAQ
Should I set client_max_body_size 0 for Django uploads?
Usually no. 0 disables the limit, which increases abuse and resource exhaustion risk. Set a deliberate maximum based on your real file-size requirement.
Why do small uploads work but large files return 413?
Because Nginx accepts request bodies up to its configured limit and rejects anything larger. Small files stay under the threshold; larger ones do not.
Do I need to change Django settings as well as Nginx?
Sometimes. Nginx must allow the request first, but Django settings, validation logic, storage backends, temp storage, and app server timeouts may still affect whether the upload succeeds end to end.
Can I set different upload limits per endpoint?
Yes. You can define client_max_body_size inside a specific location block, such as /api/uploads/, while keeping stricter limits elsewhere.
What should I check if 413 still happens after updating Nginx?
Check these in order:
- confirm the correct Nginx config file was edited
- confirm nginx -t passed and reload happened
- confirm no other location, server, or broader config block overrides the value
- check for another reverse proxy, ingress, CDN, or load balancer limit
- inspect logs again to see which component now returns the 413