I ran into this a few months ago. I believe it was solved by adding the
following parameters (for a 150M limit).
In nginx:
server {
...
client_max_body_size 150M;
...
}
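After editing, it may help to validate the config and reload nginx so the
new limit takes effect (standard nginx commands, assuming the binary is on
your PATH):

nginx -t          # check that the config parses
nginx -s reload   # reload workers to pick up client_max_body_size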
In settings.py:
FILEBROWSER_MAX_UPLOAD_SIZE = 157286400
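If you prefer, let Python do the arithmetic in settings.py so the intent of
the magic number is obvious (same value, just computed):

FILEBROWSER_MAX_UPLOAD_SIZE = 150 * 1024 * 1024  # = 157286400 bytes (150 MiB)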
This was a while ago, but as I recall, the Flash uploader that filebrowser
uses has its own upload limit in addition to the one nginx imposes (nginx
rejects bodies over client_max_body_size with a 413), so both have to be
raised. Our upstream was uwsgi.
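If the upstream enforces its own body-size limit, that may need raising too.
A minimal sketch for uWSGI, assuming an ini-style config (limit-post is
uWSGI's body-size option; the file layout here is just illustrative):

; uwsgi.ini -- cap request bodies to match nginx
limit-post = 157286400   ; 150 MiB, same as client_max_body_size

Gunicorn, which the original post mentions, doesn't have a body-size limit
of its own as far as I know, so this step wouldn't apply there.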
K
On Wednesday, November 26, 2014 2:51:59 PM UTC-8, Billy Reynolds wrote:
>
> I'm having some issues with Django-Filebrowser, and from what I can tell
> they seem to be related to nginx. The two primary issues are that
> Django-Filebrowser fails to load directories containing large numbers of
> Amazon S3 files in the Mezzanine admin, and that I get an HTTP error when
> trying to upload large files (500 MB) to Amazon S3 through filebrowser. I
> have multiple directories with 400+ large audio files (several hundred MB
> each) hosted on S3; when I attempt to load them in the Mezzanine
> admin/media-library, my server returns an nginx 502 (Bad Gateway) error. I
> didn't have any issues with this until the directories started getting
> bigger. I can also upload normal-sized files (images, small audio files,
> etc.) without any issue; it's only when I try to upload large files that I
> get an error.
>
> It's probably worth noting a few things:
>
> 1. I only use Amazon S3 to serve the media files for the project, all
> static files are served locally through nginx.
> 2. All django-filebrowser functionality works correctly (with the
> exception of large file uploads) in directories that do actually load.
> 3. I created a test directory with 1000 small files and
> django-filebrowser loads the directory correctly.
> 4. For the nginx.conf settings listed below (proxy_buffer_size,
> proxy_connect_timeout, etc.), I've tested multiple values multiple times
> and can never get the pages to load consistently. Now that the
> directories are larger, I can't get them to load at all.
> 5. I've tried adding an additional location block to my nginx conf for
> "admin/media-library/" with increased timeouts and other settings, but
> nginx still did not load these large directories correctly.
>
> I believe my primary issue is with nginx or possibly gunicorn, as I have
> no trouble loading these directories or uploading large files in a local
> environment without nginx/gunicorn. My nginx error log throws the
> following error:
>
>> 2014/11/24 15:53:25 [error] 30816#0: *1 upstream prematurely closed
>> connection while reading response header from upstream, client:
>> xx.xxx.xxx.xxx, server: server, request: "GET /admin/media-library/browse/
>> HTTP/1.1", upstream: "http://127.0.0.1:8001/admin/media-library/browse/",
>> host: "server name", referrer: "https://example/admin/"
>
>
>
> I've researched that error, which led me to add these lines to my nginx
> conf file:
>
>> proxy_buffer_size 128k;
>> proxy_buffers 100 128k;
>> proxy_busy_buffers_size 256k;
>> proxy_connect_timeout 75s;
>> proxy_read_timeout 75s;
>> client_max_body_size 9999M;
>> keepalive_timeout 60s;
>
>
>
> Despite trying multiple nginx timeout configurations, I'm still stuck
> exactly where I started. My production server will not load large
> directories from Amazon S3 through django-filebrowser nor can I upload
> large files through django-filebrowser.
>
> Here are some other relevant lines from my settings and conf files.
>
> settings.py
>
>> DEFAULT_FILE_STORAGE = 's3utils.S3MediaStorage'
>> AWS_S3_SECURE_URLS = True # use https instead of http
>> AWS_QUERYSTRING_AUTH = False # don't add complex authentication-related
>> query parameters for requests
>> #AWS_PRELOAD_METADATA = True
>> AWS_S3_ACCESS_KEY_ID = 'key' # enter your access key id
>> AWS_S3_SECRET_ACCESS_KEY = 'secret key' # enter your secret access key
>> AWS_STORAGE_BUCKET_NAME = 'bucket'
>> AWS_S3_CUSTOM_DOMAIN = 's3.amazonaws.com/bucket'
>> S3_URL = 'https://s3.amazonaws.com/bucket/'
>> MEDIA_URL = S3_URL + 'media/'
>> MEDIA_ROOT = 'media/uploads/'
>> FILEBROWSER_DIRECTORY = 'uploads'
>
>
>
> /etc/nginx/sites-enabled/production.conf
>
> upstream example {
> server 127.0.0.1:8001;
> }
>
> server {
> listen 80;
> server_name www.example.com;
> rewrite ^(.*) http://example.com$1 permanent;
> }
>
> server {
>
> listen 80;
> listen 443 default ssl;
> server_name example.com;
> client_max_body_size 999M;
> keepalive_timeout 60;
>
> ssl on;
> ssl_certificate /etc/nginx/ssl/cert.crt;
> ssl_certificate_key /etc/nginx/ssl/key.key;
> ssl_session_cache shared:SSL:10m;
> ssl_session_timeout 10m;
> ssl_ciphers RC4:HIGH:!aNULL:!MD5;
> ssl_prefer_server_ciphers on;
>
> location / {
> proxy_redirect off;
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header X-Forwarded-Protocol $scheme;
> proxy_pass http://example;
> add_header X-Frame-Options "SAMEORIGIN";
> proxy_buffer_size 128k;
> proxy_buffers 100 128k;
> proxy_busy_buffers_size 256k;
> proxy_connect_timeout 75s;
> proxy_read_timeout 75s;
> client_max_body_size 9999M;
> keepalive_timeout 60s;
> }
>
> location /static/ {
> root /path/to/static;
> }
>
> location /robots.txt {
> root /path/to/robots;
> access_log off;
> log_not_found off;
> }
>
> location /favicon.ico {
> root /path/to/favicon;
> access_log off;
> log_not_found off;
> }
>
> }
>
> Is this even an nginx issue? If so, does anyone have any suggestions for
> resolving this error? If not, what am I missing that would cause timeouts
> only on these large directories/large file uploads?
>
> Is there a better way to approach this problem than my current setup?
>
> Any help would be greatly appreciated.
>
> Thanks
>