We are building a web site that will accept uploads of very big files;
1-2 GB is average. The spooler works fine, but the problem comes after
the upload: with a large number of users, all my virtual memory gets
exhausted very quickly by mmap, and after the upload I have to copy the
files somewhere else, because they are temporary and already unlinked.

This makes the whole spooling thing useless on its own. I think it
needs to be extended so that I can somehow tell the server that uploads
to certain URLs should go into normal files, not mmapped, and without
parsing the multipart data at all. For files this big, it is better to
do that offline than on the web server. Something along the lines of
the sketch below is what I have in mind.
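A rough illustration of the configuration interface I am asking for.
The ns_section/ns_param layout follows the usual NaviServer config
style, but the "spooler" section and all of its parameter names here
are invented, just to show the idea:

    # Hypothetical sketch -- this section and these parameters do not
    # exist today; it only illustrates the proposed interface.
    set server "server1"

    ns_section "ns/server/${server}/spooler"
    ns_param   spoolurl    "/upload/*"          ;# uploads matching this URL pattern
                                                ;# are written straight to a plain file
    ns_param   uploadpath  "/var/spool/uploads" ;# directory for the spooled files
    ns_param   keepfiles   true                 ;# do not unlink after the request ends
    ns_param   parseform   false                ;# skip multipart parsing and mmap entirely

The registered request proc would then just receive the path to the
spooled file (via something like [ns_conn contentfile], say) and could
queue it for offline processing instead of touching the content itself.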

