That works, but there are a number of weaknesses I've run into:
1. Since $tempfile is on the system disk and $target is on a big RAID array, the move incurs a lot of extra disk I/O. Is there a way to tell CGI.pm or Apache::ASP to use a directory on the RAID array instead of /usr/tmp?
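On the temp-directory question: CGI.pm (via its CGITempFile helper) consults the TMPDIR environment variable before falling back to /usr/tmp, and it does this when the module is first compiled. So one sketch, assuming /raid/tmp is a writable directory on the array (the path is made up for this example):

```perl
# In a mod_perl startup.pl, or anywhere that runs before CGI.pm is
# compiled: point CGI.pm's temp-file search at the RAID array so the
# spool file and $target end up on the same filesystem, making the
# final move a cheap rename instead of a cross-device copy.
# /raid/tmp is a hypothetical path.
BEGIN { $ENV{TMPDIR} = '/raid/tmp' }
use CGI;
```

If CGI.pm has already been loaded by the time you can set the variable (easy to have happen under mod_perl), setting `$CGITempFile::TMPDIRECTORY` directly is reported to work, but that's an undocumented package variable, so treat it as a last resort.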
2. There seems to be no upload progress indicator, at least with Mozilla. Is there a way that I can insert some code that gets run right after the first HTTP header on the upload gets processed, so I can pick off the expected file size? If so, I could pop up a window with my own progress bar.
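On the progress question: recent CGI.pm releases (the 3.x line) accept an "upload hook" that CGI.pm calls after each buffer of the multipart body is read, with a cumulative byte count, which is enough to drive a server-side progress file your popup could poll. Whether the CGI.pm bundled with perl 5.8.0 supports this is an assumption; check your installed version. The progress-file path below is made up, and since Apache::ASP constructs the CGI object itself, wiring the hook in may mean patching its Request.pm:

```perl
use strict;
use warnings;

my $total = $ENV{CONTENT_LENGTH} || 0;   # expected size, from the request header

# Called by CGI.pm as track_progress($filename, $buffer, $bytes_read, $data);
# $bytes_read is cumulative for the upload.  Writes progress where a
# popup window's poll loop could find it, and returns the percentage.
sub track_progress {
    my ($filename, $buffer, $bytes_read, $data) = @_;
    my $pct = $total ? 100 * $bytes_read / $total : 0;
    if (open my $fh, '>', '/tmp/upload-progress.txt') {
        printf $fh "%d %d %.1f\n", $bytes_read, $total, $pct;
        close $fh;
    }
    return $pct;
}

# Wiring it up (hook-capable CGI.pm versions only):
# my $q = CGI->new(\&track_progress);
```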
3. Something in the chain is barfing if the file is too large. The limit seems to be just under 2GB. Here's the spewage I get in the log for a 1.99GB file:
[Fri Apr 30 20:47:31 2004] [error] [client 172.16.0.42] Malformed multipart POST
MultipartBuffer::read('MultipartBuffer=HASH(0x8b214ac)',0) called at (eval 109) line 55
MultipartBuffer::new('MultipartBuffer','CGI=HASH(0x8ba6b58)','---------------------------9040894219264',-2147483254,'undef') called at (eval 107) line 4
CGI::new_MultipartBuffer('CGI=HASH(0x8ba6b58)','---------------------------9040894219264',-2147483254,'undef') called at (eval 106) line 3
CGI::read_multipart('CGI=HASH(0x8ba6b58)','---------------------------9040894219264',-2147483254) called at /usr/lib/perl5/5.8.0/CGI.pm line 415
CGI::init('CGI=HASH(0x8ba6b58)','undef') called at /usr/lib/perl5/5.8.0/CGI.pm line 286
CGI::new('CGI') called at /usr/lib/perl5/site_perl/5.8.0/Apache/ASP/Request.pm line 81
Apache::ASP::Request::new('Apache::ASP=HASH(0x824ce84)') called at /usr/lib/perl5/site_perl/5.8.0/Apache/ASP.pm line 387
Apache::ASP::new('Apache::ASP','Apache::RequestRec=SCALAR(0x8b41f64)','/home/tangent/mms5/cli/mma-edit-title.asp') called at /usr/lib/perl5/site_perl/5.8.0/Apache/ASP.pm line 181
Apache::ASP::handler('Apache::RequestRec=SCALAR(0x8b41f64)') called at -e line 0
eval {...} called at -e line 0, referer: http://frank/mma-edit-title.asp?tid=97&sid=1
I'd guess some bit of code along the path is treating the Content-Length header as a signed 32-bit value. This is a Linux 2.4 system, so it can handle large files. What code is to blame here?
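The -2147483254 passed to MultipartBuffer::new in the trace fits that theory exactly. Reversing the wraparound gives a Content-Length of 2,147,484,042, just over 2**31 (that figure is reconstructed from the reported value, not taken from the log), and forcing it through a 32-bit signed integer reproduces the number in the trace:

```perl
use strict;
use warnings;

# pack 'L' / unpack 'l' are exactly 32-bit unsigned/signed, so this
# reinterprets the header value the way a signed 32-bit parser would.
my $content_length = 2_147_484_042;   # ~2 GB body plus multipart overhead
my $as_signed = unpack 'l', pack 'L', $content_length;
printf "%d -> %d\n", $content_length, $as_signed;
# prints: 2147484042 -> -2147483254
```

So whichever layer parses Content-Length into a plain C long (or a Perl int on a 32-bit build) is the likely culprit, independent of the kernel's large-file support.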