The advantage of giving control to JHP, as opposed
to a shell script, is that the application could check
CONTENT_LENGTH, judge whether the upload is too large,
and abort processing early. The limit itself would be
controlled by the application, based on its particular
requirements and configuration.
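As a rough sketch of that idea in CGI terms (the MAX_UPLOAD
value and the temp-file name here are invented for
illustration; only CONTENT_LENGTH comes from the CGI
environment):

```shell
#!/bin/bash
# Hypothetical application-side upload cap. The web server sets
# CONTENT_LENGTH for CGI; MAX_UPLOAD is an assumed, app-chosen limit.
MAX_UPLOAD=$((4 * 1024 * 1024))   # e.g. 4 MB

too_large() {
    # succeeds (exit 0) when the declared body exceeds the cap
    [ "${CONTENT_LENGTH:-0}" -gt "$MAX_UPLOAD" ]
}

if too_large; then
    # refuse before reading any of the body from stdin
    printf 'Status: 413 Request Entity Too Large\r\n\r\n'
    exit 0
fi

# Otherwise read exactly CONTENT_LENGTH bytes, per the CGI convention.
head -c "${CONTENT_LENGTH:-0}" > "/tmp/fdat$$"
```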

It seems safe to assume that uploads should fit in
memory, i.e. a few megabytes; anything larger is not
really feasible as an HTTP upload anyway.

The stdin implementation for Windows is done in J,
so I hope it's just a matter of getting the read loop
and result codes right.

What were those limitations for Unix exactly?



--- Joey K Tuttle <[EMAIL PROTECTED]> wrote:

> In my description of the upload facility I'm using, I should
> have noted that one of the problems I ran into was that large
> files did not work directly, because of buffer limitations in
> j's implementation of stdin... So my upload form invoked this CGI
> 
> #!/bin/bash
> tf=/tmp/fdat$$
> cat > $tf
> ./pfdat $tf "$REMOTE_ADDR" "$HTTP_USER_AGENT" "$$"
> 
> where the last line is the call to a j #! script "pfdat", which
> did the other things I described in my previous message. This
> was a very good workaround for the stdin buffering issues.
> 
> - joey
> 
> 
> At 09:21  -0700 2007/05/30, Oleg Kobchenko wrote:
> >
> >See if that works. I noticed a problem that when calling
> >stdin'' not all content is loaded on larger files.
> >



----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
