--- "Sherlock, Ric" <[EMAIL PROTECTED]> wrote:

> Thanks Oleg, your solution is great, I find it very instructive to see
> how someone else approaches a problem that I've struggled with myself.

The difference is mostly consistency in coding conventions: naming variables,
interface signatures per http://www.jsoftware.com/jwiki/CGI/API , using
regex, etc. Getting a working prototype is harder than cleaning one up,
so your input was valuable.
Please make sure that the current JHP code is not missing any functionality
that was in your code (except temp files).

(One design decision: if the request is not multipart, the file name is
passed as the parameter value. In current JHP, the file name is in qparamFile
and the content is in qparam. Should these be swapped for consistency?
However, qparamFile is a good way to test whether a parameter is a file,
whereas the content may legitimately be empty--one rationale for keeping
the name in qparamFile.)

The very latest JHP update contains a fix for stdin_w32 and an upload limit
of 1MB in the examples script. I discovered a funny thing about IIS stdin
for CGI: it requires reading stdin all the way through, even if you decide
the body exceeds the limit and you show an error; otherwise it drops the
connection.
Currently this is not addressed; I was not able to confirm by googling how
to address it, except by reading stdin through without storing it (copy to
null, basically).
There is no such problem on Unix, but that was only tested with 2-3MB.

> > [mailto:[EMAIL PROTECTED] On Behalf Of Oleg Kobchenko
> > It seems that it could be safely assumed that upload sizes 
> > should fit in memory, i.e. a few megabytes, anything larger 
> > not feasible for HTTP upload at all.
> 
> It seems to me (someone with no in-depth knowledge of the area) that HTTP
> in general must be able to work with files bigger than "a few megabytes"
> given that many HTTP downloads of software are 50MB+. Having said that,
> 50MB should still fit in the memory of most servers, as long as many
> people are not uploading at once.

At some point there should be a way to configure the temp-file mode:
either externally, as in Joey's solution, or by JHP itself. But the
in-memory mode would still be a (default) option for a number of reasons:
it is simple, it does not require temp-folder configuration and deletion
on exit, and it works for most a-few-megabytes cases.
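One way the two modes can be combined, shown here only as a Python sketch of the idea (Python's standard tempfile.SpooledTemporaryFile, not anything in JHP): keep the upload in memory until it passes a configurable threshold, then transparently roll over to a real temp file that is deleted on close.

```python
import tempfile

def open_upload_buffer(max_in_memory=1 * 1024 * 1024):
    # In-memory below max_in_memory bytes; spills to a self-deleting
    # temp file beyond that, so small uploads need no temp-folder setup.
    return tempfile.SpooledTemporaryFile(max_size=max_in_memory)

buf = open_upload_buffer(max_in_memory=16)
buf.write(b"x" * 32)  # past the threshold: rolled over to disk
buf.seek(0)
```

With a threshold of a few megabytes, the common case behaves exactly like the current in-memory mode.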

> Is there a difference between upload & download in this regard, or is it
> just because downloads are 1 computer to many computers and uploads are
> potentially many computers to 1?

Often HTTP capacity is asymmetrical, typically for clients but sometimes
for servers as well. Besides the one-to-many issue, there is the issue of
a controlled environment: users make a conscious decision for a particular
download, while it is harder for a server to discern many unpredictable
client intentions.

I have come across many HTTP upload limits; e.g., hosting providers,
web-based email, etc. typically limit a single upload to around 10MB.
In addition, it is common practice to recommend FTP, or rsync over ssh,
etc., as more suitable protocols for large upload volumes.

> Am I right in saying that the decision to maintain the
> multipart/form-data in memory, rather than store it, as it is received
> un-parsed, directly to disk, is a design decision that has pros and
> cons. One downside is that dealing with (many) simultaneous big uploads
> may become a problem once server memory is exhausted?

Yes, it should be the responsibility of the concrete web server
configuration to decide which method best suits the particular capacity,
purpose, users, etc.



----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm