Thanks Oleg, your solution is great; I find it very instructive to see
how someone else approaches a problem that I've struggled with myself.

> [mailto:[EMAIL PROTECTED] On Behalf Of Oleg Kobchenko
> It seems that it could be safely assumed that upload sizes 
> should fit in memory, i.e. a few megabytes, anything larger 
> not feasible for HTTP upload at all.

It seems to me (someone with no in-depth knowledge of the area) that HTTP
in general must be able to handle files larger than "a few megabytes",
given that many HTTP downloads of software are 50MB+. Having said that,
50MB should still fit in the memory of most servers, as long as not many
people are uploading at once.

Is there a difference between upload & download in this regard, or is it
just because downloads are 1 computer to many computers and uploads are
potentially many computers to 1?

Am I right in saying that the decision to keep the multipart/form-data
in memory, rather than writing it un-parsed to disk as it is received,
is a design decision with pros and cons? One downside is that handling
many simultaneous large uploads may become a problem once server memory
is exhausted.
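To make the trade-off concrete, here is a minimal sketch (in Python, since the handler under discussion is in J and no code appears in this thread) of the streaming alternative: copying the incoming request body to disk in fixed-size chunks, so peak memory per upload is one chunk rather than the whole body. The names `stream_to_disk` and `CHUNK` are my own for illustration, not from Oleg's solution.

```python
import io
import os
import tempfile

CHUNK = 64 * 1024  # read at most 64 KiB of the upload at a time

def stream_to_disk(src, dst_path, chunk_size=CHUNK):
    """Copy an incoming byte stream to dst_path in fixed-size chunks.

    Peak memory use is one chunk regardless of upload size, unlike
    buffering the whole multipart body in memory before parsing it.
    Returns the total number of bytes written.
    """
    total = 0
    with open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:          # empty read means end of stream
                break
            dst.write(chunk)
            total += len(chunk)
    return total

# Simulate a 5 MB upload arriving over a socket-like stream.
payload = os.urandom(5 * 1024 * 1024)
fd, path = tempfile.mkstemp()
os.close(fd)
written = stream_to_disk(io.BytesIO(payload), path)
```

With the in-memory approach, N simultaneous 50MB uploads cost N x 50MB of server memory; with the streaming approach above, they cost N x 64KiB plus disk space, at the price of an extra parse pass over the file afterwards.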

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
