That is right for different origins. For streaming the same file you could consider sending one chunk to all connections before proceeding, but I guess that is not your intention. IMHO it depends on the data you are sending. If the chunk size is e.g. 1k per request, you will need that much memory per request. That does not seem like too much, even for 100,000 users, especially when you are already keeping per-user state for login and so on.
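As a back-of-the-envelope check: 1 KB of buffer per request times 100,000 concurrent requests comes to roughly 100 MB in total, which is in the same order as the session state such a user base needs anyway.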
On 24 Feb 2004, at 14:39, Ortwin Glück wrote:
John Keyes wrote:
In both cases, it is possible to get the behavior that you desire.
No it is not. Again, think of XXX,000 requests.
I am getting a little angry by now. C'mon man, we wrote this baby and we know very well what's possible with it.
Ortwin, if I am wrong just correct me, maybe I just can't explain myself properly.
So please don't tell us it cannot do unbuffered requests. It's as simple as:
InputStream dataStream = .... // get this from wherever, use a pipe or something
PostMethod method = new PostMethod("/myservlet");
method.setRequestContentLength(EntityEnclosingMethod.CONTENT_LENGTH_CHUNKED);
method.setRequestBody(dataStream);
client.execute(method);
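To make the "use a pipe or something" comment concrete: a producer thread can write into a PipedOutputStream while HttpClient reads the connected PipedInputStream during execute(), so only the pipe's internal buffer (1 KB by default) is held per request. A minimal sketch, with a dummy producer standing in for the real data source:

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

final PipedOutputStream producer = new PipedOutputStream();
PipedInputStream dataStream = new PipedInputStream(producer);

new Thread(() -> {
    try {
        byte[] buf = new byte[1024];
        for (int i = 0; i < 100; i++) { // stand-in for the real data source
            producer.write(buf);        // blocks while HttpClient catches up
        }
        producer.close();               // signals the end of the request body
    } catch (IOException ignored) {
    }
}).start();

// now hand dataStream to method.setRequestBody(dataStream) as above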
My point here is that if I have X requests then there can be X * CONTENT_LENGTH_CHUNKED bytes in memory at one time.
-John K
Smaller chunks will require less memory but will increase network traffic - not always a good idea.
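For scale: HTTP/1.1 chunked encoding only adds the chunk size in hex plus two CRLFs per chunk, so 1k chunks cost well under 1% framing overhead, while 64-byte chunks already cost around 10% - and each small write may also end up in its own TCP packet, which is the larger cost in practice.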
Alternatively, you may build your own specialized data handling that sends to x connections at a time before fetching new data, while the others wait (1k for each of the first 100 users, then 1k for the next 100); see the sketch below. Whether that pays off depends on the time needed to prepare and manage it all.
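One way to approximate that scheme, as a sketch only (the class and its names are made up for illustration): a fixed pool of worker threads fans each 1k chunk of the source out to at most 100 connections at a time, so memory stays at about one chunk regardless of the number of users.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class BatchedBroadcaster {
    private static final int CHUNK = 1024; // 1k per request, as discussed
    private static final int BATCH = 100;  // connections served at a time

    static void broadcast(InputStream source, List<OutputStream> clients)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(BATCH);
        final byte[] chunk = new byte[CHUNK];
        int read;
        while ((read = source.read(chunk)) > 0) {
            final int n = read;
            List<Future<?>> pending = new ArrayList<>();
            for (final OutputStream out : clients) {
                // the pool caps us at BATCH concurrent writes; all writers
                // share the same chunk buffer, which is read-only here
                pending.add(pool.submit(() -> {
                    try {
                        out.write(chunk, 0, n);
                    } catch (IOException e) {
                        // a slow or dead client should be dropped here
                    }
                }));
            }
            for (Future<?> f : pending) {
                f.get(); // finish the whole round before reading the next chunk
            }
        }
        pool.shutdown();
    }
}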
Regards, Stefan Dingfelder