Aden,

Sorry man, I cannot follow your argument. Your words are a complete mess.

This is just a simple buffer. Your application will not notice whether
the buffer size is 4096 or 4560 bytes. You won't see any noticeable
difference at all as long as the buffer size is larger than a TCP
packet (i.e. the number of transmitted packets stays minimal). If you
still have doubts, please do performance measurements and submit data
that proves the claims you make.
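To make this concrete, here is a minimal sketch (plain java.io, nothing
HttpClient-specific; the class name and buffer constant are just
illustrative): the copy loop is written exactly the same way whether
the buffer is 4096 or 4560 bytes, and the bytes that come out are
identical either way.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch: copy everything from `in` to `out`. The buffer size only
// affects how many read() calls are made, never which data arrives.
public class CopyExample {
    static final int BUFFER_SIZE = 4096; // 4560 would behave the same

    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
    }
}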

Ortwin Glück

S. Aden wrote:
Correct me if I am wrong, but if a client sends 4096 bytes and the
server only reads in 1024-byte chunks, there will be 3072
(4096 - 1024 = 3072) bytes left in the server's request input stream.
The remaining 3072 bytes have to be read before the next chunk of data
the client sent becomes readable from the server's request input
stream. This is inefficient in that, if you really wanted to send data
in 4096-byte chunks and read it in 4096-byte chunks, you'd have to
buffer the server's request input stream yourself.
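A rough sketch of that scenario, assuming a plain java.io InputStream
and illustrative names: the leftover 3072 bytes are not lost, they are
simply returned by subsequent read() calls, and collecting a full
4096-byte chunk is a short loop.

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch: block until exactly `len` bytes have been read, or fail if
// the stream ends first. However the server buffers internally (1024
// bytes at a time or otherwise), the loop reassembles the full chunk.
public class ChunkReader {
    static void readExactly(InputStream in, byte[] dest, int len)
            throws IOException {
        int total = 0;
        while (total < len) {
            int n = in.read(dest, total, len - total);
            if (n == -1) {
                throw new EOFException("stream ended after " + total + " bytes");
            }
            total += n;
        }
    }
}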

My current objective is to send Base64-encoded content, and I need to
send it in 4560-byte chunks. The reason for doing this is that
decoding 4560 bytes at a time produces the expected result. Basically,
I don't want to buffer the content just to make sure I have the right
number of bytes before I do a Base64 decode operation.
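A sketch of one way around the fixed chunk size, assuming a Java 8+
runtime (java.util.Base64; older code could use commons-codec's
Base64InputStream instead) and an illustrative method name: wrap the
raw request stream in a decoding stream and let it deal with chunk
boundaries, rather than relying on every chunk being exactly 4560
bytes (a multiple of 4 Base64 characters).

import java.io.InputStream;
import java.util.Base64;

// Sketch: the wrapping decoder handles partial Base64 groups across
// read boundaries, so the transport chunk size no longer matters.
public class Base64Read {
    static InputStream decoded(InputStream rawRequestBody) {
        // getMimeDecoder() tolerates line breaks in the encoded data.
        return Base64.getMimeDecoder().wrap(rawRequestBody);
    }
}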

-Aden
