On 12/2/11 2:02 PM, Chad Beaulac wrote:
I think I'm not communicating my thoughts well enough.
Well, I hope I have understood what you said, at least :)

  A single algorithm
can handle large data pipes and provide extremely low latency for variable,
small and large message sizes at the same time.
AFAIU, using a big buffer doesn't put any extra strain on small messages: the buffer will only contain a few useful bytes, and that's it. In any case, this buffer won't be allocated every time we read from the channel, it's just a container. But a big buffer is way better when dealing with big messages, because you get fewer round trips between the read and the processing. The condition, as you said, is that you don't keep reading the channel until there are no more bytes to read. You read *once*, take what you get, and feed the processing part of your application with these bytes.
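
Just to make sure we are talking about the same thing, here is a minimal sketch of that read strategy, assuming a plain java.nio selector loop (the handleRead/process names are just illustrative, not an existing API):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class ReadHandler {
    // One big reusable buffer (64 KB): it's just a container, never reallocated per read.
    private final ByteBuffer readBuffer = ByteBuffer.allocateDirect(64 * 1024);

    void handleRead(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        readBuffer.clear();

        // Read *once*: take whatever is available right now, then return to the
        // selector. If more data is pending, OP_READ will fire again on the next loop.
        int n = channel.read(readBuffer);
        if (n < 0) {
            key.cancel();
            channel.close();
            return;
        }
        if (n > 0) {
            readBuffer.flip();
            process(readBuffer); // hand the bytes over to the processing part of the application
        }
    }

    void process(ByteBuffer data) {
        // application-specific decoding/dispatch
    }
}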

The write side has exactly the same kind of issue, as you said: don't pound the channel, give the other channels the opportunity to be written too...

On the Producer side:
Application code should determine the block sizes that are pushed onto the
output queue. Logic would be as previously stated:
- write until there's nothing left to write, unregister for the write
event, return to event processing
This is what we do. I'm afraid it may be a bit annoying for the other sessions waiting to send data. At some point, it could be better to write only a limited number of bytes, then give back control to the selector, and be woken up when the selector sets the OP_WRITE flag again (which will happen during the next loop anyway, or maybe a later one).
- write until the channel is congestion controlled, stay registered for
write event, return to event processing

And what about a third option: write until the buffer we have prepared is empty, even if the channel is not full? That means that even if the producer has prepared a, say, 1 MB block of data to write, it will be written in 16 blocks of 64 KB, even if the channel can absorb more.

Does it make sense?
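
Something like this rough sketch, to be concrete (illustrative names only, and the per-event quota value is an assumption): the handler writes from a per-session queue and stops either when the queue is empty, when the channel stops accepting bytes, or when the quota (the "third option") is reached, so one busy session cannot monopolize the selector loop.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;

class WriteHandler {
    private static final int MAX_BYTES_PER_EVENT = 64 * 1024; // hypothetical quota

    private final Queue<ByteBuffer> outputQueue = new ArrayDeque<>();

    void handleWrite(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        int written = 0;

        while (!outputQueue.isEmpty() && written < MAX_BYTES_PER_EVENT) {
            ByteBuffer buffer = outputQueue.peek();
            written += channel.write(buffer);

            if (buffer.hasRemaining()) {
                // Channel is congestion controlled: stay registered for OP_WRITE
                // and let the selector tell us when to try again.
                return;
            }
            outputQueue.poll(); // buffer fully written, move on to the next one
        }

        if (outputQueue.isEmpty()) {
            // Nothing left to write: unregister for the write event and go back
            // to event processing.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
        // Otherwise the quota was hit: keep OP_WRITE set, the next selector loop
        // will wake us up again, and other sessions get a chance in between.
    }
}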
This handles very low latency for 1K message blocks and ensures optimal
usage of a socket for large data blocks.

On the Consumer side:
64K non-blocking read of channel when read selector fires. Don't read until
there's nothing left to read. Let the Selector tell you when it's time to
read again.
Read you. Totally agree.


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com
