On Fri, Dec 2, 2011 at 10:19 AM, Emmanuel Lécharny <[email protected]> wrote:
> On 12/2/11 2:02 PM, Chad Beaulac wrote:
>
>> I think I'm not communicating my thoughts well enough.
>
> Well, I hope I have understood what you said, at least :)
>
>> A single algorithm can handle large data pipes and provide extremely low
>> latency for variable, small and large message sizes at the same time.
>
> AFAIU, it's not because you use a big buffer that you will put some strain
> when dealing with small messages: the buffer will only contain a few useful
> bytes, and that's it. In any case, this buffer won't be allocated every time
> we read from the channel, so it's just a container. But it's way better to
> have a big buffer when dealing with big messages, because then you'll have
> fewer round trips between the read and the processing. But the condition, as
> you said, is that you don't read the channel until there are no more bytes
> to read. You just read *once*, get what you get, and go feed the processing
> part of your application with these bytes.
>
> The write has exactly the same kind of issue, as you said: don't pound the
> channel, give the other channels the opportunity to be written too...

The write has the same sort of issue, but it can be handled more optimally in
a different manner. The use case is slightly different because it's the client
producer code driving the algorithm instead of the Selector.

Producer side:
- Use a queue of ByteBuffers as a send queue.
- When the selector says the channel is writable, lock the queue and loop over
it, sending until SocketChannel.write(ByteBuffer src) returns less than
src.remaining(), returns 0, or throws an exception.
- This is a fair algorithm when dealing with multiple selectors because the
time the sending thread spends inside the "send" method is bounded by how much
data is in the outputQueue, and nothing can put data into the queue while the
queue is being drained to send data out.
Consumer side:
- Use a 64 KB ByteBuffer as a container to receive data into.
- Only call SocketChannel.read(inputBuffer) once for the channel that's ready
to read.
- Create a new ByteBuffer sized to the number of bytes read. Copy the
inputBuffer into the new ByteBuffer. Give the new ByteBuffer to the session to
process. Rewind the input ByteBuffer.

An alternative to creating a new ByteBuffer for each read is to allow client
code to specify a custom ByteBuffer factory. This lets client code
pre-allocate memory and build a ring buffer or something like that.

I use these algorithms in C++ (using ACE - the Adaptive Communication
Environment) and Java. The algorithm is basically the same in both languages
and handles protocols with a lot of small messages, variable message sizes,
and large data block sizes.

>> On the Producer side:
>> Application code should determine the block sizes that are pushed onto the
>> output queue. Logic would be as previously stated:
>> - write until there's nothing left to write, unregister for the write
>> event, return to event processing
>
> This is what we do. I'm afraid that it may be a bit annoying for the other
> sessions waiting to send data. At some point, it could be better to write
> only a limited number of bytes, then give back control to the selector, and
> be awakened when the selector sets the OP_WRITE flag again (which will be
> during the next loop anyway, or maybe a later one).
>
>> - write until the channel is congestion controlled, stay registered for
>> the write event, return to event processing
>
> And what about a third option: write until the buffer we have prepared is
> empty, even if the channel is not full? That means even if the producer has
> prepared a, say, 1 MB block of data to write, it will be written in 16
> blocks of 64 KB, even if the channel can absorb more.
>
> Does it make sense?

No, that doesn't make sense to me. Let the TCP layer handle optimizing how
large chunks of data are handled.
If the client puts a ByteBuffer of 1 MB or 20 MB or whatever onto the
outputQueue, call SocketChannel.write(outputByteBuffer). Don't chunk it up.

>> This handles very low latency for 1K message blocks and ensures optimal
>> usage of a socket for large data blocks.
>>
>> On the Consumer side:
>> 64K non-blocking read of the channel when the read selector fires. Don't
>> read until there's nothing left to read. Let the Selector tell you when
>> it's time to read again.
>
> Read you. Totally agree.
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
