Hi guys,

yesterday I committed some changes that make the NioSelectorProcessor use the IoBuffer class instead of a single buffer to store the incoming data. Here is a snippet of the changed code:

    int readCount = 0;
    IoBuffer ioBuffer = session.getIoBuffer();

    do {
        ByteBuffer readBuffer = ByteBuffer.allocate(1024);
        readCount = channel.read(readBuffer);
        LOGGER.debug("read {} bytes", readCount);

        if (readCount < 0) {
            // session closed by the remote peer
            LOGGER.debug("session closed by the remote peer");
            sessionsToClose.add(session);
            break;
        } else if (readCount > 0) {
            readBuffer.flip();
            ioBuffer.add(readBuffer);
        }
    } while (readCount > 0);

    // we have read some data
    // limit at the current position & rewind buffer back to start & push to the chain
    session.getFilterChain().processMessageReceived(session, ioBuffer);

As you can see, instead of reading one buffer and calling the chain, we gather as much data as we can (i.e. as much as the channel can provide) and then call the chain. This has one major advantage: we don't call the chain many times when the data is bigger than the buffer size (currently set to 1024 bytes), and as a side effect it does not require us to define a bigger buffer (not really a big deal, we could afford to use a 64KB buffer here, as there is only one buffer per selector). The drawback is that we allocate ByteBuffers on the fly. This can be improved by using a pre-allocated buffer (say a 64KB buffer) and, if we still have something to read, only then allocating some more (this is probably what I will change).
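To make the proposed improvement concrete, here is a rough sketch of what that read loop could look like. The class and method names are mine, not the committed code, and the pre-allocated buffer would of course have to be copied or recycled before the next read in the real processor; this only illustrates "allocate more only when the first buffer fills up":

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: read into a pre-allocated buffer first, and allocate
// additional ByteBuffers only when the previous one is full.
public class PreallocatedReadLoop {

    public static List<ByteBuffer> readAll(ReadableByteChannel channel, ByteBuffer preallocated)
            throws IOException {
        List<ByteBuffer> chunks = new ArrayList<>();
        preallocated.clear();
        ByteBuffer current = preallocated;

        // keep reading while the channel still provides data
        while (channel.read(current) > 0) {
            if (!current.hasRemaining()) {
                // the buffer is full: keep it and allocate a fresh one
                current.flip();
                chunks.add(current);
                current = ByteBuffer.allocate(preallocated.capacity());
            }
        }

        // keep whatever the last (partial) read produced
        if (current.position() > 0) {
            current.flip();
            chunks.add(current);
        }
        return chunks;
    }
}
```

With a 1024-byte pre-allocated buffer and 2500 bytes available, this produces three chunks and allocates only two extra buffers, instead of one allocation per loop iteration as in the committed code.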

The rest of the code has not changed much, except the decoder and every filter that expected to receive a ByteBuffer (like the LoggingFilter). It's just a matter of casting the Object to IoBuffer and processing the data, as the IoBuffer methods are the same as the ByteBuffer ones (except that you can't inject anything but ByteBuffers into an IoBuffer, so there is no put method, for instance).
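To illustrate the filter change, here is a minimal stand-in for the new class (the real IoBuffer lives in MINA; the names below are assumed for illustration only) and what a LoggingFilter-style filter now does with the incoming message:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the new IoBuffer: an aggregate of ByteBuffers with
// ByteBuffer-like read accessors. Only whole ByteBuffers can be injected,
// so there is no put() method.
class SimpleIoBuffer {
    private final List<ByteBuffer> buffers = new ArrayList<>();

    void add(ByteBuffer buffer) {
        buffers.add(buffer);
    }

    int remaining() {
        int r = 0;
        for (ByteBuffer b : buffers) {
            r += b.remaining();
        }
        return r;
    }
}

// A filter that used to receive a ByteBuffer now just casts the Object to
// the new buffer type and processes the data as before.
class LoggingFilterSketch {
    String messageReceived(Object message) {
        SimpleIoBuffer in = (SimpleIoBuffer) message; // cast instead of (ByteBuffer)
        return "received " + in.remaining() + " bytes";
    }
}
```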

The decoders for Http and Ldap have been changed to deal with the IoBuffer. The big gain here, in the Http case, is that we don't have to accumulate the data into a new ByteBuffer: the IoBuffer already accumulates data itself.

The IoBuffer is stored in the session, which means we can reuse it over and over; no need to create a new one. I still have to implement the compact() method, which will remove the used ByteBuffers, in order for this IoBuffer not to grow out of bounds.
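Since compact() is not implemented yet, here is one possible sketch of what it could do (again, a hypothetical stand-in, not the MINA code): walk the internal list and drop every ByteBuffer that has been fully consumed, so the per-session aggregate stays bounded:

```java
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.LinkedList;

// Hypothetical sketch of a compact() on the session's per-session buffer:
// remove the ByteBuffers whose data has already been fully read.
class CompactingIoBuffer {
    private final LinkedList<ByteBuffer> buffers = new LinkedList<>();

    void add(ByteBuffer buffer) {
        buffers.add(buffer);
    }

    int size() {
        return buffers.size();
    }

    int remaining() {
        int r = 0;
        for (ByteBuffer b : buffers) {
            r += b.remaining();
        }
        return r;
    }

    // Drop every fully-consumed ByteBuffer so the list cannot grow
    // out of bounds across reuses of the session's buffer.
    void compact() {
        Iterator<ByteBuffer> it = buffers.iterator();
        while (it.hasNext()) {
            if (!it.next().hasRemaining()) {
                it.remove();
            }
        }
    }
}
```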

Thoughts, comments?

Thanks !

--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com
