Sorry, wrong branch; that's on the current master (trunk). For our case it
would be the following on the 2.0.15 branch:

public abstract class AbstractProtocolEncoderOutput
        implements ProtocolEncoderOutput {
    private final Queue<Object> messageQueue =
            new ConcurrentLinkedQueue<Object>();
    ...
}

The messageQueue variable is the candidate for this kind of optimization,
assuming it is consumed by a single loop thread.
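
Roughly, the swap I have in mind looks like this (just a sketch using a
JCTools MpSc queue; the choice of MpscUnboundedArrayQueue and the 1024 chunk
size are my own assumptions, not a tested patch against the branch):

import java.util.Queue;

import org.jctools.queues.MpscUnboundedArrayQueue;

public abstract class AbstractProtocolEncoderOutput
        implements ProtocolEncoderOutput {
    // Producers call offer(), but only the single loop thread polls,
    // so an MpSc queue avoids the per-offer node allocation that
    // ConcurrentLinkedQueue performs.
    private final Queue<Object> messageQueue =
            new MpscUnboundedArrayQueue<Object>(1024);
    // ...
}

The unbounded array variant grows in chunks, so the per-message garbage is
close to zero; a bounded MpscArrayQueue would also work if back-pressure is
acceptable.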

Regards,

Guido.

On Mon, Oct 17, 2016 at 10:12 AM, Guido Medina <oxyg...@gmail.com> wrote:

> Hi Emmanuel,
>
> In the mina-core class AbstractNioSession, there is this variable:
>
>     /**
>      * the queue of pending writes for the session, to be dequeued by the
>      * {@link SelectorLoop}
>      */
>     private final Queue<WriteRequest> writeQueue = new DefaultWriteQueue();
>
> Such a queue is consumed by a selector loop, correct? That makes me think
> it is a single-thread loop, which makes the queue MpSc (multi-producer,
> single-consumer) and an ideal candidate for a low-GC optimization.
> But maybe such an optimization is so unnoticeable that it is not worth it.
>
> That's the only place where I think it is worth replacing the queue with a
> low-GC-footprint implementation; it would avoid the creation of GC-able
> linked nodes, which ConcurrentLinkedQueue allocates on every offer.
>
> In fact, further down in that logic you try to avoid writing to the queue
> when it is empty by passing the message directly to the next handler, which
> is a micro-optimization; under high load, isEmpty() will return false in
> 99.99% of cases.
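>
> The fast path I mean is roughly this shape (my own paraphrase with made-up
> names, not the actual MINA code):
>
> final class WriteSink {
>     private final java.util.Queue<Object> writeQueue =
>             new java.util.concurrent.ConcurrentLinkedQueue<Object>();
>
>     void write(Object request) {
>         if (writeQueue.isEmpty()) {
>             // bypass the queue and hand the message straight to the
>             // selector loop / next handler
>             flushDirectly(request);
>         } else {
>             writeQueue.offer(request);
>         }
>     }
>
>     private void flushDirectly(Object request) {
>         // write to the channel immediately
>     }
> }
>
> Under sustained load the queue is rarely empty, so that branch almost
> always falls through to offer().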
>
> WDYT?
>
> Guido.
>
> On Sat, Oct 15, 2016 at 8:12 PM, Guido Medina <oxyg...@gmail.com> wrote:
>
>> I will take another look at the source code, but not today; I will let you
>> know on Monday whether it is applicable to MINA core. It seems it is not
>> the case: my application simply forwards each decoded FIX message to an
>> Akka actor, and those actors are backed by a high-performance queue.
>> I was thinking (I will double-check) that these ByteBuffers were queued
>> somewhere before being picked up by the handlers, which is where a
>> non-blocking MpSc queue would play a role.
>>
>> But maybe I misunderstood the code I saw.
>>
>> I will check again and let you know,
>>
>> Have a nice weekend,
>>
>> Guido.
>>
>> On Sat, Oct 15, 2016 at 7:33 PM, Emmanuel Lecharny <elecha...@apache.org>
>> wrote:
>>
>>> To be clear: when some sockets are ready for read (i.e., the OP_READ flag
>>> has been set, and there is something in the socket to be read), the
>>> IoProcessor call to select() will return and we will have a set of
>>> SelectionKeys returned. This set contains all the channels that are ready
>>> for some processing. The IoProcessor thread will process them one after
>>> the other, from top to bottom. That means we don't process multiple
>>> sessions in parallel when all those sessions are handled by one single
>>> IoProcessor. You have to be careful about what you do in your
>>> application, because any costly processing, or any synchronous access to
>>> a remote system, will block the processing of the other sessions.
>>>
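>>> In code, the loop I am describing is roughly this (a simplified sketch of
>>> the idea, not MINA's actual IoProcessor):
>>>
>>> import java.io.IOException;
>>> import java.nio.channels.SelectionKey;
>>> import java.nio.channels.Selector;
>>> import java.util.Iterator;
>>>
>>> // One selector loop: select() returns the ready keys, and this single
>>> // thread processes them one after the other.
>>> final class SelectorLoopSketch implements Runnable {
>>>     private final Selector selector;
>>>
>>>     SelectorLoopSketch(Selector selector) {
>>>         this.selector = selector;
>>>     }
>>>
>>>     public void run() {
>>>         try {
>>>             while (!Thread.currentThread().isInterrupted()) {
>>>                 selector.select();
>>>                 Iterator<SelectionKey> it =
>>>                         selector.selectedKeys().iterator();
>>>                 while (it.hasNext()) {
>>>                     SelectionKey key = it.next();
>>>                     it.remove();
>>>                     if (key.isValid() && key.isReadable()) {
>>>                         // any slow work here delays every other
>>>                         // session handled by this same thread
>>>                         read(key);
>>>                     }
>>>                 }
>>>             }
>>>         } catch (IOException e) {
>>>             // a real loop would log this and decide whether to stop
>>>         }
>>>     }
>>>
>>>     private void read(SelectionKey key) {
>>>         // read from key.channel() and pass the data to the handler
>>>     }
>>> }
>>>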
>>> Now, we always start the server with more than one IoProcessor
>>> (typically, number of cores + 1 IoProcessors). You can also configure a
>>> higher number of IoProcessors if you like, but at some point, if your CPU
>>> is 100% used, adding more IoProcessors does not help.
>>>
>>> What kind of performance are you expecting to reach? (i.e., how many
>>> requests per second?)
>>>
>>
>>
>
