Maybe I misunderstood what I saw in the code: I found roughly 10 places
where ConcurrentLinkedQueue is used. One of them is for the connections,
which for this case wouldn't make a difference.
The other places are for handling the received frames/messages, am I correct
here? That's where I believe it would make a difference, since a single
connection can have potentially hundreds of frames to be "handled" (by a
handler).

Isn't that a good place to introduce an MpSc queue, if there is a single
consumer thread pulling from that queue and then delegating to a handler?
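
To make the idea concrete, here is a rough sketch of what I have in mind
(the Frame/FrameHandler/FrameDispatcher names are made up for illustration,
and it assumes the JCTools MpscChunkedArrayQueue mentioned below): the I/O
threads are the multiple producers, and a single dispatcher thread drains
the queue and delegates each frame to the handler.

    import java.util.Queue;
    import org.jctools.queues.MpscChunkedArrayQueue;

    // Hypothetical types, just to show the shape of the idea.
    final class Frame { /* decoded message, session reference, etc. */ }

    interface FrameHandler {
        void handle(Frame frame);
    }

    final class FrameDispatcher implements Runnable {

        // MpSc queue: any I/O thread may offer, only the dispatcher thread polls.
        // Starts with a 1024-slot chunk and grows in linked chunks up to 64k slots.
        private final Queue<Frame> frames = new MpscChunkedArrayQueue<>(1024, 64 * 1024);

        private final FrameHandler handler;
        private volatile boolean running = true;

        FrameDispatcher(FrameHandler handler) {
            this.handler = handler;
        }

        // Called from the I/O threads (multiple producers).
        boolean enqueue(Frame frame) {
            return frames.offer(frame);
        }

        // Single consumer thread: drain the queue and delegate to the handler.
        @Override
        public void run() {
            while (running) {
                Frame frame = frames.poll();
                if (frame != null) {
                    handler.handle(frame);
                } else {
                    Thread.yield(); // a real dispatcher would park/back off here
                }
            }
        }

        void stop() {
            running = false;
        }
    }

Because there is exactly one consumer, the queue never needs the
multi-consumer contention on the head pointer, and the chunked array layout
avoids allocating a linked node per frame, which is where the GC saving
comes from.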

On Sat, Oct 15, 2016 at 2:54 PM, Guido Medina <oxyg...@gmail.com> wrote:

> The connections count is usually "finite" (not worth the effort), but the
> queue for the packets, isn't it also a ConcurrentLinkedQueue?
> I'm not sure how MINA core stores the received packets before they are
> passed to their handler.
>
> On Sat, Oct 15, 2016 at 2:27 PM, Emmanuel Lecharny <elecha...@apache.org>
> wrote:
>
>> On Sat, Oct 15, 2016 at 1:20 PM, Guido Medina <oxyg...@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > I was looking at the MINA core source code and I noticed that events are
>> > published to a ConcurrentLinkedQueue, so here are my questions and
>> > suggestions:
>> >
>> >    - Does ConcurrentLinkedQueue in these cases follow the *Multiple
>> >    Producer/Single Consumer* (MpSc) pattern or the *Multiple
>> >    Producer/Multiple Consumer* (MpMc) pattern?
>> >
>>
>> MpMc.
>>
>>
>>
>> >    - For low-latency applications (in my case I'm talking about QuickFixJ
>> >    for the financial industry), would it benefit from an MpSc queue that
>> >    has a low memory footprint (more precisely, a low GC footprint)?
>> >
>> > If that is the case, I would shade the JCTools dependency and use this queue:
>> > https://github.com/JCTools/JCTools/blob/master/jctools-core/src/main/java/org/jctools/queues/MpscChunkedArrayQueue.java
>> >
>> > That queue uses ring buffers (power-of-two arrays) and links them if they
>> > need to expand, which is great for theoretically unbounded queues, with
>> > the benefit of using linked arrays instead of a linked node per element.
>> >
>> > Netty recently replaced its non-blocking linked queues with that one.
>> >
>>
>> That is an option.
>>
>> Now, I would say that for an application requiring low latency, basing it
>> on top of NIO makes little sense, considering the extra cost compared to a
>> blocking IO solution (and we are talking about a 30% performance penalty,
>> at least).
>>
>> Do you need to handle potentially millions of connections?
>>
>
>
