For a pool of buffers, a bounded MPMC queue with some pre-allocated
capacity is usually the better choice.
It doesn't need to be synchronized; it only needs to be used like any
other pool (see the sketch after this list):

- You take a buffer, or create one if none is available.
- You use it and pass it around.
- Finally, you offer it back to the queue.
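
A minimal sketch of that pattern in Java (class and method names are
mine, and I'm using ArrayBlockingQueue simply as a readily available
bounded queue with non-throwing poll/offer; a JCTools queue would do
just as well):

    import java.util.Queue;
    import java.util.concurrent.ArrayBlockingQueue;

    import org.apache.mina.core.buffer.IoBuffer;

    public final class BufferPool {
        private final Queue<IoBuffer> pool;   // bounded, pre-sized
        private final int bufferSize;

        public BufferPool(int capacity, int bufferSize) {
            this.pool = new ArrayBlockingQueue<>(capacity);
            this.bufferSize = bufferSize;
        }

        /** Take a pooled buffer, or create one if the pool is empty. */
        public IoBuffer take() {
            IoBuffer buf = pool.poll();   // null when empty, never throws
            return buf != null ? buf : IoBuffer.allocate(bufferSize);
        }

        /** Offer the buffer back; silently dropped when the pool is full. */
        public void release(IoBuffer buf) {
            buf.clear();
            pool.offer(buf);              // false when full, never throws
        }
    }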

By default you can use a ConcurrentLinkedQueue for that, and later you
can easily move to a bounded MPMC queue.
Why bounded? If processing slows down for whatever reason, you don't
want your pool to hold more than N buffers.

It is tricky, but it has been done before and it shouldn't be a problem
here, though it requires try...finally to make sure you always return
the buffer to the queue.
Bounded non-blocking queues are perfect because poll() and offer() do
not throw when the queue is empty or full, which is exactly what you
want in a slow-processing scenario.
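
With the BufferPool sketch above, usage would look like this
(process() being a placeholder for whatever you do with the buffer):

    IoBuffer buf = pool.take();
    try {
        // fill the buffer, pass it down the filter chain, etc.
        process(buf);
    } finally {
        pool.release(buf);   // returned even if processing throws
    }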

Guido.

On Mon, Oct 17, 2016 at 7:01 PM, Emmanuel Lécharny <elecha...@gmail.com>
wrote:

>
>
> Le 17/10/16 à 11:12, Guido Medina a écrit :
> > Hi Emmanuel,
> >
> > In the mina-core class AbstractNioSession, the variable:
> >
> >     /** the queue of pending writes for the session, to be dequeued
> >         by the {@link SelectorLoop} */
> >     private final Queue<WriteRequest> writeQueue =
> >         new DefaultWriteQueue();
> >
> > is that queue consumed by a selector loop?
> It's consumed by the IoProcessor selector loop (so you have many of them).
>
> The thread that has been selected to process the read will go on up to
> the point where it has written the bytes into the write queue, then it
> goes back to the point it was called from, and then it processes the
> writes:
>
> select:
> 1) read -> head -> filter -> filter -> ... -> handler ->
> session.write(something) -> filter -> filter -> ... -> head -> put in
> write queue and return (unstacking all the calls)
> 2) write -> take the write queue, and write the data from it until the
> queue is empty or the socket is full (and in this case, set the OP_WRITE
> status)
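
In plain NIO terms, step 2 looks roughly like the following (an
illustrative sketch, not MINA's actual code):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;
    import java.util.Queue;

    void processWrites(SelectionKey key, Queue<ByteBuffer> writeQueue)
            throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteBuffer buf;
        while ((buf = writeQueue.peek()) != null) {
            channel.write(buf);
            if (buf.hasRemaining()) {
                // Socket buffer full: set OP_WRITE and come back when
                // the selector reports the channel writable again.
                key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
                return;
            }
            writeQueue.poll();   // fully written, drop it
        }
        // Queue drained: no need to listen for OP_WRITE any more.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }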
>
>
>
> > which makes me think it is a single-threaded loop, hence making it
> > MPSC and ideal for a low-GC optimization.
> > But maybe such an optimization is so unnoticeable that it is not
> > worth it.
> Actually, we have as many threads as we have IoProcessor instances. The
> thing is that I'm not even sure we need a synchronized queue, as the
> IoProcessor executes all the processing on its own thread. The
> concurrent queue is there just because one can use an executor filter,
> which will spread the processing across many other threads, and then we
> *may* have more than one thread accessing this queue.
>
> Regarding GC, we have removed some useless object allocations in 2.0.15,
> so the GC should be under slightly less pressure.
>
> If you want to alleviate the GC load, I think there are other areas
> where some improvement can be made. Typically, there is a
> BufferAllocator implementation, the CachedBufferAllocator, that pools
> buffers so you can reuse them. Now, this is a tricky solution, as it
> has to be synchronized, so expect some bottleneck here. Ideally, we
> should have another implementation that uses thread-local storage, but
> that would be memory expensive.
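
For what it's worth, a thread-local variant could look like this sketch
(my own naming, ignoring buffer-size matching for brevity; each thread
keeps its own stash, which is exactly the memory cost you mention):

    import java.util.ArrayDeque;

    import org.apache.mina.core.buffer.IoBuffer;

    public final class ThreadLocalBufferPool {
        // One deque per thread: no synchronization needed, but every
        // thread holds its own buffers, so memory usage multiplies.
        private static final ThreadLocal<ArrayDeque<IoBuffer>> POOL =
                ThreadLocal.withInitial(ArrayDeque::new);

        public static IoBuffer take(int size) {
            IoBuffer buf = POOL.get().pollFirst();   // null when empty
            return buf != null ? buf : IoBuffer.allocate(size);
        }

        public static void release(IoBuffer buf) {
            buf.clear();
            POOL.get().offerFirst(buf);
        }
    }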
>
>
> >
> > That's the only place where I think it is worth replacing it with a
> > low-GC-footprint queue; that would avoid the creation of GC-able
> > linked nodes (ConcurrentLinkedQueue allocates one node per element).
> >
> > In fact, further in that logic you try to avoid writing to the queue
> > when it is empty by passing the message directly to the next handler,
> > which is a micro-optimization;
> > isEmpty() will be false in 99.99% of the cases on systems under high
> > load.
> Well, it depends. The queue will be emptied if the socket is able to
> swallow the data. If your system is under high load, I suspect that all
> the sockets will be full already, so you'll end up with a bigger
> problem! (typically, the queue will grow, and at some point, you'll get
> an OOM... I have already experienced that, so yes, it may happen).
>
> My point is that you should always expect that your OS and your network
> are capable of allocating enough buffer space for the sockets, and have
> enough bandwidth to send the data fast enough so that the socket buffer
> can always accept any write done on it. In this case, the write queue
> will always be empty, except if you are writing a huge message
> (typically above the socket buffer size, which defaults to 1Kb - from
> the top of my head - but which you can set up to 64Kb more or less).
> Even on a loaded system.
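
For reference, the send buffer size can be tuned per socket; assuming a
MINA NIO socket session named session, something like:

    // MINA exposes it through the session config:
    ((SocketSessionConfig) session.getConfig()).setSendBufferSize(64 * 1024);

    // which maps to the plain NIO socket option:
    // channel.setOption(StandardSocketOptions.SO_SNDBUF, 64 * 1024);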
>
>
