[ 
https://issues.apache.org/jira/browse/DIRMINA-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12985617#comment-12985617
 ] 

Emmanuel Lecharny edited comment on DIRMINA-738 at 9/5/14 11:02 AM:
--------------------------------------------------------------------

The IoEventQueueThrottle has two serious problems in its current 
implementation/application.

1. The internal counter variable is not working correctly.
2. The notification of waiting threads (e.g. from an ExecutorFilter) is 
incomplete.

Apart from these issues, the implementation is quite useful if you want to run a 
server with a limited memory footprint. The threshold does not need to be 
decreased (or increased) during operation; it acts as a specification of the 
maximum amount of memory to use (for the transferred data in total, over all 
connections).
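
For context, such a memory-bounded server is wired up roughly like this. This is 
only a minimal sketch following the WriteRequestFilter javadoc referenced in this 
issue; I assume the classes from org.apache.mina.filter.executor, and the 1 MiB 
threshold is just an example value:

{code}
import org.apache.mina.core.filterchain.DefaultIoFilterChainBuilder;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.filter.executor.IoEventQueueThrottle;
import org.apache.mina.filter.executor.WriteRequestFilter;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class ThrottledServerSetup {
    public static void main(String[] args) {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();

        // Asynchronous execution of IoEvents; these are the threads that can end
        // up waiting inside the throttle.
        chain.addLast("executor", new ExecutorFilter());

        // The throttle counts estimated bytes, not messages: 1048576 caps the
        // estimated size of all queued write requests (over all sessions) at ~1 MiB.
        chain.addLast("writeThrottle",
                new WriteRequestFilter(new IoEventQueueThrottle(1048576)));
    }
}
{code}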


Regarding the identified problems:

1. The incorrect behaviour of the counter variable is not an implementation 
issue of the IoEventQueueThrottle itself but a bad 'contract' in its application.

It is implicitly assumed that any IoEvent offered(...) and polled(...) will 
always have the same size. But this is not true. It seems to me that if an 
IoBuffer is written immediately before closing the session, its size can change 
between the calls to offered() and polled(). In my observation the polled size 
decreased, which led to a steadily increasing counter approaching the threshold.

Probably the reason is a (sometimes) missing flip() on the IoBuffer somewhere 
else, but this issue could also be resolved inside the IoEventQueueThrottle by 
storing the estimated size between the calls to offered() and polled(). By the 
way, the calls to these functions are not guaranteed to happen in that order: an 
IoEvent can be polled before it is offered. So I propose the following changes: 

1.1. Add a hash map for the event sizes:

{code}
    // Remembers the size estimated when an event is first seen (offered or polled).
    private final ConcurrentMap<Object, Integer> eventSizes =
            new ConcurrentHashMap<Object, Integer>();
{code}

1.2. Add a function that maintains the correct event sizes, assuming that it 
will be called twice for every IoEvent (once from offered() and once from 
polled()); that is the contract for this implementation:

{code}
    private int maintainSize(IoEvent event) {
        int eventSize = estimateSize(event);
        Integer stored = eventSizes.putIfAbsent(event, Integer.valueOf(eventSize));

        if (stored != null) {
            // Second call for this event: reuse the size recorded on the first call,
            // so offered() and polled() always account for the same number of bytes.
            int storedSize = stored.intValue();

            if (eventSize != storedSize) {
                LOGGER.debug(Thread.currentThread().getName()
                        + " size of IoEvent changed from " + storedSize + " to " + eventSize);
            }

            eventSize = storedSize;
            eventSizes.remove(event);
        }

        return eventSize;
    }
{code}

1.3. Replace the call to estimateSize() inside offered() and polled() with a 
call to maintainSize() (see the sketch below).
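
For illustration, offered() and polled() would then look roughly like this. This 
is only a sketch against the shape of the current 2.0.x implementation, assuming 
its existing counter (an AtomicInteger), threshold, logState(), block() and 
unblock() members; the only change is the swapped estimateSize() call:

{code}
    public void offered(Object source, IoEvent event) {
        int eventSize = maintainSize(event); // was: estimateSize(event)
        int currentCounter = counter.addAndGet(eventSize);
        logState();

        if (currentCounter >= threshold) {
            block();
        }
    }

    public void polled(Object source, IoEvent event) {
        int eventSize = maintainSize(event); // was: estimateSize(event)
        int currentCounter = counter.addAndGet(-eventSize);
        logState();

        if (currentCounter < threshold) {
            unblock();
        }
    }
{code}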

If the state is logged and all sessions eventually become idle, you can now see 
that the counter returns to its initial value of 0.



2. If the internal counter reaches or crosses the threshold, *all* NioProcessor 
threads offering IoEvents will be blocked waiting on the lock. But when the 
counter drops below the threshold again, only *one* waiting thread is notified.

Assume the session served by the *one* continuing NioProcessor thread has no 
more data to transfer; then all the other threads would keep on waiting. (NB: if 
you had configured an idle time for your server, the idle IoEvents could have 
woken up the other waiting threads after a while, hiding this problem.)

So I propose the following change: 

The unblock() method should call notifyAll() instead of notify().
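
For clarity, a sketch of unblock() with that change, again assuming the shape of 
the current implementation (a waiters count of blocked threads, guarded by lock):

{code}
    private void unblock() {
        synchronized (lock) {
            if (waiters > 0) {
                // was: lock.notify(), which wakes only a single blocked NioProcessor thread
                lock.notifyAll();
            }
        }
    }
{code}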


Taken together, the two problems can amplify each other until the server blocks forever.

I would appreciate any comments.



> Using IoEventQueueThrottler  with a WriteRequestFilter can lead to hangs
> ------------------------------------------------------------------------
>
>                 Key: DIRMINA-738
>                 URL: https://issues.apache.org/jira/browse/DIRMINA-738
>             Project: MINA
>          Issue Type: Bug
>          Components: Filter
>    Affects Versions: 2.0.0-M6
>            Reporter: Daniel John Debrunner
>             Fix For: 2.0.8
>
>
> The javadoc for WriteRequestFilter shows an example using 
> IoEventQueueThrottler to throttle writes. The first issue is that 
> IoEventQueueThrottler is not well documented; I'd assumed it was throttling the 
> number of messages, but instead its threshold is a number of bytes. The second 
> issue is that if a message's estimated length is less than the threshold then 
> the write hangs, even with an async executor in the chain.
> Emmanuel Lécharny also wrote:
>   FYI, the counter (threshold) is *never* decreased.
> To be frank, this is not a piece of code I know, but from what I see, it's 
> *extremely* dubious that it does something useful, and most likely to block 
> forever.
> I would suggest not to use this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
