[ 
https://issues.apache.org/jira/browse/DIRMINA-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120991#comment-15120991
 ] 

Emmanuel Lecharny commented on DIRMINA-1006:
--------------------------------------------

I have changed the {{removeSessions()}} method:

{noformat}
    private int removeSessions() {
        int removedSessions = 0;

        for (S session = removingSessions.poll(); session != null; session = removingSessions.poll()) {
            SessionState state = getState(session);

            // Now deal with the removal according to the session's state
            switch (state) {
                case OPENED:
                    // Try to remove this session
                    if (removeNow(session)) {
                        removedSessions++;
                    }
                    
                    break;
    
                case CLOSING:
                    // Skip if channel is already closed
                    // In any case, remove the session from the queue
                    removedSessions++;
                    break;
    
                case OPENING:
                    // Remove session from the newSessions queue and
                    // remove it
                    newSessions.remove(session);
    
                    if (removeNow(session)) {
                        removedSessions++;
                    }
    
                    break;
    
                default:
                    throw new IllegalStateException(String.valueOf(state));
            }
        }

        return removedSessions;
    }
{noformat}

This is more of a workaround to get the number of sessions down to 0 and let 
the processor loop exit. There is certainly more to do. 

Basically, we should analyze all the possible reasons why a session might be 
removed and, for each use case, determine what we should do. Here are the use 
cases that come to mind:
* the application closes the session explicitly
* the remote peer has closed the connection
* the application has disposed the processor or the service
* we have had an exception that forces the session's closure (and here, there 
are many possible causes)
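As a sketch of that analysis, the causes could be modeled explicitly. Note that the {{RemovalCause}} enum and the dispatch below are hypothetical illustrations, not part of the MINA code base:

```java
// Hypothetical enumeration of the reasons a session may be removed.
enum RemovalCause {
    APPLICATION_CLOSE,   // the application closed the session explicitly
    PEER_CLOSE,          // the remote peer closed the connection
    SERVICE_DISPOSAL,    // the processor or the service was disposed
    EXCEPTION            // an exception forced the session's closure
}

class RemovalPolicy {
    /** Returns true when the session can be dropped without flushing. */
    static boolean canRemoveImmediately(RemovalCause cause) {
        switch (cause) {
            case PEER_CLOSE:
            case SERVICE_DISPOSAL:
            case EXCEPTION:
                return true;
            case APPLICATION_CLOSE:
                // Pending messages must be flushed first, unless the
                // application asked for an immediate close.
                return false;
            default:
                throw new IllegalStateException(String.valueOf(cause));
        }
    }
}
```

The point of naming the causes is that only one of them (the explicit application close) requires deferred removal; the others can short-circuit straight to teardown.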

In most of these use cases, the session can be removed immediately, but when 
the application decides to close a session, we need to take care of the pending 
messages: they have to be sent if the application hasn't requested the session 
to be closed immediately. This is the corner case we have to take care of, IMO.
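A minimal sketch of that corner case, using a plain queue to stand in for the session's write-request queue (the class and method names here are hypothetical, not MINA APIs):

```java
import java.util.Queue;

// Sketch: drain pending writes before tearing the session down,
// unless an immediate close was requested.
class GracefulRemover {
    /** Returns the number of pending messages flushed before removal. */
    static int drainBeforeClose(Queue<String> pendingWrites, boolean immediately) {
        int flushed = 0;
        if (!immediately) {
            // Graceful close: send everything that is still queued.
            while (pendingWrites.poll() != null) {
                flushed++; // stand-in for actually writing the message
            }
        } else {
            // Immediate close: discard whatever is left.
            pendingWrites.clear();
        }
        return flushed;
    }
}
```

The decision point (flush vs. discard) is exactly the distinction between a graceful close and an immediate one.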

I do think we lack a state (we have OPENING, OPENED and CLOSING, but no CLOSED 
state)...
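For instance, a terminal state could be added along these lines (a hypothetical extension for illustration, not the actual MINA enum):

```java
// Hypothetical session life cycle including the missing CLOSED state.
enum SessionState {
    OPENING, OPENED, CLOSING, CLOSED;

    /** A session only ever moves forward one step in its life cycle. */
    boolean canTransitionTo(SessionState next) {
        return next.ordinal() == this.ordinal() + 1;
    }
}
```

With a CLOSED state, {{removeSessions()}} could distinguish "already torn down, just drop it" from "still closing, flush first", instead of inferring that from the channel.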


> mina2.0.9 NioProcessor thread make cpu 100%
> -------------------------------------------
>
>                 Key: DIRMINA-1006
>                 URL: https://issues.apache.org/jira/browse/DIRMINA-1006
>             Project: MINA
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 2.0.9
>         Environment: Linux version 2.6.32-358.el6.x86_64  (Red Hat 4.4.7-3)
> java version "1.7.0_67"
> Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
>            Reporter: briokyd
>            Priority: Blocker
>         Attachments: cpu100_01.png, cpu100_02.png, cpu100_03.png, 
> threadDump78658_20150211_151742.log
>
>
> running as client after the Exception (java.io.IOException: Connection reset 
> by peer)  appeared , cpu 100%
> thread dump:
> "NioProcessor-931" prio=10 tid=0x00007f3788004800 nid=0xd41 runnable 
> [0x00007f394f4f3000]
>    java.lang.Thread.State: RUNNABLE
>       at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>       at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>       at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
>       at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
>       - locked <0x00000007842118f0> (a sun.nio.ch.Util$2)
>       - locked <0x00000007842118e0> (a java.util.Collections$UnmodifiableSet)
>       - locked <0x00000007842114b0> (a sun.nio.ch.EPollSelectorImpl)
>       at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
>       at 
> org.apache.mina.transport.socket.nio.NioProcessor.select(NioProcessor.java:97)
>       at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:1074)
>       at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
>    Locked ownable synchronizers:
>       - <0x0000000784211210> (a 
> java.util.concurrent.ThreadPoolExecutor$Worker)
> is that nio epollWait bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
