[
https://issues.apache.org/jira/browse/DIRMINA-925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567747#comment-13567747
]
Maarten Bosteels commented on DIRMINA-925:
------------------------------------------
We have had similar issues. It's logical that MINA cannot accept more
connections once the file limit is reached, so no point in trying to fix that.
But the problem was that the Acceptor became totally unresponsive, even when
the number of open file descriptors dropped significantly below the limit.
For now we have 'fixed' it by increasing the max open files limit.
One day I will test (and report) how MINA 3 (and other libs) behave in this
case. One day, for sure ;-)
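For what it's worth, the tight-loop behaviour described below can be avoided with a small exponential backoff in the accept loop. A minimal sketch in plain Java (the `accept()` here is a simulated stand-in that fails like a saturated acceptor would, not MINA's API):

```java
import java.io.IOException;

public class AcceptBackoffDemo {
    static int attempts = 0;

    // Simulated accept(): fails twice with the same error the stack
    // trace shows, then succeeds. Purely illustrative.
    static Object accept() throws IOException {
        attempts++;
        if (attempts < 3) {
            throw new IOException("Too many open files");
        }
        return new Object(); // stands in for an accepted channel
    }

    public static void main(String[] args) throws Exception {
        long backoffMs = 1;
        Object session = null;
        while (session == null) {
            try {
                session = accept();
                backoffMs = 1; // reset after a successful accept
            } catch (IOException e) {
                // Without this pause the acceptor thread spins and floods
                // the log until the disk fills; with it, the loop backs
                // off and recovers once descriptors are freed.
                Thread.sleep(backoffMs);
                backoffMs = Math.min(backoffMs * 2, 1000);
            }
        }
        System.out.println("accepted after " + attempts + " attempts");
    }
}
```

The key point is that an `IOException` from `accept()` does not free any descriptors by itself, so retrying immediately is guaranteed to fail again; the backoff just gives the OS a chance to reclaim resources.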
> Unresponsive I/O once file limit is reached
> -------------------------------------------
>
> Key: DIRMINA-925
> URL: https://issues.apache.org/jira/browse/DIRMINA-925
> Project: MINA
> Issue Type: Bug
> Components: Transport
> Affects Versions: 2.0.4, 2.0.5, 2.0.7
> Environment: Linux
> Reporter: Paul Gregoire
>
> This issue was reported on the red5 issues list, and it reminded me of a
> similar problem on an internal company project that uses Mina but has
> nothing to do with red5 itself. The common factor in both cases is Mina,
> so I am posting it here.
> The gist of both situations is that once the OS file limit is reached,
> Mina still attempts to do its work, but an exception occurs repeatedly and
> eventually runs the box out of resources.
> What steps will reproduce the problem?
> 1. Use telnet to connect to a servlet's socket until no more connections are
> allowed (open files limit is reached)
> 2. Use telnet to connect to the tomcat port (5080)
> 3. Observe that the red5 log entries shown below repeat until the hard drive fills
> java.io.IOException: Too many open files
>         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[na:1.6.0_22]
>         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163) ~[na:1.6.0_22]
>         at org.apache.mina.transport.socket.nio.NioSocketAcceptor.accept(NioSocketAcceptor.java:170) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.transport.socket.nio.NioSocketAcceptor.accept(NioSocketAcceptor.java:51) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.processHandles(AbstractPollingIoAcceptor.java:501) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:442) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64) [mina-core-2.0.4.jar:na]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [na:1.6.0_22]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [na:1.6.0_22]
>         at java.lang.Thread.run(Thread.java:679) [na:1.6.0_22]
> 2012-12-13 15:42:28,521 [NioSocketAcceptor-2] WARN o.a.m.util.DefaultExceptionMonitor - Unexpected exception.
> Red5 issue #315 - Red5 gets into a tight loop writing error messages if file
> open limit is exceeded
> https://code.google.com/p/red5/issues/detail?id=315
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira