[
https://issues.apache.org/jira/browse/DIRMINA-925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13532838#comment-13532838
]
Emmanuel Lecharny commented on DIRMINA-925:
-------------------------------------------
One easy workaround would be to log less. You don't need to keep gigabytes of
logs; you can simply cap the maximum log size at, say, 10 MB.
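As an illustration, here is a minimal sketch of capping log size with
java.util.logging's rotating FileHandler (the class name and file pattern are
invented for the example; an application that logs through logback, as the
excerpt below suggests, would instead use a RollingFileAppender with a
size-based rolling policy):

import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class CappedLogging {
    public static void main(String[] args) throws Exception {
        // Rotate across 5 files of at most 10 MB each: ~50 MB worst case,
        // instead of an unbounded log that can fill the disk.
        FileHandler handler = new FileHandler("app-%g.log",
                10 * 1024 * 1024, // per-file size limit, in bytes
                5,                // number of files in the rotation
                true);            // append to existing files
        handler.setFormatter(new SimpleFormatter());
        Logger logger = Logger.getLogger("example");
        logger.addHandler(handler);
        logger.warning("Unexpected exception."); // lands in the capped file
    }
}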
All in all, the exception that bubbles up is a message which gets logged.
Logging the full stack trace makes each entry big, but even if it were a single
line of text, the logs would still have filled up your entire disk if the
condition were ignored.
I don't think that silently ignoring this error in the code would be a good
idea. I'm not sure we should log the full stack trace either. But in any case,
as the administrator of your application, you must be ready to cope with the
number of open files and the size of the logs.
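For what it is worth, MINA lets the application decide how these exceptions are
reported: a custom org.apache.mina.util.ExceptionMonitor can be installed in
place of the default one. A minimal sketch, assuming one wants a single line
rather than a full stack trace for this particular error (the class name and
the message-matching heuristic are illustrative, not a recommendation):

import java.io.IOException;

import org.apache.mina.util.ExceptionMonitor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CompactExceptionMonitor extends ExceptionMonitor {
    private static final Logger LOG =
            LoggerFactory.getLogger(CompactExceptionMonitor.class);

    @Override
    public void exceptionCaught(Throwable cause) {
        String msg = cause.getMessage();
        if (cause instanceof IOException && msg != null
                && msg.contains("Too many open files")) {
            // The condition is environmental, not tied to one call site,
            // so a single line carries all the useful information.
            LOG.warn("File descriptor limit reached: {}", msg);
        } else {
            LOG.warn("Unexpected exception.", cause);
        }
    }
}

It would be installed once at startup with
ExceptionMonitor.setInstance(new CompactExceptionMonitor()). Note that this
changes how the error is reported, not how often it occurs.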
Monitoring the application is not optional here...
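On the monitoring side, a JVM on Unix exposes its descriptor usage through
com.sun.management.UnixOperatingSystemMXBean, so the application itself can
warn before the limit is hit. A rough sketch (the MXBean is JDK-specific, and
the 90% threshold is an arbitrary choice):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdWatcher {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            long open = unix.getOpenFileDescriptorCount();
            long max = unix.getMaxFileDescriptorCount();
            if (open > max * 0.9) {
                System.err.printf(
                        "WARNING: %d of %d file descriptors in use%n",
                        open, max);
            }
        }
    }
}

In practice this check would run on a timer and feed whatever alerting the
deployment already has.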
> Unresponsive I/O once file limit is reached
> -------------------------------------------
>
> Key: DIRMINA-925
> URL: https://issues.apache.org/jira/browse/DIRMINA-925
> Project: MINA
> Issue Type: Bug
> Components: Transport
> Affects Versions: 2.0.4, 2.0.5, 2.0.7
> Environment: Linux
> Reporter: Paul Gregoire
>
> This issue was reported on the Red5 issues list, and it reminded me of an
> issue on an internal company project that uses MINA but has nothing to do
> with Red5 itself. The common element in both cases is MINA, so I am posting
> it here.
> The gist of both situations is that once the open-file limit is reached on
> the OS, MINA still attempts to do its work, but an exception occurs over and
> over and eventually runs the box out of resources.
> What steps will reproduce the problem?
> 1. Use telnet to connect to a servlet's socket until no more connections are
> allowed (the open-files limit is reached)
> 2. Use telnet to connect to the Tomcat port (5080)
> 3. Observe that the red5.log entries shown below repeat until the hard drive
> fills (a minimal driver for steps 1-2 is sketched after the excerpt)
> java.io.IOException: Too many open files
>         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[na:1.6.0_22]
>         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163) ~[na:1.6.0_22]
>         at org.apache.mina.transport.socket.nio.NioSocketAcceptor.accept(NioSocketAcceptor.java:170) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.transport.socket.nio.NioSocketAcceptor.accept(NioSocketAcceptor.java:51) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.processHandles(AbstractPollingIoAcceptor.java:501) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:442) ~[mina-core-2.0.4.jar:na]
>         at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64) [mina-core-2.0.4.jar:na]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [na:1.6.0_22]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [na:1.6.0_22]
>         at java.lang.Thread.run(Thread.java:679) [na:1.6.0_22]
> 2012-12-13 15:42:28,521 [NioSocketAcceptor-2] WARN
> o.a.m.util.DefaultExceptionMonitor - Unexpected exception.
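> A minimal scripted driver for steps 1-2 (a sketch: port 5080 is the Tomcat
> port from step 2, the server's ulimit -n should be set low so the limit is
> hit quickly, and the class name is invented):
>
> import java.io.IOException;
> import java.net.Socket;
> import java.util.ArrayList;
> import java.util.List;
>
> public class FdExhaustion {
>     public static void main(String[] args) throws Exception {
>         List<Socket> sockets = new ArrayList<Socket>();
>         try {
>             // Hold every connection open so the server keeps one
>             // descriptor per accepted socket until it hits its limit.
>             while (true) {
>                 sockets.add(new Socket("localhost", 5080));
>             }
>         } catch (IOException e) {
>             // Typically "Too many open files"; from this point the
>             // acceptor fails in a tight loop, logging the exception
>             // above until the disk fills.
>             System.err.println("Limit reached after " + sockets.size()
>                     + " connections: " + e.getMessage());
>         }
>     }
> }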
> Red5 issue #315 - Red5 gets into a tight loop writing error messages if file
> open limit is exceeded
> https://code.google.com/p/red5/issues/detail?id=315