[
https://issues.apache.org/jira/browse/HADOOP-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12580590#action_12580590
]
rangadi edited comment on HADOOP-3051 at 3/19/08 3:35 PM:
---------------------------------------------------------------
What is the fd limit you have for your JVM? 0.17 uses NIO sockets. It looks like
the JVM uses a selector to wait for connect, and each selector in Java eats up 3
extra fds :(. The DataNode will also use selectors if it needs to wait on sockets.
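For reference, a minimal sketch (not the actual DataNode code) of what a timed
NIO connect looks like; the temporary selector is where the extra descriptors
come from:
{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TimedConnect {
  // Connect a SocketChannel with a timeout. The Selector is what costs the
  // extra fds: on Linux/epoll each Selector opens an epoll fd plus a wakeup
  // pipe, i.e. roughly 3 descriptors per thread waiting on a socket.
  public static SocketChannel connect(InetSocketAddress addr, long timeoutMs)
      throws IOException {
    SocketChannel ch = SocketChannel.open();
    ch.configureBlocking(false);
    if (ch.connect(addr)) {
      return ch;                        // connected immediately
    }
    Selector sel = Selector.open();     // <-- extra fds live here
    try {
      ch.register(sel, SelectionKey.OP_CONNECT);
      if (sel.select(timeoutMs) == 0 || !ch.finishConnect()) {
        ch.close();
        throw new IOException("connect to " + addr + " timed out");
      }
      return ch;
    } finally {
      sel.close();                      // releases the pipe + epoll fds
    }
  }
}
{noformat}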
> (50 DFS clients with 40 streams each concurrently writing files on an 8-node
> DFS cluster)
2000 writes across 8 datanodes?
Yes, 0.17 eats more file descriptors, especially under loads where a lot of
threads end up waiting on sockets (as opposed to disk I/O). If this turns out to
be a big problem, we might default to not using the extra fds and provide a
config option to turn that behavior on. One would enable such an option when a
'write timeout' is required on sockets (HADOOP-2346).
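A rough sketch of what that switch might look like (the config key and factory
names here are made up, purely to illustrate the trade-off):
{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.channels.SocketChannel;

public class DfsSocketFactorySketch {
  // Hypothetical flag; the real config key would be decided in the patch.
  static final boolean WRITE_TIMEOUT_ENABLED =
      Boolean.getBoolean("dfs.datanode.socket.write.timeout.enabled");

  static Socket newSocket(InetSocketAddress addr, int connectTimeoutMs)
      throws IOException {
    if (WRITE_TIMEOUT_ENABLED) {
      // Channel-backed socket: lets us enforce timeouts via selectors
      // (HADOOP-2346), at the price of extra fds per waiting thread.
      SocketChannel ch = SocketChannel.open();
      ch.socket().connect(addr, connectTimeoutMs);
      return ch.socket();
    }
    // Plain blocking socket: no selectors, no extra fds, but writes can hang.
    Socket s = new Socket();
    s.connect(addr, connectTimeoutMs);
    return s;
  }
}
{noformat}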
> DataXceiver: java.io.IOException: Too many open files
> -----------------------------------------------------
>
> Key: HADOOP-3051
> URL: https://issues.apache.org/jira/browse/HADOOP-3051
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.17.0
> Reporter: André Martin
>
> I just ran an experiment with the latest available nightly build
> hadoop-2008-03-15, and after 2 minutes I'm getting tons of
> "java.io.IOException: Too many open files" exceptions, as shown here:
> {noformat} 2008-03-19 20:08:09,303 ERROR org.apache.hadoop.dfs.DataNode:
> 141.30.xxx.xxx:50010:DataXceiver: java.io.IOException: Too many open files
> at sun.nio.ch.IOUtil.initPipe(Native Method)
> at sun.nio.ch.EPollSelectorImpl.<init>(Unknown Source)
> at sun.nio.ch.EPollSelectorProvider.openSelector(Unknown Source)
> at sun.nio.ch.Util.getTemporarySelector(Unknown Source)
> at sun.nio.ch.SocketAdaptor.connect(Unknown Source)
> at
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1114)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:956)
> at java.lang.Thread.run(Unknown Source){noformat}
> I ran the same experiment with the same high workload (50 DFS clients with 40
> streams each concurrently writing files on an 8-node DFS cluster) against the
> 0.16.1 release and no exception is thrown. So it looks like a bug to me...
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.