[
https://issues.apache.org/jira/browse/HADOOP-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12580596#action_12580596
]
André Martin commented on HADOOP-3051:
--------------------------------------
The fd limit is 1024 according to "ulimit -aH".
Exactly 2000 concurrent writes across the 8 datanodes...
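For what it's worth, a rough back-of-envelope sketch of why that combination can exhaust a 1024 fd hard limit (the replication factor and per-thread fd counts below are my assumptions, not measured numbers): with the default replication of 3, each of the 2000 streams occupies a write pipeline on 3 of the 8 datanodes, and every DataXceiver thread holds several descriptors at once (client socket, mirror socket, block and checksum files, plus the temporary selector's epoll and pipe descriptors during connect).
{noformat}
// Back-of-envelope estimate only; replication factor and fds-per-thread are assumptions.
public class FdEstimate {
    public static void main(String[] args) {
        int streams = 2000;       // 50 clients x 40 streams each (from the report)
        int datanodes = 8;
        int replication = 3;      // assumed default dfs.replication
        int fdsPerXceiver = 7;    // assumed: 2 sockets + 2 files + ~3 for a temporary selector
        int xceiversPerNode = streams * replication / datanodes; // = 750
        System.out.println("DataXceiver threads per datanode ~ " + xceiversPerNode);
        System.out.println("peak fds per datanode ~ " + (xceiversPerNode * fdsPerXceiver)
                + ", well beyond the 1024 hard limit");
    }
}
{noformat}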
> DataXceiver: java.io.IOException: Too many open files
> -----------------------------------------------------
>
> Key: HADOOP-3051
> URL: https://issues.apache.org/jira/browse/HADOOP-3051
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.17.0
> Reporter: André Martin
>
> I just ran an experiment with the latest available nightly build (hadoop-2008-03-15),
> and after 2 minutes I'm getting tons of "java.io.IOException: Too many open
> files" exceptions, as shown here:
> {noformat} 2008-03-19 20:08:09,303 ERROR org.apache.hadoop.dfs.DataNode:
> 141.30.xxx.xxx:50010:DataXceiver: java.io.IOException: Too many open files
> at sun.nio.ch.IOUtil.initPipe(Native Method)
> at sun.nio.ch.EPollSelectorImpl.<init>(Unknown Source)
> at sun.nio.ch.EPollSelectorProvider.openSelector(Unknown Source)
> at sun.nio.ch.Util.getTemporarySelector(Unknown Source)
> at sun.nio.ch.SocketAdaptor.connect(Unknown Source)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1114)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:956)
> at java.lang.Thread.run(Unknown Source){noformat}
> I ran the same experiment with the same high workload (50 dfs clients with 40
> streams each, concurrently writing files to an 8-node DFS cluster) against the
> 0.16.1 release, and no exception was thrown. So it looks like a bug to me...
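The stack trace above points at sun.nio.ch.Util.getTemporarySelector, i.e. each connect in writeBlock briefly spins up an epoll selector (an epoll instance plus a wakeup pipe, roughly three descriptors). Here is a minimal standalone demo of that mechanism, not Hadoop code, just to reproduce the same IOException once the process fd limit is reached:
{noformat}
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

// Standalone demo (not Hadoop code): every open NIO Selector on Linux holds an
// epoll fd plus a wakeup pipe, so keeping selectors open eventually fails with
// the same "Too many open files" IOException seen in the DataXceiver trace.
public class SelectorFdDemo {
    public static void main(String[] args) {
        List<Selector> selectors = new ArrayList<Selector>();
        try {
            while (true) {
                selectors.add(Selector.open()); // ~3 fds each; never released here
            }
        } catch (IOException e) {
            System.err.println("Failed after " + selectors.size() + " selectors: " + e);
        } finally {
            for (Selector s : selectors) {
                try { s.close(); } catch (IOException ignored) { }
            }
        }
    }
}
{noformat}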