Hi Experts,

We failed to run an MR job that accesses Hive, because HDFS was unable to create a new block during the reduce phase. The exceptions:

1) In the task log:
hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block

2) In the HDFS DataNode log:
DataXceiveServer: IOException due to: java.io.IOException: Too many open files
... ...
at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same time, we modified /etc/security/limits.conf to raise the nofile limit for the mapred user to 1048576. But the issue still happens. Any suggestions? Thanks a lot!
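
For reference, here is roughly what we applied (only the property name and the two values above are exact; the exact shape of the limits.conf entries, with both soft and hard limits, is our assumption of the usual form):

In hdfs-site.xml:
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8196</value>
  </property>

In /etc/security/limits.conf:
  mapred  soft  nofile  1048576
  mapred  hard  nofile  1048576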
