[https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040209#comment-16040209]
Hudson commented on HDFS-11711:
-------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11836 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11836/])
HDFS-11711. DN should not delete the block On "Too many open files" (brahma: rev 1869e1771c7eeea46ccb822ce6f7081d994bb12c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>
> Key: HDFS-11711
> URL: https://issues.apache.org/jira/browse/HDFS-11711
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Priority: Critical
> Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch,
> HDFS-11711-004.patch, HDFS-11711-branch-2-002.patch,
> HDFS-11711-branch-2-003.patch, HDFS-11711.patch
>
>
> *We observed the following scenario in one of our customer environments:*
> * While the job client was writing {{"job.xml"}}, there were pipeline
> failures and the file ended up written to only one DN.
> * When the mapper later read {{"job.xml"}}, that DN hit
> {{"Too many open files"}} (the system exceeded its open file descriptor
> limit) and the block got deleted. As a result the mapper could not read the
> file and the job failed. (A sketch of the fix follows this list.)
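
The gist of the change in BlockSender.java appears to be: when opening the
block throws an IOException, check whether it is an EMFILE-style ("Too many
open files") failure before treating the replica as suspect. Below is a
minimal sketch of that idea; {{SuspectBlockCheck}}, {{openBlockFile}} and
{{markSuspectBlock}} are hypothetical stand-ins, not the actual DataNode
code:

{code:java}
import java.io.FileInputStream;
import java.io.IOException;

// Hypothetical sketch: distinguish file-descriptor exhaustion from possible
// on-disk corruption before reporting a replica as bad.
public class SuspectBlockCheck {

    /** True when the failure is EMFILE ("Too many open files"), i.e. a
     *  node-level resource problem, not evidence of a bad replica. */
    static boolean isFdExhaustion(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Too many open files");
    }

    static void openBlockFile(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            // ... serve block data to the client ...
        } catch (IOException e) {
            if (!isFdExhaustion(e)) {
                // The read failure may indicate real corruption, so queue
                // the replica for verification (which may delete it).
                markSuspectBlock(path);
            }
            // On EMFILE just propagate the error: the on-disk data is
            // likely intact, so the block must not be deleted.
            throw e;
        }
    }

    // Stand-in for whatever corruption-reporting path the DataNode takes.
    static void markSuspectBlock(String path) {
        System.err.println("Queueing " + path + " for block verification");
    }
}
{code}

Matching on the exception message is admittedly brittle, but a check this
narrow keeps a DataNode that has merely run out of file descriptors from
deleting replicas whose on-disk data is still intact.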