[
https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16010411#comment-16010411
]
Vinayakumar B commented on HDFS-11711:
--------------------------------------
Changes look good to me. +1.
Also, this should be marked as critical, since replica deletion may lead to
missing blocks if the other nodes are not available.
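For context, a minimal sketch of the idea behind the fix (the class and
method names here are hypothetical illustrations, not the actual patch):
before invalidating a replica on a read-side IOException, the DN should
distinguish a genuinely missing replica file (e.g. FileNotFoundException)
from transient resource exhaustion such as "Too many open files".
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

public class ReplicaErrorClassifier {
  /**
   * Returns true only when the IOException indicates the replica file
   * itself is missing, so it is safe to report the block as bad and
   * delete it. Resource-exhaustion errors (e.g. EMFILE, "Too many open
   * files") must NOT delete the replica, since it may be the last copy.
   */
  public static boolean shouldInvalidateReplica(IOException e) {
    if (e instanceof FileNotFoundException) {
      return true; // replica file really is gone
    }
    String msg = e.getMessage();
    if (msg != null && msg.contains("Too many open files")) {
      return false; // transient FD exhaustion; keep the replica
    }
    return false; // be conservative: never delete on unknown errors
  }
}
{code}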
> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>
> Key: HDFS-11711
> URL: https://issues.apache.org/jira/browse/HDFS-11711
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-11711.patch
>
>
> *Seen the following scenario in one of our customer environments:*
> * While the job client was writing {{"job.xml"}}, there were pipeline
> failures, so the block was written to only one DN.
> * When the mapper read {{"job.xml"}}, the DN hit {{"Too many open files"}}
> (as the system exceeded its open-file limit) and the block got deleted.
> Hence the mapper failed to read and the job failed.