[ https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038892#comment-16038892 ]

Rushabh S Shah commented on HDFS-11711:
---------------------------------------

[~brahma]: the latest patch looks good.
There is one checkstyle warning in the branch-2 patch and one more in the
trunk patch that need to be addressed. I wouldn't submit a new patch and
waste build resources just for that.
The committer can fix them while checking in.
I verified that the tests that failed in the branch-2 patch pass locally for me.
[~brahma]: can you verify that the test failures from the trunk patch are not
related to your patch?
Other than that, +1 (non-binding) from me.
Thanks!

> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>
>                 Key: HDFS-11711
>                 URL: https://issues.apache.org/jira/browse/HDFS-11711
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>         Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch, 
> HDFS-11711-branch-2-002.patch, HDFS-11711.patch
>
>
>  *We have seen the following scenario in one of our customer environments:* 
> * While the job client was writing {{"job.xml"}}, there were pipeline failures 
> and the file ended up written to only one DN.
> * When the mapper later read {{"job.xml"}}, the DN hit {{"Too many open files"}} 
> (the system file-descriptor limit was exceeded) and the block got deleted. Hence 
> the mapper failed to read the file and the job failed. A rough sketch of the 
> intended guard is shown after this list.
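>
> The intent is that the DN should distinguish file-descriptor exhaustion from a 
> genuinely missing or corrupt replica before invalidating the block. A minimal 
> sketch of such a check (not the committed patch; the class and method names 
> below are hypothetical):
> {code:java}
> // Sketch only: decide whether an IOException hit while serving a replica
> // justifies deleting it. "Too many open files" (EMFILE) means the block
> // file is fine and the DN merely ran out of file descriptors.
> import java.io.FileNotFoundException;
> import java.io.IOException;
>
> class ReplicaErrorClassifier {
>   private static final String TOO_MANY_OPEN_FILES = "Too many open files";
>
>   /** Returns true only when the failure suggests the replica is genuinely bad. */
>   static boolean shouldDeleteReplica(IOException e) {
>     String msg = e.getMessage();
>     if (msg != null && msg.contains(TOO_MANY_OPEN_FILES)) {
>       return false; // transient resource exhaustion: keep the block
>     }
>     // A plain FileNotFoundException with no EMFILE hint still means the
>     // replica is missing on disk, so deletion/reporting is justified.
>     return e instanceof FileNotFoundException;
>   }
> }
> {code}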



