[ https://issues.apache.org/jira/browse/HDFS-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daryn Sharp updated HDFS-8402:
------------------------------
    Attachment: HDFS-8402.patch

Prior to blockId checking, i.e. path-based scans, fsck determined its exit 
code based on whether the last line of output contained HEALTHY or CORRUPT.  
That doesn't make sense when displaying multiple blocks with multiple 
storages.  Modified the NN's blockId scans to return a final status line 
similar to path-based scans.  Removed the recently added logic that flagged 
blocks with decommissioning or decommissioned nodes as an error.
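
For illustration, a minimal sketch (hypothetical names, not the actual 
patch) of deriving the exit code from an aggregate over all checked blocks 
and emitting one final status line, as path-based scans do; the exit-code 
convention here is illustrative:

{code:java}
import java.util.List;

public class BlockIdCheckSketch {
  static final String HEALTHY = "HEALTHY";
  static final String CORRUPT = "CORRUPT";

  /** Hypothetical per-block result collected while scanning each blockId. */
  static class BlockResult {
    final String blockId;
    final boolean corrupt;
    BlockResult(String blockId, boolean corrupt) {
      this.blockId = blockId;
      this.corrupt = corrupt;
    }
  }

  /** Append one final status line and derive the exit code from it. */
  static int finishScan(List<BlockResult> results, StringBuilder out) {
    boolean anyCorrupt = false;
    for (BlockResult r : results) {
      anyCorrupt |= r.corrupt;
    }
    // One final line, as path-based scans emit, instead of letting the
    // last displayed block/storage decide the status.
    out.append("Fsck on blockIds is ")
       .append(anyCorrupt ? CORRUPT : HEALTHY).append('\n');
    return anyCorrupt ? 1 : 0;
  }
}
{code}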

The real motivation for this patch is to use {{bm.getStorages(block)}} 
instead of directly accessing the storages.  This changed the iteration 
order of the storages, which broke the tests: they were specifically coded 
(grumble) to ensure the last displayed storage was in the state each test 
expected.
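
Roughly what the iteration looks like after the change (a sketch assuming 
{{bm.getStorages(block)}} yields an {{Iterable<DatanodeStorageInfo>}}, per 
the comment above; iteration order is not guaranteed):

{code:java}
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;

class StorageIterationSketch {
  // Go through the BlockManager rather than reaching into the block's
  // storage list directly, so callers don't depend on storage order.
  static void printStorages(BlockManager bm, Block block, StringBuilder out) {
    for (DatanodeStorageInfo storage : bm.getStorages(block)) {
      out.append(storage.getDatanodeDescriptor())
         .append(' ').append(storage.getState()).append('\n');
    }
  }
}
{code}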

> Fsck exit codes are not reliable
> --------------------------------
>
>                 Key: HDFS-8402
>                 URL: https://issues.apache.org/jira/browse/HDFS-8402
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>         Attachments: HDFS-8402.patch
>
>
> HDFS-6663 added the ability to check specific blocks.  The exit code is 
> derived non-deterministically from the state (corrupt, healthy, etc.) of 
> the last displayed block's last storage location, instead of from whether 
> any of the checked blocks' storages are corrupt.  Blocks with 
> decommissioning or decommissioned nodes should not be flagged as an error.
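
A hypothetical illustration (not HDFS code) of the unreliable pattern the 
description refers to, where the exit code follows whatever the last emitted 
line says, so the result depends on the order blocks and storages happen to 
print in:

{code:java}
class LastLineExitCode {
  static int exitCodeFromOutput(String fsckOutput) {
    String[] lines = fsckOutput.trim().split("\n");
    String last = lines[lines.length - 1];
    // A healthy final line masks corrupt blocks printed earlier.
    return last.contains("CORRUPT") ? 1 : 0;
  }
}
{code}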


