Hi Ascot!

Just out of curiosity, which version of Hadoop are you using?

fsck has some other useful options (e.g. -blocks prints the block report
too, and -list-corruptfileblocks prints the list of missing blocks and the
files they belong to). Since your report shows 60 files currently being
written, I suspect you may also want to specify the -openforwrite option.
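
Roughly, the invocations look like this ("/your/path" is just a
placeholder; you may also need to run these as the HDFS superuser):

  # full report with per-file block listings and their locations
  hdfs fsck /your/path -files -blocks -locations

  # just the corrupt/missing blocks and the files they belong to
  hdfs fsck /your/path -list-corruptfileblocks

  # include files currently open for write in the check
  hdfs fsck /your/path -openforwrite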

In any case, missing blocks are a pretty bad symptom: there's a high
likelihood that you've lost data. If you can't find the blocks on any of
the datanodes, you'll want to delete the affected files on HDFS and
recreate them (however they were originally created). In my experience,
this can happen with files that were never properly closed; in older
versions it used to occur when an rsync over HDFS NFS / HDFS FUSE was
cancelled or failed.
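
If it comes to cleanup, fsck itself can help; a rough sketch
(double-check the affected file list first: -move only quarantines files
under /lost+found, but -delete is destructive):

  # move files with corrupt/missing blocks into /lost+found
  hdfs fsck /your/path -move

  # or permanently delete them once you're sure the data is gone
  hdfs fsck /your/path -delete

After that, re-ingest the files from their original source.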

HTH
Ravi



On Sun, Feb 12, 2017 at 4:15 AM, Ascot Moss <ascot.m...@gmail.com> wrote:

> Hi,
>
> After running 'hdfs fsck /blocks' to check the cluster, I got
> 'Missing replicas:              441 (0.24602923 %)'
>
> How to fix HDFS missing replicas?
> Regards
>
>
>
>
> (detailed output)
>
> Status: HEALTHY
>
>  Total size:    3375617914739 B (Total open files size: 68183613174 B)
>
>  Total dirs:    2338
>
>  Total files:   39960
>
>  Total symlinks:                0 (Files currently being written: 60)
>
>  Total blocks (validated):      59493 (avg. block size 56739749 B) (Total
> open file blocks (not validated): 560)
>
>  Minimally replicated blocks:   59493 (100.0 %)
>
>  Over-replicated blocks:        0 (0.0 %)
>
>  Under-replicated blocks:       111 (0.18657658 %)
>
>  Mis-replicated blocks:         0 (0.0 %)
>
>  Default replication factor:    3
>
>  Average block replication:     3.0054965
>
>  Corrupt blocks:                0
>
>  Missing replicas:              441 (0.24602923 %)
>
>  Number of data-nodes:          7
>
>  Number of racks:               1
>
>
>
