[ 
https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11160:
-----------------------------------
    Description: 
Due to a race condition initially reported in HDFS-6804, VolumeScanner may 
erroneously detect good replicas as corrupt. This is serious because in some 
cases it results in data loss if all replicas are declared corrupt. This bug is 
especially prominent when there are a lot of append requests via HttpFs/WebHDFS.

We are investigating an incident that caused a very high block corruption rate 
in a relatively small cluster. Initially, we thought HDFS-11056 was to blame. 
However, after applying HDFS-11056, we are still seeing VolumeScanner report 
corrupt replicas.

It turns out that if a replica is being appended to while VolumeScanner is 
scanning it, VolumeScanner may compare the new checksum against the old data, 
causing a checksum mismatch.
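
To make the interleaving concrete, here is a minimal, self-contained sketch in 
plain Java. The class, fields, and method names below are invented for 
illustration only; this is not the actual VolumeScanner or FsDatasetImpl code.

{code:java}
import java.util.zip.CRC32;

// Models a replica as a growable byte array (the block file) plus a CRC32
// over all of its bytes (standing in for the last partial chunk's checksum).
public class ScanAppendRace {
  static volatile byte[] data = "old data".getBytes();
  static volatile long lastChunkChecksum = crcOf(data, data.length);

  static long crcOf(byte[] b, int len) {
    CRC32 crc = new CRC32();
    crc.update(b, 0, len);
    return crc.getValue();
  }

  // Writer path: append bytes, then update the stored checksum.
  static synchronized void append(byte[] extra) {
    byte[] grown = new byte[data.length + extra.length];
    System.arraycopy(data, 0, grown, 0, data.length);
    System.arraycopy(extra, 0, grown, data.length, extra.length);
    lastChunkChecksum = crcOf(grown, grown.length);
    data = grown;
  }

  // Scanner path (buggy): reads the data and the checksum at different times.
  // If an append lands in between, the checksum covers more bytes than the
  // scanner verifies, so a perfectly good replica looks corrupt.
  static void scan() throws InterruptedException {
    byte[] snapshot = data;                     // old data
    Thread.sleep(50);                           // window in which append() runs
    long checksumSeen = lastChunkChecksum;      // new checksum
    if (crcOf(snapshot, snapshot.length) != checksumSeen) {
      System.out.println("false positive: good replica reported as corrupt");
    }
  }

  public static void main(String[] args) throws Exception {
    Thread scanner = new Thread(() -> {
      try { scan(); } catch (InterruptedException ignored) { }
    });
    scanner.start();
    Thread.sleep(10);        // let the scanner take its data snapshot first
    append(" plus appended bytes".getBytes());
    scanner.join();
  }
}
{code}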

I have a unit test that reproduces the error and will attach it later. A quick 
and simple fix is to hold the FsDatasetImpl lock while reading the checksum 
from disk.
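
In terms of the sketch above, the fix is to take both reads under the same lock 
the writer holds, so the data and its checksum always describe the same state. 
Synchronizing on the class here is only a stand-in for the FsDatasetImpl lock:

{code:java}
// Scanner path (fixed): snapshot the data and its checksum atomically, under
// the same lock that append() holds (stand-in for the FsDatasetImpl lock).
static void scanFixed() {
  byte[] snapshot;
  long checksumSeen;
  synchronized (ScanAppendRace.class) {
    snapshot = data;
    checksumSeen = lastChunkChecksum;
  }
  if (crcOf(snapshot, snapshot.length) != checksumSeen) {
    System.out.println("replica reported as corrupt");
  } else {
    System.out.println("replica verifies cleanly");
  }
}
{code}

Calling scanFixed() instead of scan() in the sketch's main() makes the false 
positive disappear, since the writer can no longer slip in between the two 
reads.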


> VolumeScanner reports write-in-progress replicas as corrupt incorrectly
> -----------------------------------------------------------------------
>
>                 Key: HDFS-11160
>                 URL: https://issues.apache.org/jira/browse/HDFS-11160
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>         Environment: CDH5.7.4
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>         Attachments: HDFS-11160.001.patch, HDFS-11160.002.patch, 
> HDFS-11160.reproduce.patch
>


