[ 
https://issues.apache.org/jira/browse/HBASE-28065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28065:
------------------------------
    Fix Version/s:     (was: 4.0.0-alpha-1)

> Corrupt HFile data is mishandled in several cases
> -------------------------------------------------
>
>                 Key: HBASE-28065
>                 URL: https://issues.apache.org/jira/browse/HBASE-28065
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 2.5.2
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>            Priority: Major
>             Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>
>
> While riding out a spate of HDFS data corruption issues, we've observed 
> several places in the read path that do not fall back to HDFS checksum 
> verification appropriately. These failures manifest during client reads and 
> during compactions. Sometimes the failure is detected by the fallback 
> {{verifyOnDiskSizeMatchesHeader}}, sometimes we attempt to allocate a buffer 
> with a negative size, and sometimes we read through to a failure from block 
> decompression.
> After studying the code, I think that all three cases arise from using a 
> block header that was read without checksum validation.
> I'll post the stack traces in the comments. I'm not sure whether we'll want 
> a single patch or multiple.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
