[ https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13248773#comment-13248773 ]

Lars Hofhansl commented on HBASE-5720:
--------------------------------------

Yeah, I poked around for a few minutes. It seems we need to pass the header at 
the time we do the encoding, rather than storing it in the EncodingContext. 
The changes for that are not entirely trivial.
An ugly way out might be to add a setter for the header on the EncodingContext 
and call it once the header is known, but that is too nasty and error prone. 
Maybe Dhruba has an idea...?
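
For illustration only, a minimal sketch of the two options, using simplified, 
hypothetical stand-ins rather than the real HBase classes:

    import java.nio.ByteBuffer;

    class EncodingHeaderSketch {

        // Hypothetical stand-in for the real encoding context class.
        static class EncodingContext {
            byte[] dummyHeader;

            // Option 2 (the "ugly way out"): a setter that must be called once
            // the header is known; easy to forget or to call too late, hence
            // error prone.
            void setDummyHeader(byte[] header) {
                this.dummyHeader = header;
            }
        }

        // Option 1: pass the header explicitly at encoding time, so whoever
        // knows whether the file carries checksums supplies the right header;
        // the context itself no longer needs to carry it.
        static ByteBuffer encodeDataBlock(ByteBuffer in, byte[] dummyHeader,
                                          EncodingContext ctx) {
            ByteBuffer out = ByteBuffer.allocate(dummyHeader.length + in.remaining());
            out.put(dummyHeader);     // header sized for the actual on-disk format
            out.put(in.duplicate());  // encoded key/values would follow here
            out.flip();
            return out;
        }
    }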

At this point I think I want to commit this and close it for 0.94, and file 
another jira for trunk. Any objections to that?
                
> HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
> checksums
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-5720
>                 URL: https://issues.apache.org/jira/browse/HBASE-5720
>             Project: HBase
>          Issue Type: Bug
>          Components: io, regionserver
>    Affects Versions: 0.94.0
>            Reporter: Matt Corgan
>            Priority: Blocker
>             Fix For: 0.94.0
>
>         Attachments: 5720v4.txt, 5720v4.txt, 5720v4.txt, HBASE-5720-v1.patch, 
> HBASE-5720-v2.patch, HBASE-5720-v3.patch
>
>
> When reading a 0.92 HFile without checksums, encoding it, and storing it in 
> the block cache, HFileDataBlockEncoderImpl always allocates a dummy header 
> appropriate for checksums even though there are none.  This corrupts the 
> byte[].
> Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case, 
> which I think is the desired behavior.
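
For context, a minimal sketch of what the patch is aiming at; apart from 
DUMMY_HEADER_NO_CHECKSUM, the names and sizes below are illustrative 
placeholders, not the actual HFileDataBlockEncoderImpl code:

    class DummyHeaderSketch {
        // 24 bytes for the pre-checksum (0.92) block header, 33 bytes once the
        // checksum fields were added; treat both sizes as illustrative here.
        static final byte[] DUMMY_HEADER_WITH_CHECKSUM = new byte[33];
        static final byte[] DUMMY_HEADER_NO_CHECKSUM = new byte[24];

        // Pick the dummy header that matches the block's on-disk layout
        // instead of always assuming the checksum-era header size.
        static byte[] chooseDummyHeader(boolean fileHasChecksums) {
            return fileHasChecksums ? DUMMY_HEADER_WITH_CHECKSUM
                                    : DUMMY_HEADER_NO_CHECKSUM;
        }
    }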
