[ https://issues.apache.org/jira/browse/HBASE-3474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HBASE-3474:
-------------------------------

    Attachment: hbase-3474.txt

Hi Ashish. I took your patch and cleaned up the unit tests a bit to share some 
of the mocking code. I also added a {{getCompressionAlgorithm}} call to 
{{HFile.Reader}} as you suggested.
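
For reference, the new accessor is just a getter exposing the reader's 
compression codec. A minimal sketch, assuming the 0.92-era {{HFile.Reader}} 
internals (the field name {{compressAlgo}} is an assumption, not the patch 
verbatim):

{code:java}
// Inside HFile.Reader -- sketch only; "compressAlgo" is assumed to be
// the field where the reader keeps its Compression.Algorithm.
public Compression.Algorithm getCompressionAlgorithm() {
  return this.compressAlgo;
}
{code}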

I removed one of the unit tests since it seemed to be entirely subsumed by 
the other one. I also updated the unit tests to cover edge cases such as a 
table with zero column families (0-CF) and one with a single column family 
(1-CF).
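
To illustrate the shared mocking, here is a rough sketch of the kind of 
helper the tests could share; the helper name and the Mockito approach are 
assumptions for illustration, not the patch's exact code:

{code:java}
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.hfile.Compression;
import org.mockito.Mockito;

// Builds a mock HTable whose descriptor maps each family to the given
// compression algorithm; the 0-CF and 1-CF cases just vary the map.
private HTable createMockTable(
    Map<String, Compression.Algorithm> familyToCompression) throws IOException {
  HTableDescriptor desc = new HTableDescriptor("table");
  for (Map.Entry<String, Compression.Algorithm> e : familyToCompression.entrySet()) {
    HColumnDescriptor family = new HColumnDescriptor(e.getKey());
    family.setCompressionType(e.getValue());
    desc.addFamily(family);
  }
  HTable table = Mockito.mock(HTable.class);
  Mockito.doReturn(desc).when(table).getTableDescriptor();
  return table;
}
{code}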

Can you check over this updated patch and make sure it looks good to you?

> HFileOutputFormat to use column family's compression algorithm
> --------------------------------------------------------------
>
>                 Key: HBASE-3474
>                 URL: https://issues.apache.org/jira/browse/HBASE-3474
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>    Affects Versions: 0.92.0
>         Environment: All
>            Reporter: Ashish Shinde
>             Fix For: 0.92.0
>
>         Attachments: hbase-3474.txt, patch3474.txt, patch3474.txt, 
> patch3474.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> HFileOutputFormat currently creates HFile writers using the compression 
> algorithm set in the "hfile.compression" configuration property, which 
> defaults to no compression. The code does not take into account the 
> compression algorithm configured for each of the table's column families. 
> As a result, bulk-loaded tables are not compressed until a major compaction 
> is run on them. This could be fixed by consulting the column family 
> descriptors when creating HFile writers.
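
The shape of the fix described above is roughly: build a per-family 
compression map from the table descriptor and consult it when opening each 
family's writer, rather than using one global setting. A sketch against the 
0.92-era APIs (names like {{createFamilyCompressionMap}} and 
{{getNewWriter}} are illustrative, not the patch verbatim):

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.Compression;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.util.Bytes;

// Map each column family name to the algorithm from its descriptor.
private static Map<byte[], Compression.Algorithm> createFamilyCompressionMap(
    HTableDescriptor tableDescriptor) {
  Map<byte[], Compression.Algorithm> compressionMap =
      new TreeMap<byte[], Compression.Algorithm>(Bytes.BYTES_COMPARATOR);
  for (HColumnDescriptor family : tableDescriptor.getFamilies()) {
    compressionMap.put(family.getName(), family.getCompression());
  }
  return compressionMap;
}

// When the RecordWriter opens a writer for a family, pick that family's
// algorithm, falling back to the old global default of no compression.
private static HFile.Writer getNewWriter(FileSystem fs, Path path,
    int blocksize, Map<byte[], Compression.Algorithm> compressionMap,
    byte[] family) throws IOException {
  Compression.Algorithm compression = compressionMap.get(family);
  if (compression == null) {
    compression = Compression.Algorithm.NONE;
  }
  return new HFile.Writer(fs, path, blocksize, compression,
      KeyValue.KEY_COMPARATOR);
}
{code}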

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
