[
https://issues.apache.org/jira/browse/HBASE-3474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12989061#comment-12989061
]
Todd Lipcon commented on HBASE-3474:
------------------------------------
Hey Ashish. This is looking good. A few more comments before I think it's ready
to commit:
- If you make configureCompression and createFamilyCompressionMap
package-private instead of private, you could add a nice unit test for them.
You can use Mockito to make a mock HTable that returns the HColumnDescriptors
you want for the sake of the test (a rough sketch follows below the list) - if
you need a hand with this you can catch me on #hbase IRC
- Probably a good idea to put "families.compression" in a constant, and maybe
change it to be more obviously scoped - i.e. something like
"hbase.hfileoutputformat.families.compression"
- When you upload a patch, you should take the diff from the root of the SVN
trunk/ - so that it applies with "patch -p0" from inside trunk/.
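For the first two points, here's a rough sketch of the kind of test I have in
mind. The method signatures (configureCompression(HTable, Configuration) and
createFamilyCompressionMap(Configuration)), the config key, and the test class
name are assumptions on my part - adjust to whatever your patch actually has:

{code:java}
package org.apache.hadoop.hbase.mapreduce;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.hfile.Compression;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Test;

public class TestHFileOutputFormatCompression {

  @Test
  public void testCompressionRoundTrip() throws Exception {
    // Two families with different algorithms so the round trip is meaningful.
    HTableDescriptor htd = new HTableDescriptor("testTable");
    HColumnDescriptor gzFamily = new HColumnDescriptor(Bytes.toBytes("a"));
    gzFamily.setCompressionType(Compression.Algorithm.GZ);
    HColumnDescriptor plainFamily = new HColumnDescriptor(Bytes.toBytes("b"));
    plainFamily.setCompressionType(Compression.Algorithm.NONE);
    htd.addFamily(gzFamily);
    htd.addFamily(plainFamily);

    // Mock HTable that hands back the descriptor above - no cluster needed.
    HTable table = mock(HTable.class);
    when(table.getTableDescriptor()).thenReturn(htd);

    // Serialize the per-family settings into the conf (presumably under
    // "hbase.hfileoutputformat.families.compression"), then parse them back.
    Configuration conf = new Configuration();
    HFileOutputFormat.configureCompression(table, conf);
    Map<byte[], String> familyCompression =
        HFileOutputFormat.createFamilyCompressionMap(conf);

    assertEquals(Compression.Algorithm.GZ.getName(),
        familyCompression.get(Bytes.toBytes("a")));
    assertEquals(Compression.Algorithm.NONE.getName(),
        familyCompression.get(Bytes.toBytes("b")));
  }
}
{code}

Note the lookups by byte[] assume the map is built with Bytes.BYTES_COMPARATOR
(e.g. a TreeMap) - otherwise byte[] keys won't compare equal.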
Thanks!
> HFileOutputFormat to use column family's compression algorithm
> --------------------------------------------------------------
>
> Key: HBASE-3474
> URL: https://issues.apache.org/jira/browse/HBASE-3474
> Project: HBase
> Issue Type: Improvement
> Components: mapreduce
> Affects Versions: 0.92.0
> Environment: All
> Reporter: Ashish Shinde
> Fix For: 0.92.0
>
> Attachments: patch3474.txt, patch3474.txt
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> HFileOutputFormat currently creates HFile writers using a compression
> algorithm set by the configuration "hfile.compression", defaulting to no
> compression. The code does not take into account the compression algorithm
> configured for the table's column families. As a result, bulk-loaded tables
> are not compressed until a major compaction is run on them. This could be
> fixed by using the column family descriptors when creating HFile writers,
> as in the sketch below.
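> A minimal sketch of that idea, assuming the per-family algorithms have
> already been parsed out of the job configuration into a map (class, method,
> and parameter names here are illustrative, not the actual patch):
>
> {code:java}
> import java.io.IOException;
> import java.util.Map;
>
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.KeyValue;
> import org.apache.hadoop.hbase.io.hfile.Compression;
> import org.apache.hadoop.hbase.io.hfile.HFile;
> import org.apache.hadoop.hbase.regionserver.StoreFile;
>
> public class PerFamilyWriterSketch {
>   /**
>    * Open an HFile writer for one column family, preferring the family's
>    * own compression algorithm and falling back to no compression when
>    * none was recorded in the job configuration.
>    */
>   static HFile.Writer newFamilyWriter(FileSystem fs, Path familyDir,
>       byte[] family, Map<byte[], String> familyCompression, int blocksize)
>       throws IOException {
>     String compression = familyCompression.get(family);
>     if (compression == null) {
>       compression = Compression.Algorithm.NONE.getName();
>     }
>     return new HFile.Writer(fs, StoreFile.getUniqueFile(fs, familyDir),
>         blocksize, compression, KeyValue.KEY_COMPARATOR);
>   }
> }
> {code}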