[
https://issues.apache.org/jira/browse/HBASE-3474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12987693#action_12987693
]
Todd Lipcon commented on HBASE-3474:
------------------------------------
Thanks for taking this on, Ashish. A few comments on the patch:
- For createFamilyCompressionMap, specify in the javadoc that this is run
inside the task to deserialize the map back out of the Configuration.
- In the javadoc for configureCompression, I don't think there's any reason to
include @param table and @param conf since they're self-explanatory.
- In createFamilyCompressionMap, 'compression' is misspelled in one variable
name.
- The encoding you've chosen for this config value isn't the best, since we
currently do support ',' in a column family name. What about encoding the map
like a query string in a URL, e.g. foo=bar&baz=blah, and using
URLEncoder/URLDecoder to escape any '=' or '&' that might appear in a family
name? A rough sketch follows below.
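To make that concrete, here is a minimal sketch of the query-string encoding,
assuming a hypothetical config key name; the methods mirror
configureCompression/createFamilyCompressionMap above but are illustrative,
not the attached patch:

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

public class FamilyCompressionCodec {
  // Hypothetical config key; the real patch may name it differently.
  static final String COMPRESSION_CONF_KEY =
      "hbase.hfileoutputformat.families.compression";

  // Serialize family -> compression pairs as a URL-style query string,
  // escaping any '=' or '&' inside a family name via URLEncoder.
  static void configureCompression(Map<String, String> familyToCompression,
      Configuration conf) throws UnsupportedEncodingException {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : familyToCompression.entrySet()) {
      if (sb.length() > 0) {
        sb.append('&');
      }
      sb.append(URLEncoder.encode(e.getKey(), "UTF-8"));
      sb.append('=');
      sb.append(URLEncoder.encode(e.getValue(), "UTF-8"));
    }
    conf.set(COMPRESSION_CONF_KEY, sb.toString());
  }

  // Run inside the task to deserialize the map back out of the Configuration.
  static Map<String, String> createFamilyCompressionMap(Configuration conf)
      throws UnsupportedEncodingException {
    Map<String, String> familyToCompression = new HashMap<String, String>();
    String encoded = conf.get(COMPRESSION_CONF_KEY, "");
    for (String pair : encoded.split("&")) {
      if (pair.isEmpty()) {
        continue;
      }
      String[] kv = pair.split("=", 2);
      familyToCompression.put(URLDecoder.decode(kv[0], "UTF-8"),
          kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "");
    }
    return familyToCompression;
  }
}
{code}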
> HFileOutputFormat to use column family's compression algorithm
> --------------------------------------------------------------
>
> Key: HBASE-3474
> URL: https://issues.apache.org/jira/browse/HBASE-3474
> Project: HBase
> Issue Type: Improvement
> Components: mapreduce
> Affects Versions: 0.92.0
> Environment: All
> Reporter: Ashish Shinde
> Fix For: 0.92.0
>
> Attachments: patch3474.txt
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> HFileOutputFormat currently creates HFile writers using the compression
> algorithm set in the configuration key "hfile.compression", which defaults
> to no compression. The code does not take into account the compression
> algorithm configured for the table's column families. As a result,
> bulk-loaded tables are not compressed until a major compaction is run on
> them. This could be fixed by consulting the column family descriptors when
> creating HFile writers.
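A minimal sketch of that fix direction, assuming the per-family map
deserialized above; compressionFor is a hypothetical helper, not the attached
patch:

{code}
import java.util.Map;

import org.apache.hadoop.hbase.io.hfile.Compression;

public class CompressionLookup {
  // Hypothetical helper: resolve a family's compression from the
  // deserialized map, keeping the current default (no compression)
  // as the fallback for unconfigured families.
  static String compressionFor(Map<String, String> familyToCompression,
      String family) {
    String compression = familyToCompression.get(family);
    return compression != null ? compression
        : Compression.Algorithm.NONE.getName();
  }
}
{code}

The returned name would then be passed to the HFile writer for that family's
files, in place of the single conf-wide "hfile.compression" value read at
setup time.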
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.