-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/821/#review1163
-----------------------------------------------------------



src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
<http://review.cloudera.org/r/821/#comment4000>

    This will throw an NPE if it's not set, right?
    Why not just have this return getCompression() when n == null? Then you 
    could drop the hasCompactionCompression() check down in StoreFile and 
    simplify things a bit.
    
    Also, why the redundant getCompactionCompression() and 
    getCompactionCompressionType()?
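    
    For illustration, a minimal sketch of that fallback (assuming the value 
    is stored under the COMPRESSION_COMPACT key and read via getValue(String); 
    the exact names in the patch may differ):
    
        // Sketch only: fall back to the family's regular compression when no
        // compaction-specific compression has been set, instead of NPE'ing.
        public Compression.Algorithm getCompactionCompression() {
          String n = getValue("COMPRESSION_COMPACT");
          if (n == null) {
            // No separate compaction codec configured; reuse getCompression()
            // so StoreFile needs no hasCompactionCompression() check.
            return getCompression();
          }
          return Compression.Algorithm.valueOf(n.toUpperCase());
        }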


- Todd


On 2010-09-13 00:16:06, Andrew Purtell wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://review.cloudera.org/r/821/
> -----------------------------------------------------------
> 
> (Updated 2010-09-13 00:16:06)
> 
> 
> Review request for hbase.
> 
> 
> Summary
> -------
> 
> Support alternate compression for major compactions.
> 
> This is expected to be an uncommon configuration, so I did not pollute the 
> HColumnDescriptor constructor with the new option; instead I only added 
> convenience {get,set}ters.
> 
> 
> This addresses bug HBASE-2988.
>     http://issues.apache.org/jira/browse/HBASE-2988
> 
> 
> Diffs
> -----
> 
>   src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java 8e3bd53 
>   src/main/java/org/apache/hadoop/hbase/regionserver/Store.java 4f777f0 
>   src/main/ruby/hbase/admin.rb 82d7e54 
> 
> Diff: http://review.cloudera.org/r/821/diff
> 
> 
> Testing
> -------
> 
> Created a table with LZO compression, inserted 10GB of data, altered the 
> schema to set COMPRESSION_COMPACT to 'LZMA' (a custom LZMA Hadoop compression 
> plugin), initiated a major compaction, and confirmed the results by examining 
> the HFiles on the local fs with the UNIX file utility.
> 
> 
> Thanks,
> 
> Andrew
> 
>
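
For reference, below is a hypothetical Java sketch of the schema change 
described under Testing above, using the convenience setter from the Summary. 
The setCompactionCompressionType() name and the HBaseAdmin flow are 
assumptions rather than confirmed API, and GZ stands in for the custom LZMA 
codec, which is not part of the stock Compression.Algorithm enum.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.hfile.Compression;
    import org.apache.hadoop.hbase.util.Bytes;

    public class AlterCompactionCompression {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Fetch the current schema and set an alternate codec to be used
        // only for compactions (assumed setter name from the patch summary).
        HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes("t1"));
        HColumnDescriptor hcd = htd.getFamily(Bytes.toBytes("f1"));
        // GZ is a stand-in; the original test used a custom LZMA codec.
        hcd.setCompactionCompressionType(Compression.Algorithm.GZ);

        // Push the updated schema, then rewrite the store files by forcing
        // a major compaction.
        admin.disableTable("t1");
        admin.modifyTable(Bytes.toBytes("t1"), htd);
        admin.enableTable("t1");
        admin.majorCompact("t1");
      }
    }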
