Hi Meng,
Thank you very much for adding Snappy support. Could you please follow this
flow to submit your patch:
1. Open JIRA first:
https://issues.apache.org/jira/browse/KYLIN
2. Generate a patch against the 0.7-staging branch
3. Attach patch to that JIRA
Committers will review and merge your patch to the branch if there's no
issue.
For more detail, please refer to "How to contribute":
http://kylin.incubator.apache.org/development/howto_contribute.html
Thank you very much, and looking forward to your submission :)
Luke
Best Regards!
---------------------
Luke Han
On Sun, Aug 23, 2015 at 11:02 PM, [email protected] <[email protected]>
wrote:
> I have fixed this and submitted it to the master branch, is that right?
>
> I added kylin.hbase.default.compression.codec to kylin.properties and
> modified CreateHTableJob.java:
>
> #default compression codec for htable,snappy,lzo,gzip,lz4 available
> kylin.hbase.default.compression.codec=snappy
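>
> For reference, a minimal sketch of how the new property could be mapped to
> an HBase compression algorithm (the class and method names here are
> illustrative, not the exact code in the patch):
>
>     import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
>
>     public class CompressionCodecResolver {
>         // Maps the value of kylin.hbase.default.compression.codec to an HBase
>         // Compression.Algorithm; "none" or an unknown value means no compression.
>         public static Algorithm resolve(String codec) {
>             if (codec == null) {
>                 return Algorithm.NONE;
>             }
>             switch (codec.toLowerCase()) {
>             case "snappy":
>                 return Algorithm.SNAPPY;
>             case "lzo":
>                 return Algorithm.LZO;
>             case "gzip":
>                 return Algorithm.GZ;
>             case "lz4":
>                 return Algorithm.LZ4;
>             default:
>                 return Algorithm.NONE;
>             }
>         }
>     }
>
> CreateHTableJob could then call something like
> cf.setCompressionType(CompressionCodecResolver.resolve(codec)) instead of
> checking only for LZO support.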
>
> ------------------ Original Message ------------------
> From: liangmeng <[email protected]>;
> Date: 2015-08-19 13:38:02
> To: dev <[email protected]>;
> Cc: (none);
> Subject: kylin force to use lzo in hbase?
> The doc says LZO compression is not used by default, but in practice, if the
> cluster is configured with LZO, Kylin will use it. I reviewed the source code
> and found that Kylin decides whether to use LZO for the HBase table based on
> a compression test result, not on the user's configuration.
> Also, we prefer Snappy as the default compression; will Kylin support it?
>
> //////////////////////////////////////////////////
> This is the source code in CreateHTableJob.java:
>
> for (HBaseColumnFamilyDesc cfDesc : cubeDesc.getHBaseMapping().getColumnFamily()) {
>     HColumnDescriptor cf = new HColumnDescriptor(cfDesc.getName());
>     cf.setMaxVersions(1);
>
>     if (LZOSupportnessChecker.getSupportness()) {
>         logger.info("hbase will use lzo to compress data");
>         cf.setCompressionType(Algorithm.LZO);
>     } else {
>         logger.info("hbase will not use lzo to compress data");
>     }
>
>     cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
>     cf.setInMemory(false);
>     cf.setBlocksize(4 * 1024 * 1024); // set to 4MB
>     tableDesc.addFamily(cf);
> }
>
>
>
> public class LZOSupportnessChecker {
>     private static final Logger log = LoggerFactory.getLogger(LZOSupportnessChecker.class);
>
>     public static boolean getSupportness() {
>         try {
>             File temp = File.createTempFile("test", ".tmp");
>             CompressionTest.main(new String[] { "file://" + temp.getAbsolutePath(), "lzo" });
>         } catch (Exception e) {
>             log.error("Fail to compress file with lzo", e);
>             return false;
>         }
>         return true;
>     }
>
>     public static void main(String[] args) throws Exception {
>         System.out.println("LZO supported by current env? " + getSupportness());
>     }
> }
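>
> If Kylin keeps the runtime check, one option is to generalize it so that any
> configured codec can be verified the same way LZO is verified today. A minimal
> sketch (CodecSupportnessChecker is a hypothetical name, following the pattern
> of the existing LZOSupportnessChecker):
>
>     import java.io.File;
>
>     import org.apache.hadoop.hbase.util.CompressionTest;
>     import org.slf4j.Logger;
>     import org.slf4j.LoggerFactory;
>
>     public class CodecSupportnessChecker {
>         private static final Logger log = LoggerFactory.getLogger(CodecSupportnessChecker.class);
>
>         // Same approach as LZOSupportnessChecker: try to compress a temp file with
>         // the given codec name ("snappy", "lzo", "gz", "lz4", ...) and report
>         // whether the current environment supports it.
>         public static boolean isSupported(String codec) {
>             try {
>                 File temp = File.createTempFile("test", ".tmp");
>                 CompressionTest.main(new String[] { "file://" + temp.getAbsolutePath(), codec });
>                 return true;
>             } catch (Exception e) {
>                 log.error("Fail to compress file with " + codec, e);
>                 return false;
>             }
>         }
>
>         public static void main(String[] args) throws Exception {
>             System.out.println("snappy supported by current env? " + isSupported("snappy"));
>         }
>     }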