Cube building stopped at step 10 ("Create HTable"), and our cluster went down: many
region servers restarted! The relevant portion of kylin.log is as follows:

[2015-08-06 08:42:26,050][ERROR][org.apache.kylin.job.tools.LZOSupportnessChecker.getSupportness(LZOSupportnessChecker.java:38)] - Fail to compress file with lzo
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$1.buildCodec(Compression.java:131)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$1.getCodec(Compression.java:116)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.getCompressor(Compression.java:310)
at org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext.<init>(HFileBlockDefaultEncodingContext.java:92)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:690)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:117)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:109)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:97)
at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:393)
at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:118)
at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:148)
at org.apache.kylin.job.tools.LZOSupportnessChecker.getSupportness(LZOSupportnessChecker.java:36)
at org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:100)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$1.buildCodec(Compression.java:125)
... 22 more
[pool-7-thread-10]:[2015-08-06 08:42:26,052][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:104)] - hbase will not use lzo to compress data
2015-08-06 08:42:26,084 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
2015-08-06 08:42:26,085 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
2015-08-06 08:42:26,085 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
2015-08-06 08:42:26,085 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
[pool-7-thread-10]:[2015-08-06 08:42:26,091][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.getSplits(CreateHTableJob.java:161)] - 1 regions
[pool-7-thread-10]:[2015-08-06 08:42:26,091][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.getSplits(CreateHTableJob.java:162)] - 0 splits
2015-08-06 08:42:26,096 INFO  [pool-7-thread-10] zookeeper.ZooKeeper: Initiating client connection, connectString=node41.cluster-a.gdyd.com:2181,node22.cluster-a.gdyd.com:2181,node21.cluster-a.gdyd.com:2181,node20.cluster-a.gdyd.com:2181,node42.cluster-a.gdyd.com:2181 sessionTimeout=300000 watcher=catalogtracker-on-hconnection-0x66f0a71e, quorum=node41.cluster-a.gdyd.com:2181,node22.cluster-a.gdyd.com:2181,node21.cluster-a.gdyd.com:2181,node20.cluster-a.gdyd.com:2181,node42.cluster-a.gdyd.com:2181, baseZNode=/hbase
2015-08-06 08:42:26,097 INFO  [pool-7-thread-10] zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x66f0a71e connecting to ZooKeeper ensemble=node41.cluster-a.gdyd.com:2181,node22.cluster-a.gdyd.com:2181,node21.cluster-a.gdyd.com:2181,node20.cluster-a.gdyd.com:2181,node42.cluster-a.gdyd.com:2181
2015-08-06 08:42:26,097 DEBUG [pool-7-thread-10] catalog.CatalogTracker: Starting catalog tracker
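
For what it's worth, the ClassNotFoundException at the top of the trace can be checked outside of Kylin with a small classpath probe like the sketch below. The class name com.hadoop.compression.lzo.LzoCodec is taken directly from the trace; the wrapper class LzoClasspathCheck is just an illustrative name, not part of Kylin or HBase. Compiling and running it with the same classpath the Kylin job server uses should show whether the hadoop-lzo jar is visible at all:

// Minimal sketch, assuming it is compiled and run with the same classpath as the Kylin job server.
public class LzoClasspathCheck {
    public static void main(String[] args) {
        String codecClass = "com.hadoop.compression.lzo.LzoCodec"; // class named in the stack trace
        try {
            Class<?> codec = Class.forName(codecClass);
            java.security.CodeSource src = codec.getProtectionDomain().getCodeSource();
            // Print where the class was loaded from, if the location is known.
            System.out.println("Found " + codec.getName()
                    + (src != null ? " in " + src.getLocation() : ""));
        } catch (ClassNotFoundException e) {
            // Same failure as in the log: the hadoop-lzo codec is not on the classpath.
            System.out.println(codecClass + " is NOT on the classpath");
        }
    }
}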



Liang Meng (梁猛)
China Mobile Guangdong Company, Network Management Maintenance Center, Network Management Support Office
Tel: 13802880779
Email: [email protected] [email protected]
Address: 3/F North, Guangdong GoTone Building, No. 11 Zhujiang West Road, Zhujiang New Town, Guangzhou, Guangdong
Postal code: 510623
