Following your suggestion, I found this error in tomcat/log/kylin.log:
[ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:157)] - org.apache.hadoop.hbase.DoNotRetryIOException: file:/root/local/hdp-data/hbase/local/jars/tmp/.4cd8c51d-cc8f-480c-be7d-32a8442a7ce0.kylin-coprocessor-1.2-incubating-SNAPSHOT-3.jar.1448872036162.jar (No such file or directory) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: file:/root/local/hdp-data/hbase/local/jars/tmp/.4cd8c51d-cc8f-480c-be7d-32a8442a7ce0.kylin-coprocessor-1.2-incubating-SNAPSHOT-3.jar.1448872036162.jar (No such file or directory) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
    at org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: file:/root/local/hdp-data/hbase/local/jars/tmp/.4cd8c51d-cc8f-480c-be7d-32a8442a7ce0.kylin-coprocessor-1.2-incubating-SNAPSHOT-3.jar.1448872036162.jar (No such file or directory) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
    at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
    at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
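
For reference, the bypass the message mentions would, as far as I understand, be a setting like the one below in hbase-site.xml, though I suppose that only skips the sanity check and does not explain why the local coprocessor jar file is missing:

<!-- hbase-site.xml (inside <configuration>): bypasses the table sanity check
     named in the error; it does not restore the missing coprocessor jar -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>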
Could you help me figure out what is going wrong here?


[email protected]
 
From: hongbin ma
Date: 2015-11-27 17:28
To: dev
Subject: Re: create sample cube error at step 13 Create HTable
In that case, you should:
- disable snappy for Hive jobs by removing the compression configs in conf/kylin_hive_conf.xml
- disable snappy for MR jobs by removing the compression configs in conf/kylin_job_conf.xml (the typical entries are sketched below)
- disable snappy for the HBase table by removing "kylin.hbase.default.compression.codec=snappy" from conf/kylin.properties

You can also find more details about your error in tomcat/log/kylin.log.
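
Just as a sketch (the exact property names shipped in your conf files may differ by Kylin/Hadoop version), the snappy-related entries in conf/kylin_job_conf.xml usually look something like this:

<!-- conf/kylin_job_conf.xml: remove entries like these (or set the compress
     flags to false) if your cluster has no snappy native libraries -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

conf/kylin_hive_conf.xml typically carries similar entries (e.g. hive.exec.compress.output together with a SnappyCodec setting).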
 
On Fri, Nov 27, 2015 at 5:07 PM, sfwang <[email protected]> wrote:
 
> It is a long story. My Hadoop cluster doesn't support snappy by default, so
> if compression is used, it encounters an error at step 1#.
> I am a newbie and don't know how to set up snappy, so I tried to disable
> compression to avoid it.
 
 
-- 
Regards,
 
*Bin Mahone | 马洪宾*
Apache Kylin: http://kylin.io
Github: https://github.com/binmahone
