First, there is an issue with your HBase cluster; fix that before retrying. You can verify HBase is healthy by creating a new HTable manually.

Once HBase is back, discard the failed job in Kylin and rebuild the cube. Kylin should then return to normal.
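A quick way to perform that manual check is from the HBase shell; this is only a sketch, and the table name `kylin_smoke_test` is an arbitrary example, not anything Kylin uses:

```shell
# Sanity-check HBase by creating, inspecting, and dropping a throwaway
# table. If "create" hangs or the table never becomes available, the
# problem is in HBase/HDFS, not in Kylin.
hbase shell <<'EOF'
create 'kylin_smoke_test', 'cf'
describe 'kylin_smoke_test'
disable 'kylin_smoke_test'
drop 'kylin_smoke_test'
EOF
```

If this simple create fails the same way, the Kylin job error is just a symptom.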

On Fri, Aug 18, 2017 at 5:39 PM, [email protected] <[email protected]> wrote:

>
> Hello everyone:
>     I've been using Kylin for two months. Every time I created projects and
> models and built cubes, it worked without problems!
> But today I created a project and a model, and when I built the cube an
> error occurred at step 6: Create HTable.
> The specific error is as follows:
>
> java.lang.IllegalArgumentException: table KYLIN_CIT4CEENL7 created, but is not available due to some reasons
>       at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>       at org.apache.kylin.storage.hbase.steps.CubeHTableUtil.createHTable(CubeHTableUtil.java:106)
>       at org.apache.kylin.storage.hbase.steps.CreateHTableJob.run(CreateHTableJob.java:103)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>       at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
>       at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>       at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
>       at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>       at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> result code:2
>
>     Environment:
>           apache-kylin-2.0.0
>           Hadoop 2.6.0-cdh5.8.5
>           HBase 1.2.0-cdh5.8.5
>           Hive 1.1.0-cdh5.8.5
>
> I found the same problem reported on the Internet at http://apache-kylin.
> 74782.x6.nabble.com/an-error-occurred-when-build-a-sample-
> cube-at-step-5-create-HTable-td4102.html .
> It offered the following solution: edit $KYLIN_HOME/conf/kylin_job_conf.xml
> and remove all configuration entries related to compression (just grep for
> the keyword "compress"). To disable compression of HBase tables, open
> $KYLIN_HOME/conf/kylin.properties and remove the line starting with
> kylin.hbase.default.compression.codec.
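The grep step described above can be sketched as follows, assuming $KYLIN_HOME points at the Kylin install directory:

```shell
# Locate compression-related entries in the Kylin job config so they
# can be removed by hand; -n prints the line numbers to edit.
grep -n "compress" "$KYLIN_HOME/conf/kylin_job_conf.xml"

# Find the HBase compression codec line in kylin.properties.
grep -n "kylin.hbase.default.compression.codec" "$KYLIN_HOME/conf/kylin.properties"
```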
>
> I did that, but my cube still can't build successfully; the same error occurred.
>
> The problem in that thread was that Kylin uses Snappy compression by default; the CDH I'm using does support Snappy:
> [root@master hadoop-hdfs]# hadoop checknative -a
> 17/08/18 15:45:01 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
> 17/08/18 15:45:01 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
> Native library checking:
> hadoop:  true /opt/cloudera/parcels/CDH-5.8.5-1.cdh5.8.5.p0.5/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:    true /lib64/libz.so.1
> snappy:  true /opt/cloudera/parcels/CDH-5.8.5-1.cdh5.8.5.p0.5/lib/hadoop/lib/native/libsnappy.so.1
> lz4:     true revision:10301
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so
>
>
> Now I will give a more detailed description of the problem:
> -----------------------
> kylin.log
>
> 2017-08-18 13:49:00,802 INFO  [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] util.DeployCoprocessorCLI:229 : Add coprocessor on KYLIN_CIT4CEENL7
> 2017-08-18 13:49:00,812 INFO  [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] util.DeployCoprocessorCLI:209 : hbase table KYLIN_CIT4CEENL7 deployed with coprocessor.
> 2017-08-18 13:49:29,059 INFO  [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] client.HBaseAdmin:784 : Created KYLIN_CIT4CEENL7
> 2017-08-18 13:49:29,078 ERROR [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] common.HadoopShellExecutable:65 : error execute HadoopShellExecutable{id=c0b3d6de-99e5-4b91-a941-0fb03d89e40a-05, name=Create HTable, state=RUNNING}
> java.lang.IllegalArgumentException: table KYLIN_CIT4CEENL7 created, but is not available due to some reasons
>       at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>       at org.apache.kylin.storage.hbase.steps.CubeHTableUtil.createHTable(CubeHTableUtil.java:106)
>       at org.apache.kylin.storage.hbase.steps.CreateHTableJob.run(CreateHTableJob.java:103)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>       at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
>       at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>       at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
>       at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>       at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 2017-08-18 13:49:29,080 DEBUG [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] dao.ExecutableDao:217 : updating job output, id: c0b3d6de-99e5-4b91-a941-0fb03d89e40a-05
> 2017-08-18 13:49:29,087 DEBUG [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] dao.ExecutableDao:217 : updating job output, id: c0b3d6de-99e5-4b91-a941-0fb03d89e40a-05
> 2017-08-18 13:49:29,091 INFO  [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] execution.ExecutableManager:389 : job id:c0b3d6de-99e5-4b91-a941-0fb03d89e40a-05 from RUNNING to ERROR
> 2017-08-18 13:49:29,100 DEBUG [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] dao.ExecutableDao:217 : updating job output, id: c0b3d6de-99e5-4b91-a941-0fb03d89e40a
> 2017-08-18 13:49:29,106 DEBUG [Job c0b3d6de-99e5-4b91-a941-0fb03d89e40a-726] dao.ExecutableDao:217 : updating job output, id: c0b3d6de-99e5-4b91-a941-0fb03d89e40a
>   -----------------------------------------------------------------
>
> The HBase web console (screenshot did not come through)
>
> ------------------------------------------
> The HBase log is as follows. Does STARTKEY => '', ENDKEY => '' mean the data was not written successfully??
>
> 2017-08-18 17:04:54,324 INFO org.apache.hadoop.hbase.master.AssignmentManager: Assigning KYLIN_CIT4CEENL7,,1503046982941.3b629cdbed8252f24ced8f6f71fe2d1f. to slave3.cdh.com,60020,1503035643997
> 2017-08-18 17:04:54,324 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {3b629cdbed8252f24ced8f6f71fe2d1f state=OFFLINE, ts=1503047094320, server=slave2.cdh.com,60020,1503035646921} to {3b629cdbed8252f24ced8f6f71fe2d1f state=PENDING_OPEN, ts=1503047094324, server=slave3.cdh.com,60020,1503035643997}
> 2017-08-18 17:04:54,345 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {3b629cdbed8252f24ced8f6f71fe2d1f state=PENDING_OPEN, ts=1503047094324, server=slave3.cdh.com,60020,1503035643997} to {3b629cdbed8252f24ced8f6f71fe2d1f state=OPENING, ts=1503047094345, server=slave3.cdh.com,60020,1503035643997}
> 2017-08-18 17:05:49,463 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {3b629cdbed8252f24ced8f6f71fe2d1f state=OPENING, ts=1503047094345, server=slave3.cdh.com,60020,1503035643997} to {3b629cdbed8252f24ced8f6f71fe2d1f state=CLOSED, ts=1503047149463, server=slave3.cdh.com,60020,1503035643997}
> 2017-08-18 17:05:49,465 INFO org.apache.hadoop.hbase.master.AssignmentManager: Setting node as OFFLINED in ZooKeeper for region {ENCODED => 3b629cdbed8252f24ced8f6f71fe2d1f, NAME => 'KYLIN_CIT4CEENL7,,1503046982941.3b629cdbed8252f24ced8f6f71fe2d1f.', STARTKEY => '', ENDKEY => ''}
> 2017-08-18 17:05:49,465 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {3b629cdbed8252f24ced8f6f71fe2d1f state=CLOSED, ts=1503047149465, server=slave3.cdh.com,60020,1503035643997} to {3b629cdbed8252f24ced8f6f71fe2d1f state=OFFLINE, ts=1503047149465, server=slave3.cdh.com,60020,1503035643997}
> 2017-08-18 17:05:49,470 INFO org.apache.hadoop.hbase.master.AssignmentManager: Assigning KYLIN_CIT4CEENL7,,1503046982941.3b629cdbed8252f24ced8f6f71fe2d1f. to slave2.cdh.com,60020,1503035646921
> 2017-08-18 17:05:49,470 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {3b629cdbed8252f24ced8f6f71fe2d1f state=OFFLINE, ts=1503047149465, server=slave3.cdh.com,60020,1503035643997} to {3b629cdbed8252f24ced8f6f71fe2d1f state=PENDING_OPEN, ts=1503047149470, server=slave2.cdh.com,60020,1503035646921}
> 2017-08-18 17:05:49,496 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {3b629cdbed8252f24ced8f6f71fe2d1f state=PENDING_OPEN, ts=1503047149470, server=slave2.cdh.com,60020,1503035646921} to {3b629cdbed8252f24ced8f6f71fe2d1f state=OPENING, ts=1503047149496, server=slave2.cdh.com,60020,1503035646921}
> ---------------------------------------------------------------------------
>
> Finally, looking at the HDFS log:
> there is a warning that user hbase has no write permission on the KYLIN_CIT4CEENL7 table directory.
> 2017-08-18 17:05:09,389 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hbase (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/data/default/KYLIN_CIT4CEENL7/3b629cdbed8252f24ced8f6f71fe2d1f":hdfs:hbase:drwxr-xr-x
> 2017-08-18 17:05:09,389 INFO org.apache.hadoop.ipc.Server: IPC Server handler 12 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 192.168.1.114:39396 Call#1504 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/data/default/KYLIN_CIT4CEENL7/3b629cdbed8252f24ced8f6f71fe2d1f":hdfs:hbase:drwxr-xr-x
> 2017-08-18 17:05:10,849 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
> 2017-08-18 17:05:10,850 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
> 2017-08-18 17:05:15,395 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hbase (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/data/default/KYLIN_CIT4CEENL7/3b629cdbed8252f24ced8f6f71fe2d1f":hdfs:hbase:drwxr-xr-x
> 2017-08-18 17:05:15,395 INFO org.apache.hadoop.ipc.Server: IPC Server handler 29 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 192.168.1.114:39396 Call#1507 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/data/default/KYLIN_CIT4CEENL7/3b629cdbed8252f24ced8f6f71fe2d1f":hdfs:hbase:drwxr-xr-x
> 2017-08-18 17:05:16,703 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 117 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 32 Number of syncs: 85 SyncTimes(ms): 107
> 2017-08-18 17:05:16,817 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2017_08_18-17_05_16. BP-100649073-192.168.1.108-1499418425504 blk_1073896776_155980{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-291c404a-5f38-4850-9164-334da4e2f35b:NORMAL:192.168.1.113:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5d1d9d8e-bd8c-47c8-9554-f4643bcf26b7:NORMAL:192.168.1.112:50010|RBW], ReplicaUnderConstruction[[DISK]DS-c6bb2c74-ddb4-48a4-aed4-971bd781af21:NORMAL:192.168.1.114:50010|RBW]]}
>
> The warning above says there is no write permission on table KYLIN_CIT4CEENL7, so after the table is created,
> no data can be written into it, which causes the exception. Is that the problem here?
> (Previous builds succeeded, so there was write permission before; today's build suddenly has none.)
> If I grant write permission on the table directly and rebuild, I get a "table already exists" exception.
> So how can I make Kylin grant write permission to tables like KYLIN_CIT4CEENL7 at creation time?
> Where should this be configured: in Kylin, or in the Hadoop environment? Asking for help...
>
> Some posts on the Internet suggest adding the following to hdfs-site.xml:
> <property>
>     <name>dfs.permissions</name>
>     <value>false</value>
>  </property>
> I tried that, but it still does not solve the problem...
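One thing worth checking in this situation is the HDFS ownership of the HBase data tree itself. The following is only a sketch of a common fix, not an official Kylin recommendation; it assumes, as the log shows, that the region directory is owned by hdfs:hbase with mode drwxr-xr-x (so user hbase cannot write), and that handing ownership to the hbase user is acceptable in this cluster:

```shell
# Inspect ownership of the table directories; the denied inode in the
# log is owned by hdfs, not hbase.
hdfs dfs -ls /hbase/data/default

# Run as a user with HDFS superuser rights (e.g. hdfs) to give the
# HBase data tree back to the hbase user; adjust paths and the
# owner:group to match your own deployment.
sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase
```

After fixing ownership, drop the half-created KYLIN table in the HBase shell before resuming the Kylin job, so the "table already exists" error does not recur.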
>
>
> Could everyone please help analyze this...