[ 
https://issues.apache.org/jira/browse/KYLIN-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaofeng SHI closed KYLIN-2587.
-------------------------------
    Resolution: Workaround

> "Convert Cuboid Data to HFile" failed on EMR 5.5
> ------------------------------------------------
>
>                 Key: KYLIN-2587
>                 URL: https://issues.apache.org/jira/browse/KYLIN-2587
>             Project: Kylin
>          Issue Type: Bug
>          Components: Job Engine
>    Affects Versions: v2.0.0
>         Environment: EMR 5.5, EMR 5.4
>            Reporter: Shaofeng SHI
>
> Create an EMR 5.5 HBase cluster, download and start Kylin on the master node, 
> and then build the sample cube. The cube build hits many errors in the 
> reducers of the "Convert Cuboid Data to HFile" step. The error trace is:
> {code}
> 2017-05-05 08:02:13,057 WARN [main] org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x4a67318f0x0, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102)
>       at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:220)
>       at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:420)
>       at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
>       at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:913)
>       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:697)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>       at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:213)
>       at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:167)
>       at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:566)
>       at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>       at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
>       at org.apache.hadoop.hbase.mapreduce.KeyValueSortReducer.reduce(KeyValueSortReducer.java:53)
>       at org.apache.hadoop.hbase.mapreduce.KeyValueSortReducer.reduce(KeyValueSortReducer.java:36)
>       at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
>       at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:635)
>       at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
>       at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>       at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {code}
> I checked the environment: hbase-site.xml is in the /etc/hbase/conf folder, 
> which appears at the start of the "hbase classpath" output, so that looks 
> fine. But the reducer could not pick up the ZooKeeper quorum and fell back to 
> the default value "localhost:2181".
> I bypassed this error by copying "hbase.zookeeper.quorum" from hbase-site.xml 
> to $KYLIN_HOME/conf/kylin_job_conf.xml:
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>ip-nn-nn-nn-nn.ap-northeast-2.compute.internal</value>
>   </property>
> This needs further investigation; it may be related to a change in HBase 1.3 
> (EMR 5.5).
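> A minimal sketch of the pattern visible in the stack trace above (illustrative 
> only, not HBase or Kylin source): the reducer-side writer builds its HBase 
> connection from the task Configuration, which is why the quorum has to reach 
> the job conf, e.g. via kylin_job_conf.xml as in the workaround:
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class ReducerSideConnectionSketch {
>     // taskConf is whatever the MR framework hands the reducer; if neither the
>     // classpath nor the job conf supplies hbase.zookeeper.quorum, the HBase
>     // default ("localhost") wins and the client fails with ConnectionLoss as above.
>     public static Connection open(Configuration taskConf) throws java.io.IOException {
>         Configuration hbaseConf = HBaseConfiguration.create(taskConf);
>         return ConnectionFactory.createConnection(hbaseConf);
>     }
> }
> {code}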



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
