Finally I succeeded with the read-write separation deployment. There were two points that caused my earlier failures:
1. When using beeline to connect to Hive, do not use the ZooKeeper service-discovery mode; connect directly to one of the HiveServer2 instances.
2. NameNode HA should not be configured in the HDFS config used to reach the HBase cluster. Although I configured kylin.storage.hbase.cluster-hdfs-config-file=hbase.hdfs.xml, the job failed at the step "Convert Cuboid Data to HFile" with this error:
java.lang.RuntimeException: Could not find any configured addresses for URI 
hdfs://nameservice1/user/mykylin/kylin_metadata/kylin-4fdee76b-6b73-087a-b9ad-6cf17dd84aad/kylin_sales_cube/hfile
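To make the two points concrete, here are sketches of what worked for me; all host names, ports, and values below are placeholders, not my actual cluster settings. For point 1, connect beeline straight to one HiveServer2 instead of going through ZooKeeper service discovery:

```shell
# ZooKeeper service-discovery mode -- this is what failed for me:
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"

# Connecting directly to a single HiveServer2 instance -- this worked:
beeline -u "jdbc:hive2://hs2-host:10000/default"
```

For point 2, the hbase.hdfs.xml referenced by kylin.storage.hbase.cluster-hdfs-config-file points at the HBase cluster's NameNode address directly rather than at the HA nameservice:

```xml
<configuration>
  <!-- Use the NameNode address directly; the HA nameservice
       (hdfs://nameservice1) could not be resolved at the HFile step -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hbase-nn-host:8020</value>
  </property>
</configuration>
```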

2019-04-17 

lk_hadoop 



From: "lk_hadoop"<[email protected]>
Sent: 2019-04-16 15:30
Subject: Deploy Apache Kylin with Standalone HBase Cluster
To: "dev"<[email protected]>
Cc:

hi, all:
    I want to try a read-write separation deployment. Must the standalone HBase cluster use the same HDFS as the main cluster? My HBase cluster is completely separate from the main cluster, and both clusters' HDFS use NameNode HA. I cannot get the read-write separation deployment to succeed.
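For context, these are the settings I am experimenting with, as I understand them from the Kylin read-write separation docs; the filesystem URI below is a placeholder for my HBase cluster's HDFS:

```properties
# kylin.properties -- point Kylin's HBase storage at the standalone cluster
kylin.storage.hbase.cluster-fs=hdfs://hbase-nameservice
kylin.storage.hbase.cluster-hdfs-config-file=hbase.hdfs.xml
```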

2019-04-16 


lk_hadoop  
