Re: Error in kylin with standalone HBase cluster

2017-01-09 Thread 柯南
hi Shaofeng,

Thank you for your advice.
As you said, if I replace "core-site.xml" and "hdfs-site.xml" with the
hive/mapreduce cluster's config files, that means HBase and the hive/mapreduce
cluster both depend on the same hadoop client, rather than being deployed with
a standalone HBase cluster. When they depend on the same hadoop client, I know
it works well.
On the other hand, what is the property "kylin.hbase.cluster.fs" in
kylin.properties used for?
If I need HBase and the hive/mapreduce cluster to depend on different HDFS
clusters, could Kylin support that? Or maybe in the future?
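For context, as far as I understand from the standalone-HBase blog post
referenced below, kylin.hbase.cluster.fs points at the HDFS of the dedicated
HBase cluster, so the HFiles built for bulk load land there while Kylin's
working files stay on the main cluster. A minimal kylin.properties sketch,
with placeholder host names rather than values from this thread:

# working directory on the main (hive/mapreduce) cluster's HDFS
kylin.hdfs.working.dir=/kylin
# HDFS of the dedicated HBase cluster; cube HFiles are written here for bulk load
kylin.hbase.cluster.fs=hdfs://hbase-cluster-namenode:8020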


------------------ Original Message ------------------
From: "ShaoFeng Shi";
Date: 2017-01-10 (Tue) 10:35
To: "dev";
Subject: Re: Error in kylin with standalone HBase cluster



Hi Nan,

This error indicates that the hadoop configuration of the HBase cluster has
overwritten the configuration of the default hadoop cluster. As you know,
Kylin uses the output of "hbase classpath" as the classpath at startup, so
it is possible that the hbase cluster's "core-site.xml" and "hdfs-site.xml"
sit at the front of the classpath. Please locate them, back them up, and
then replace them with the hive/mapreduce cluster's config files, keeping
only "hbase-site.xml" with the configuration of the dedicated hbase
cluster.

After doing that, restart Kylin, discard the error job, and resubmit a
build.



2017-01-09 23:12 GMT+08:00 柯南 :

> hi, all:
>  I want to deploy Apache Kylin with a standalone HBase cluster, and I also
> referred to the official doc (http://kylin.apache.org/blog/2016/06/10/standalone-
> hbase-cluster) to update the config kylin.hbase.cluster.fs in
> kylin.properties, but when I build a new cube, in step 2 (Redistribute Flat Hive
> Table) it uses the hadoop client that the hbase cluster depends on, not the
> main cluster's. Thus an error occurs that the file cannot be found in HDFS.
> The program gets the hadoop configuration using
> HadoopUtil.getCurrentConfiguration()
> in the CreateFlatHiveTableStep class.
> The problem is the same after upgrading Kylin to 1.6.0.
>
>
>
>
> 2017-01-04 18:42:43,459 ERROR [pool-7-thread-3]
> hive.CreateFlatHiveTableStep:114 : job:3889515b-0054-4b71-9db0-615b1ceab3bc-01
> execute finished with exception
> java.io.FileNotFoundException: File does not exist:
> /user/kylin/kylin_metadata/kylin-3889515b-0054-4b71-9db0-
> 615b1ceab3bc/row_count/00_0
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(
> INodeFile.java:65)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(
> INodeFile.java:55)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocationsUpdateTimes(FSNamesystem.java:1879)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocationsInt(FSNamesystem.java:1820)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocations(FSNamesystem.java:1800)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocations(FSNamesystem.java:1772)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.
> getBlockLocations(NameNodeRpcServer.java:527)
> at org.apache.hadoop.hdfs.server.namenode.
> AuthorizationProviderProxyClientProtocol.getBlockLocations(
> AuthorizationProviderProxyClientProtocol.java:85)
> at org.apache.hadoop.hdfs.protocolPB.
> ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(
> ClientNamenodeProtocolServerSideTranslatorPB.java:356)
> at org.apache.hadoop.hdfs.protocol.proto.
> ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(
> ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
> ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1642)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.ipc.RemoteException.instantiateException(
> RemoteException.java:106)
> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(
> RemoteException.java:73)
> at 

Re: Error in kylin with standalone HBase cluster

2017-01-09 Thread ShaoFeng Shi
Hi Nan,

This error indicates that the hadoop configuration of the HBase cluster has
overwritten the configuration of the default hadoop cluster. As you know,
Kylin uses the output of "hbase classpath" as the classpath at startup, so
it is possible that the hbase cluster's "core-site.xml" and "hdfs-site.xml"
sit at the front of the classpath. Please locate them, back them up, and
then replace them with the hive/mapreduce cluster's config files, keeping
only "hbase-site.xml" with the configuration of the dedicated hbase
cluster.

After doing that, restart Kylin, discard the error job, and resubmit a
build.
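To double-check which cluster's configuration wins on the classpath, a small
standalone check like the following can help. This is illustrative Java, not
part of Kylin; the class name is made up, and it only shows that
new Configuration() picks up whichever core-site.xml appears first on the
classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class WhichClusterAmIOn {
    public static void main(String[] args) throws Exception {
        // the first matching resource on the classpath is the one Configuration loads
        System.out.println("core-site.xml from: "
                + WhichClusterAmIOn.class.getClassLoader().getResource("core-site.xml"));

        Configuration conf = new Configuration();   // reads core-site.xml from the classpath
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));

        FileSystem fs = FileSystem.get(conf);        // the file system Kylin would talk to
        System.out.println("resolved file system = " + fs.getUri());
    }
}

Running it with the output of "hbase classpath" as the classpath should show
whether fs.defaultFS resolves to the hbase cluster's HDFS or the
hive/mapreduce cluster's.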



2017-01-09 23:12 GMT+08:00 柯南 :

> hi, all:
>  I want to deploy Apache Kylin with a standalone HBase cluster, and I also
> referred to the official doc (http://kylin.apache.org/blog/2016/06/10/standalone-
> hbase-cluster) to update the config kylin.hbase.cluster.fs in
> kylin.properties, but when I build a new cube, in step 2 (Redistribute Flat Hive
> Table) it uses the hadoop client that the hbase cluster depends on, not the
> main cluster's. Thus an error occurs that the file cannot be found in HDFS.
> The program gets the hadoop configuration using
> HadoopUtil.getCurrentConfiguration()
> in the CreateFlatHiveTableStep class.
> The problem is the same after upgrading Kylin to 1.6.0.
>
>
>
>
> 2017-01-04 18:42:43,459 ERROR [pool-7-thread-3]
> hive.CreateFlatHiveTableStep:114 : job:3889515b-0054-4b71-9db0-615b1ceab3bc-01
> execute finished with exception
> java.io.FileNotFoundException: File does not exist:
> /user/kylin/kylin_metadata/kylin-3889515b-0054-4b71-9db0-
> 615b1ceab3bc/row_count/00_0
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(
> INodeFile.java:65)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(
> INodeFile.java:55)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocationsUpdateTimes(FSNamesystem.java:1879)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocationsInt(FSNamesystem.java:1820)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocations(FSNamesystem.java:1800)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.
> getBlockLocations(FSNamesystem.java:1772)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.
> getBlockLocations(NameNodeRpcServer.java:527)
> at org.apache.hadoop.hdfs.server.namenode.
> AuthorizationProviderProxyClientProtocol.getBlockLocations(
> AuthorizationProviderProxyClientProtocol.java:85)
> at org.apache.hadoop.hdfs.protocolPB.
> ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(
> ClientNamenodeProtocolServerSideTranslatorPB.java:356)
> at org.apache.hadoop.hdfs.protocol.proto.
> ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(
> ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
> ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1642)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.ipc.RemoteException.instantiateException(
> RemoteException.java:106)
> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(
> RemoteException.java:73)
> at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(
> DFSClient.java:1171)
> at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(
> DFSClient.java:1159)
> at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(
> DFSClient.java:1149)
> at org.apache.hadoop.hdfs.DFSInputStream.
> fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
> at org.apache.hadoop.hdfs.DFSInputStream.openInfo(
> DFSInputStream.java:237)
> at org.apache.hadoop.hdfs.DFSInputStream.<init>(
> DFSInputStream.java:230)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
> at org.apache.hadoop.hdfs.DistributedFileSystem$3.
> doCall(DistributedFileSystem.java:301)
> at org.apache.hadoop.hdfs.DistributedFileSystem$3.
> doCall(DistributedFileSystem.java:297)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(

[jira] [Created] (KYLIN-2373) kylin 1.5.3, the exposed tables often disappear from the Kylin Insight page.

2017-01-09 Thread chelubaiq (JIRA)
chelubaiq created KYLIN-2373:


 Summary: kylin 1.5.3, the exposed tables often disappear from the
Kylin Insight page.
 Key: KYLIN-2373
 URL: https://issues.apache.org/jira/browse/KYLIN-2373
 Project: Kylin
  Issue Type: Bug
  Components: REST Service
Affects Versions: v1.5.3
Reporter: chelubaiq
Assignee: Zhong,Jason


1 environment:
kylin 1.5.3
two nodes: "query" server a and "all" server b,
with config: kylin.rest.servers=a_ip,b_ip

2 problem:
the exposed tables often disappear from the kylin Insight page.
the log says:
ERROR [http-bio-7070-exec-9] project.ProjectL2Cache:240 : Realization 
'CUBE[name=custom_out_sales4]' reports column 'PROJECT1.TABLE1.COLUMN1', but it 
is not equal to 'ColumnDesc [name=COLUMN1,table=PROJECT1.TABLE1]' according to 
MetadataManager

3 one way to reproduce:
in project1, cube1 is ready;
on server a, choose project1 and reload one table from the DataSource tab of
the Model page; it succeeds and the table shows up in the Insight page.
on server b, choose project1 and refresh the Insight page; no tables are found:
"No Result."

if you click "Reload Metadata" on the System page, the tables show up.

4 reason the table is not found on server b:
in ProjectL2Cache, the ColumnDesc from the project realization is not equal to
the ColumnDesc from MetadataManager,
because table.equals(other.table) is false in the ColumnDesc.equals() method,
because the tables' lastModified values are not equal,
and the table's lastModified from MetadataManager is greater than the one from
the project realization.
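As a simplified illustration of the comparison described above (the class and
field names here are made up for clarity, not taken from the Kylin source): if
equals() takes lastModified into account, two caches that loaded the same table
at different times can never compare equal.

// Simplified illustration only -- not Kylin source code.
class TableSketch {
    String identity;      // e.g. "PROJECT1.TABLE1"
    long lastModified;    // refresh timestamp of the cached copy

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TableSketch))
            return false;
        TableSketch other = (TableSketch) o;
        // if the two caches reloaded the table at different times, lastModified
        // differs, the comparison fails, and ProjectL2Cache logs the "not equal" error
        return identity.equals(other.identity) && lastModified == other.lastModified;
    }

    @Override
    public int hashCode() {
        return identity.hashCode() * 31 + Long.hashCode(lastModified);
    }
}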

the bug may exist in CacheService.rebuildCache():
case TABLE:
getMetadataManager().reloadTableCache(cacheKey);
CubeDescManager.clearCache();
break;

MetadataManager.reloadAllDataModel() may also be needed here,
or MetadataManager.reloadDataModelDesc(forEachModelNameOfTheProject),
or MetadataManager.clearCache().
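Spelled out against the fragment above, the suggestion would look roughly like
this; it is a sketch of the proposal, not a committed fix, and which of the
three reload options is right is exactly the open question:

case TABLE:
    getMetadataManager().reloadTableCache(cacheKey);
    // proposed addition: also refresh the model/metadata caches so the cached
    // table's lastModified stays consistent with the project realization
    getMetadataManager().reloadAllDataModel();
    CubeDescManager.clearCache();
    break;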

5 why does the table still exist in server a's Insight page?
according to 4, server a should have the same equality problem, so shouldn't
the table disappear there as well?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [Announce] New Apache Kylin committer Kaisen Kang

2017-01-09 Thread Dong Li
Welcome Kaisen!


Thanks,
Dong Li


Original Message
Sender: Jian zhong zhongj...@apache.org
Recipient: dev...@kylin.apache.org
Date: Monday, Jan 9, 2017 23:07
Subject: Re: [Announce] New Apache Kylin committer Kaisen Kang


Welcome! Kaisen

On Sat, Jan 7, 2017 at 1:47 PM, Li Yang liy...@apache.org wrote:

> Welcome, Kaisen~~~
>
> Yang
>
> On Wed, Jan 4, 2017 at 3:32 PM, Henry Saputra henry.sapu...@gmail.com wrote:
>
> > Congrats!
> >
> > Welcome, Kaisen.
> >
> > - Henry
> >
> > On Tue, Jan 3, 2017 at 6:40 PM, 康凯森 kangkai...@qq.com wrote:
> >
> > > I am glad and honored to join the Apache Kylin community and will keep
> > > contributing to Apache Kylin.
> > >
> > > Thanks for your help and guidance.
> > >
> > > Wish Apache Kylin better and prosperous.
> > >
> > > Thank you very much, Luke and our community.
> > >
> > > ------------------ Original Message ------------------
> > > From: "Luke Han" luke...@apache.org;
> > > Date: Tuesday, Jan 3, 2017, 7:12 PM
> > > To: "dev" dev@kylin.apache.org; "user" u...@kylin.apache.org;
> > > "Apache Kylin PMC" priv...@kylin.apache.org;
> > > Subject: [Announce] New Apache Kylin committer Kaisen Kang
> > >
> > > On behalf of the Apache Kylin PMC, I am very pleased to announce
> > > that Kaisen Kang has accepted the PMC's invitation to become a
> > > committer on the project.
> > >
> > > We appreciate all of Kaisen's generous contributions: many bug
> > > fixes, patches, and help for many users. We are so glad to have him
> > > be our new committer and look forward to his continued involvement.
> > >
> > > Congratulations and Welcome, Kaisen!

Error in kylin with standalone HBase cluster

2017-01-09 Thread 柯南
hi, all:
 I want to deploy Apache Kylin with a standalone HBase cluster, and I also
referred to the official doc
(http://kylin.apache.org/blog/2016/06/10/standalone-hbase-cluster)
to update the config kylin.hbase.cluster.fs in kylin.properties, but when I
build a new cube, in step 2 (Redistribute Flat Hive Table) it uses the hadoop
client that the hbase cluster depends on, not the main cluster's. Thus an
error occurs that the file cannot be found in HDFS. The program gets the
hadoop configuration using HadoopUtil.getCurrentConfiguration() in the
CreateFlatHiveTableStep class.
The problem is the same after upgrading Kylin to 1.6.0.




2017-01-04 18:42:43,459 ERROR [pool-7-thread-3] 
hive.CreateFlatHiveTableStep:114 : job:3889515b-0054-4b71-9db0-615b1ceab3bc-01 
execute finished with exception
java.io.FileNotFoundException: File does not exist: 
/user/kylin/kylin_metadata/kylin-3889515b-0054-4b71-9db0-615b1ceab3bc/row_count/00_0
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1879)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1820)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1800)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1772)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)


at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1171)
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1149)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:230)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at 
org.apache.kylin.source.hive.CreateFlatHiveTableStep.readRowCountFromFile(CreateFlatHiveTableStep.java:51)
at 
org.apache.kylin.source.hive.CreateFlatHiveTableStep.doWork(CreateFlatHiveTableStep.java:103)
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:112)
at 
org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:112)
at 

Re: [Announce] New Apache Kylin committer Kaisen Kang

2017-01-09 Thread Jian Zhong
Welcome! Kaisen

On Sat, Jan 7, 2017 at 1:47 PM, Li Yang  wrote:

>  Welcome, Kaisen~~~
>
> Yang
>
> On Wed, Jan 4, 2017 at 3:32 PM, Henry Saputra 
> wrote:
>
> > Congrats!
> >
> > Welcome, Kaisen.
> >
> > - Henry
> >
> > On Tue, Jan 3, 2017 at 6:40 PM, 康凯森  wrote:
> >
> > > I am glad and honored to join the Apache Kylin community and will keep
> > > contributing to Apache Kylin.
> > >
> > >
> > > Thanks for your help and guidance.
> > >
> > >
> > > Wish Apache Kylin better and prosperous.
> > >
> > >
> > > Thank you very much, Luke and our community.
> > >
> > > ------------------ Original Message ------------------
> > > From: "Luke Han";
> > > Date: Tuesday, Jan 3, 2017, 7:12 PM
> > > To: "dev"; "user"; "Apache Kylin PMC";
> > >
> > > Subject: [Announce] New Apache Kylin committer Kaisen Kang
> > >
> > >
> > >
> > > On behalf of the Apache Kylin PMC, I am very pleased to announce
> > > that Kaisen Kang has accepted the PMC's invitation to become a
> > > committer on the project.
> > >
> > > We appreciate all of Kaisen's generous contributions: many bug
> > > fixes, patches, and help for many users. We are so glad to have him
> > > be our new committer and look forward to his continued involvement.
> > >
> > > Congratulations and Welcome, Kaisen!
> > >
> >
>