[jira] [Created] (KYLIN-5305) kylin4.0.1 build cube error

2022-11-27 Thread lsy_budd (Jira)
lsy_budd created KYLIN-5305:
---

 Summary: kylin4.0.1 build cube  error
 Key: KYLIN-5305
 URL: https://issues.apache.org/jira/browse/KYLIN-5305
 Project: Kylin
  Issue Type: Bug
  Components: Spark Engine
Affects Versions: v4.0.1
Reporter: lsy_budd
 Attachments: kylin.log

kylin4.0.1 build cube  error

Cubes created earlier build fine, but a newly created cube fails with the error below. I have verified that the measure columns contain no null values.


Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hive.ql.io.sarg.SearchArgument$Builder.isNull(Ljava/lang/String;Lorg/apache/hadoop/hive/ql/io/sarg/PredicateLeaf$Type;)Lorg/apache/hadoop/hive/ql/io/sarg/SearchArgument$Builder;
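A NoSuchMethodError on a Hive class during a Spark build step usually means two different versions of a Hive jar (for example hive-exec) are on the classpath and an older one is loaded first. A minimal sketch for locating which jars bundle the class (the directory path is an assumption; point it at your Spark/Kylin jar directories):

```python
import zipfile
from pathlib import Path

def jars_containing(class_entry: str, jar_dir: str):
    """Yield jar paths under jar_dir whose entries include class_entry.

    Useful for spotting duplicate/conflicting Hive jars that surface
    as NoSuchMethodError at runtime.
    """
    for jar in sorted(Path(jar_dir).glob("*.jar")):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(class_entry in name for name in zf.namelist()):
                    yield str(jar)
        except (zipfile.BadZipFile, OSError):
            continue  # skip unreadable or non-zip files

# Directory is an assumption; check both the Spark and the Kylin lib dirs.
for hit in jars_containing("ql/io/sarg/SearchArgument", "/opt/spark/jars"):
    print(hit)
```

Comparing the versions of the jars this reports against the Hive version your Spark distribution expects is often enough to spot the conflict.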



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KYLIN-4868) kylin build cube error

2021-01-11 Thread roger wang (Jira)
roger wang created KYLIN-4868:
-

 Summary: kylin build cube error
 Key: KYLIN-4868
 URL: https://issues.apache.org/jira/browse/KYLIN-4868
 Project: Kylin
  Issue Type: Bug
  Components: Job Engine
Affects Versions: v3.1.1, v2.5.1
 Environment: kyllin 2.5.1 ,3.1.1
Reporter: roger wang


When I build a cube from a model, it eventually succeeds, but during the
build the process reports several errors.

The logs show:

java.net.ConnectException: Call From shdata1test/192.168.1.167
to shdata1test:10020 failed on connection exception.

 

But the environment is OK; if I resume the job, it eventually succeeds.
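Port 10020 is the default port of the MapReduce JobHistory server (mapreduce.jobhistory.address), which the job engine polls for job status; if that daemon is down, this ConnectException appears even though the build itself can finish, which matches the behavior described. A quick connectivity probe (host and port taken from the error message) might look like:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, timeouts, and DNS failures
        return False

# Host and port come from the error message; adjust for your cluster:
# tcp_reachable("shdata1test", 10020)
```

If the probe fails, starting the history server (in many Hadoop distributions, `mr-jobhistory-daemon.sh start historyserver`) is the usual remedy.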

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Build cube error

2020-07-17 Thread ShaoFeng Shi
 org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't
get the location for replica 0

It seems the HBase service is not healthy.
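"Can't get the location for replica 0", together with "ClusterId read in ZooKeeper is null" earlier in the log, suggests the client cannot read HBase's state from ZooKeeper at all. A minimal sketch (the host is an assumption; use the quorum from hbase-site.xml) that probes a ZooKeeper server with the four-letter "ruok" command:

```python
import socket

def zk_ruok(host: str, port: int = 2181, timeout: float = 5.0) -> str:
    """Send ZooKeeper's 'ruok' four-letter command; a healthy server answers 'imok'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"ruok")
        return s.recv(16).decode()

# Host is an assumption; on EMR this is typically the master node:
# zk_ruok("emr-master")
```

Note that on ZooKeeper 3.5 and later, four-letter commands must be enabled via 4lw.commands.whitelist, so an empty reply does not necessarily mean the server is down.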

Best regards,

Shaofeng Shi 史少锋
Apache Kylin PMC
Email: shaofeng...@apache.org

Apache Kylin FAQ: https://kylin.apache.org/docs/gettingstarted/faq.html
Join Kylin user mail group: user-subscr...@kylin.apache.org
Join Kylin dev mail group: dev-subscr...@kylin.apache.org




Fish wrote on Fri, Jul 17, 2020 at 9:48 AM:

> Hi,
>
> I tried to build the cube and got the following errors. I am using
> EMR 5.30.1 and Kylin 3.1.0
>
> [stack trace and shutdown logs snipped; they are identical to the original message below]
>
> Any help would be appreciated! Thank you
>
> --
> Sent from: http://apache-kylin.74782.x6.nabble.com/
>


Build cube error

2020-07-16 Thread Fish
Hi, 

I tried to build the cube and got the following errors. I am using
EMR 5.30.1 and Kylin 3.1.0

 client.ZooKeeperRegistry:107 : ClusterId read in ZooKeeper is null
Exception in thread "main" java.lang.IllegalArgumentException: Failed to
find metadata store by url: kylin_metadata@hbase
at
org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:101)
at
org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:113)
at
org.apache.kylin.rest.service.AclTableMigrationTool.checkIfNeedMigrate(AclTableMigrationTool.java:99)
at
org.apache.kylin.tool.AclTableMigrationCLI.main(AclTableMigrationCLI.java:43)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at
org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:94)
... 3 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't
get the location for replica 0
at
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:372)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
at
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:275)
at
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:310)
at
org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:640)
at
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:367)
at
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:411)
at
org.apache.kylin.storage.hbase.HBaseConnection.tableExists(HBaseConnection.java:290)
at
org.apache.kylin.storage.hbase.HBaseConnection.createHTableIfNeeded(HBaseConnection.java:315)
at
org.apache.kylin.storage.hbase.HBaseResourceStore.createHTableIfNeeded(HBaseResourceStore.java:119)
at
org.apache.kylin.storage.hbase.HBaseResourceStore.(HBaseResourceStore.java:89)
... 8 more
2020-07-16 16:14:10,927 INFO  [close-hbase-conn] hbase.HBaseConnection:137 :
Closing HBase connections...
2020-07-16 16:14:10,927 INFO  [close-hbase-conn]
client.ConnectionManager$HConnectionImplementation:1767 : Closing zookeeper
sessionid=0x1026ac60a64
2020-07-16 16:14:10,933 INFO  [close-hbase-conn] zookeeper.ZooKeeper:693 :
Session: 0x1026ac60a64 closed
2020-07-16 16:14:10,933 INFO  [main-EventThread] zookeeper.ClientCnxn:522 :
EventThread shut down for session: 0x1026ac60a64


Any help would be appreciated! Thank you

--
Sent from: http://apache-kylin.74782.x6.nabble.com/


[jira] [Created] (KYLIN-4538) Kylin build cube error with Oracle data source

2020-05-27 Thread haiquanhe (Jira)
haiquanhe created KYLIN-4538:


 Summary: Kylin build cube error with Oracle data source 
 Key: KYLIN-4538
 URL: https://issues.apache.org/jira/browse/KYLIN-4538
 Project: Kylin
  Issue Type: Bug
  Components: Driver - JDBC
Affects Versions: v2.6.6
 Environment: Hadoop 2.7.6
hbase 1.4.5
hive 2.3.2
Kylin 2.6.6
Reporter: haiquanhe
 Attachments: error.log, kylin.properties

I set up a Kylin test environment and tried to use Oracle as a JDBC data source.
I can see the table structure in Kylin, but the cube build fails.
The SQL Kylin generates for the cube is:

SELECT
  T_IOT_CARD.ID as T_IOT_CARD_ID
FROM UBI_OLAP.T_IOT_CARD as T_IOT_CARD
WHERE 1=1

But this SQL cannot run successfully in Oracle: Oracle does not support the AS
keyword before a table alias.
Can you point out what I should do, or which parameter I should set?
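The incompatibility is only the AS keyword before the table alias (Oracle accepts AS for column aliases but not for table aliases). As an illustration of the rewrite the generated SQL needs (a sketch only; a real fix would live in Kylin's JDBC dialect handling, not a regex):

```python
import re

def drop_table_alias_as(sql: str) -> str:
    """Remove ' AS ' only where it follows a FROM/JOIN table reference.

    Column aliases ('col AS name') are left untouched, since Oracle
    accepts those.
    """
    return re.sub(r"(?i)\b(FROM|JOIN)(\s+\S+)\s+AS\s+", r"\1\2 ", sql)

sql = (
    "SELECT T_IOT_CARD.ID as T_IOT_CARD_ID "
    "FROM UBI_OLAP.T_IOT_CARD as T_IOT_CARD WHERE 1=1"
)
# The FROM-clause "as" is dropped; the column alias "as" is kept.
print(drop_table_alias_as(sql))
```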



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KYLIN-4278) Build cube error in step 3. Connected to metastore, then MetaStoreClient lost connection

2019-12-04 Thread xl_zl (Jira)
xl_zl created KYLIN-4278:


 Summary: Build cube error in step 3. Connected to metastore, then 
MetaStoreClient lost connection
 Key: KYLIN-4278
 URL: https://issues.apache.org/jira/browse/KYLIN-4278
 Project: Kylin
  Issue Type: Bug
  Components: Metadata, Security
Affects Versions: v3.0.0-beta
 Environment: hadoop 3.0
hiveserver2
hive metastore
beeline
Reporter: xl_zl
 Fix For: Future


When I build a cube, I encounter a strange issue in step 3 (Extract Fact
Table Distinct Columns). Kylin connects to the Hive metastore to fetch
metadata, but the metastore server throws an exception:
Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128

=== metastore-server error logs ===

2019-12-04 17:50:10,180 | ERROR | pool-10-thread-173 | Error occurred during processing of message. |
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:694) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:691) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_212]
        at javax.security.auth.Subject.doAs(Subject.java:360) ~[?:1.8.0_212]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709) ~[hadoop-common-3.1.1-mrs-2.0.jar:?]
        at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:691) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_212]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_212]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        ... 10 more
2019-12-04 17:50:10,399 | ERROR | pool-10-thread-173 | Error occurred during processing of message. |
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:694) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:691) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_212]
        at javax.security.auth.Subject.doAs(Subject.java:360) ~[?:1.8.0_212]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709) ~[hadoop-common-3.1.1-mrs-2.0.jar:?]
        at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:691) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[hive-exec-3.1.0-mrs-2.0
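For what it's worth, "Invalid status -128" from TSaslServerTransport is the typical symptom of a SASL mismatch: a client that does not speak SASL connecting to a SASL-enabled (for example Kerberized) metastore, or the reverse. One thing to compare is the security settings in the hive-site.xml that Kylin loads versus the server's; the property names below are standard Hive, the values are placeholders:

```xml
<!-- hive-site.xml on the Kylin side: must match the metastore server's
     security mode, or the SASL handshake fails with "Invalid status -128".
     Values are placeholders. -->
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
```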

[jira] [Created] (KYLIN-3028) Build cube error when set S3 as working-dir

2017-11-10 Thread Shaofeng SHI (JIRA)
Shaofeng SHI created KYLIN-3028:
---

 Summary: Build cube error when set S3 as working-dir
 Key: KYLIN-3028
 URL: https://issues.apache.org/jira/browse/KYLIN-3028
 Project: Kylin
  Issue Type: Bug
  Components: Job Engine
Affects Versions: v2.2.0
Reporter: Shaofeng SHI
Assignee: Shaofeng SHI






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: build cube error

2017-07-19 Thread Li Feng
Hi,

This may be due to metadata corruption; try purging the cube and building it again.

BR,
Lee.

From: "apache_...@163.com" <apache_...@163.com>
Reply-To: "dev@kylin.apache.org" <dev@kylin.apache.org>
Date: Wednesday, July 19, 2017, 21:24
To: dev <dev@kylin.apache.org>
Subject: build cube error

Hi,

  After I created a model (m01) and a cube (c001 based on m01), building the
c001 cube fails with the error "cube c001 doesn't contain any ready segment".
What is the reason, please?








apache_...@163.com


build cube error

2017-07-19 Thread apache_...@163.com
Hi,

  After I created a model (m01) and a cube (c001 based on m01), building the
c001 cube fails with the error "cube c001 doesn't contain any ready segment".
What is the reason, please?







apache_...@163.com


build cube error in Step 7 Build base cuboid Data

2017-02-26 Thread 446463...@qq.com
Hi all:
 build cube error in Step 7 # Build base cuboid Data
  I found this error in the MR (MapReduce) logs:
```
2017-02-27 03:23:50,139 ERROR [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1487323381320_50335_r_00_0
```
Did my YARN resource manager not give the MR job enough memory? What should I do to solve this?
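"Could not deallocate container" by itself only says the attempt's container was already gone; the usual underlying cause is the reducer being killed for exceeding its memory limit, which the container logs would show as "running beyond physical memory limits". If that is the case, raising reducer memory in Kylin's MapReduce job config is the usual knob (file and property names are standard; the values below are illustrative only):

```xml
<!-- conf/kylin_job_conf.xml: give reducers more memory (tune per cluster) -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>
</property>
```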



446463...@qq.com


build cube error

2016-12-22 Thread 35925138
When I build the cube, at the step "Extract Fact Table Distinct Columns" Kylin
creates the table kylin_intermediate_ttt_39f8cf5a_b873_4606_a4b3_f52e99d5c771.
The table really exists in Hive, but Kylin cannot read it.
I have already configured hive.metastore.uris; its value is
thrift://172.16.1.90:9083, and I started the HCatalog metastore with the command
/home/hadooper/hive/bin/hive --service metastore -p 9083


But at that step Kylin gives me the error below:


java.lang.RuntimeException: java.io.IOException: NoSuchObjectException(message:default.kylin_intermediate_ttt_39f8cf5a_b873_4606_a4b3_f52e99d5c771 table not found)
        at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:110)
        at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:119)
        at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:103)
        at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:92)
        at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: NoSuchObjectException(message:default.kylin_intermediate_ttt_39f8cf5a_b873_4606_a4b3_f52e99d5c771 table not found)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
        at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:105)
        ... 11 more
Caused by: NoSuchObjectException(message:default.kylin_intermediate_ttt_39f8cf5a_b873_4606_a4b3_f52e99d5c771 table not found)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1808)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1778)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
        at com.sun.proxy.$Proxy49.get_table(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1208)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
        at com.sun.proxy.$Proxy50.getTable(Unknown Source)
        at org.apache.hive.hcatalog.common.HCatUtil.getTable(HCatUtil.java:180)
        at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:105)
        at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:88)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
        ... 13 more
result code:2

[jira] [Created] (KYLIN-2249) Build cube error when use "inmem" but ok with "layer"

2016-12-05 Thread hoangle (JIRA)
hoangle created KYLIN-2249:
--

 Summary: Build cube error when use "inmem" but ok with "layer"
 Key: KYLIN-2249
 URL: https://issues.apache.org/jira/browse/KYLIN-2249
 Project: Kylin
  Issue Type: Bug
Affects Versions: v1.6.0
Reporter: hoangle


2016-12-05 17:17:37,451 ERROR [Thread-13] org.apache.kylin.dict.TrieDictionary: 
Not a valid value: 122594010041
2016-12-05 17:17:38,452 ERROR [pool-8-thread-1] 
org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder: Dogged Cube Build error
java.io.IOException: java.lang.IllegalArgumentException: Value not exists!
at 
org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$BuildOnce.abort(DoggedCubeBuilder.java:196)
at 
org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$BuildOnce.checkException(DoggedCubeBuilder.java:169)
at 
org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$BuildOnce.build(DoggedCubeBuilder.java:116)
at 
org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder.build(DoggedCubeBuilder.java:75)
at 
org.apache.kylin.cube.inmemcubing.AbstractInMemCubeBuilder$1.run(AbstractInMemCubeBuilder.java:82)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Value not exists!
at 
org.apache.kylin.common.util.Dictionary.getIdFromValueBytes(Dictionary.java:162)
at 
org.apache.kylin.dict.TrieDictionary.getIdFromValueImpl(TrieDictionary.java:167)
at 
org.apache.kylin.common.util.Dictionary.getIdFromValue(Dictionary.java:98)
at 
org.apache.kylin.dimension.DictionaryDimEnc$DictionarySerializer.serialize(DictionaryDimEnc.java:121)
at 
org.apache.kylin.cube.gridtable.CubeCodeSystem.encodeColumnValue(CubeCodeSystem.java:121)
at 
org.apache.kylin.cube.gridtable.CubeCodeSystem.encodeColumnValue(CubeCodeSystem.java:110)
at org.apache.kylin.gridtable.GTRecord.setValues(GTRecord.java:93)
at org.apache.kylin.gridtable.GTRecord.setValues(GTRecord.java:81)
at 
org.apache.kylin.cube.inmemcubing.InMemCubeBuilderInputConverter.convert(InMemCubeBuilderInputConverter.java:74)
at 
org.apache.kylin.cube.inmemcubing.InMemCubeBuilder$InputConverter$1.next(InMemCubeBuilder.java:544)
at 
org.apache.kylin.cube.inmemcubing.InMemCubeBuilder$InputConverter$1.next(InMemCubeBuilder.java:525)
at 
org.apache.kylin.gridtable.GTAggregateScanner.iterator(GTAggregateScanner.java:139)
at 
org.apache.kylin.cube.inmemcubing.InMemCubeBuilder.createBaseCuboid(InMemCubeBuilder.java:341)
at 
org.apache.kylin.cube.inmemcubing.InMemCubeBuilder.build(InMemCubeBuilder.java:168)
at 
org.apache.kylin.cube.inmemcubing.InMemCubeBuilder.build(InMemCubeBuilder.java:137)
at 
org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$SplitThread.run(DoggedCubeBuilder.java:284)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: build cube error

2016-10-18 Thread Li Yang
Please upgrade to the latest 1.5.4.1 if you have not yet. A few
dictionary-related issues were fixed in that release;
ArrayIndexOutOfBoundsException is another form of the same root cause. Also,
if the full log (especially the stack trace) can be provided, we can better
identify the problem.

https://issues.apache.org/jira/browse/KYLIN-1973
https://issues.apache.org/jira/browse/KYLIN-1834

On Mon, Oct 10, 2016 at 9:43 AM, 胡志华(万里通科技及数据中心商务智能团队数据分析组) <
huzhihua...@pingan.com.cn> wrote:

> Hi,all
>
>
>
> My cube build stopped at step 4 "Build Dimension Dictionary"; the error
> info is below. The web UI told me "java.lang.ArrayIndexOutOfBoundsException".
>
> I don't know how to solve it.
>
> [build logs snipped; they are identical to the log in the original message below]
>

build cube error

2016-10-09 Thread 万里通科技及数据中心商务智能团队数据分析组
Hi,all

My cube build stopped at step 4 "Build Dimension Dictionary"; the error info is
below. The web UI told me "java.lang.ArrayIndexOutOfBoundsException".
I don't know how to solve it.

2016-10-10 09:12:45,470 INFO  [pool-7-thread-10 DictionaryGeneratorCLI:58]: 
Building snapshot of WLT_PUB.REP_DATE_FOR_WEEK_INFO_DIMT0
2016-10-10 09:12:45,472 INFO  [http-bio-7070-exec-10 CacheController:64]: wipe 
cache type: CUBE event:UPDATE name:txn_cube_1009
2016-10-10 09:12:45,473 INFO  [http-bio-7070-exec-10 CacheService:169]: rebuild 
cache type: CUBE name:txn_cube_1009
2016-10-10 09:12:45,474 DEBUG [http-bio-7070-exec-10 CubeManager:855]: Reloaded 
new cube: txn_cube_1009 with reference beingCUBE[name=txn_cube_1009] having 1 
segments:KYLIN_YW3OW19Z7E
2016-10-10 09:12:45,474 INFO  [http-bio-7070-exec-10 CacheService:122]: 
removeOLAPDataSource is called for project WLT
2016-10-10 09:12:45,475 INFO  [http-bio-7070-exec-10 CacheService:104]: 
cleaning cache for 50fcf195-b40d-4248-97ac-8a43f7dee1c0 (currently remove all 
entries)
2016-10-10 09:12:45,475 DEBUG [http-bio-7070-exec-10 CubeService:611]: on 
updateOnNewSegmentReady: txn_cube_1009
2016-10-10 09:12:45,475 DEBUG [http-bio-7070-exec-10 CubeService:614]: server 
mode: all
2016-10-10 09:12:45,475 INFO  [http-bio-7070-exec-10 CubeService:623]: checking 
keepCubeRetention
2016-10-10 09:12:45,476 DEBUG [http-bio-7070-exec-10 CubeManager:666]: Cube 
txn_cube_1009 has bulding segment, will not trigger merge at this moment
2016-10-10 09:12:45,476 DEBUG [http-bio-7070-exec-10 CubeService:670]: Not 
ready for merge on cube txn_cube_1009
2016-10-10 09:12:45,577 INFO  [pool-7-thread-10 SnapshotManager:183]: Loading snapshotTable from /table_snapshot/rep_date_for_week_info_dimt0/1144fac0-3e03-42db-9947-c92baf740b92.snapshot, with loadData: false
2016-10-10 09:12:45,578 INFO  [pool-7-thread-10 SnapshotManager:183]: Loading snapshotTable from /table_snapshot/rep_date_for_week_info_dimt0/179f77d9-1fe5-4ed5-a758-4530d6da437a.snapshot, with loadData: false
2016-10-10 09:12:45,579 INFO  [pool-7-thread-10 SnapshotManager:183]: Loading snapshotTable from /table_snapshot/rep_date_for_week_info_dimt0/1cdaa7ed-5e76-4ace-95c1-938e18f78c14.snapshot, with loadData: false
2016-10-10 09:12:45,618 INFO  [pool-7-thread-10 SnapshotManager:99]: Identical input FileSignature [path=hdfs://hadoop2NameNode/wlt_pub/rep_date_for_week_info_dimt0, size=9431, lastModifiedTime=1475951797531], reuse existing snapshot at /table_snapshot/rep_date_for_week_info_dimt0/1cdaa7ed-5e76-4ace-95c1-938e18f78c14.snapshot
2016-10-10 09:12:45,618 INFO  [pool-7-thread-10 CubeManager:314]: Updating cube instance 'txn_cube_1009'
2016-10-10 09:12:45,618 WARN  [pool-7-thread-10 CubeValidator:102]: NEW segment start does not fit/connect with other segments: txn_cube_1009[2016010100_2016030100]
2016-10-10 09:12:45,618 WARN  [pool-7-thread-10 CubeValidator:104]: NEW segment end does not fit/connect with other segments: txn_cube_1009[2016010100_2016030100]
2016-10-10 09:12:45,622 DEBUG [pool-7-thread-10 HBaseResourceStore:257]: Update row /cube/txn_cube_1009.json from oldTs: 1476061965466, to newTs: 1476061965618, operation result: true
2016-10-10 09:12:45,622 INFO  [pool-7-thread-10 DictionaryGeneratorCLI:60]: Checking snapshot of WLT_PUB.REP_DATE_FOR_WEEK_INFO_DIMT0
2016-10-10 09:12:45,622 INFO  [pool-11-thread-1 Broadcaster:101]: new broadcast event:BroadcastEvent{type=cube, name=txn_cube_1009, action=update}
usage: CreateDictionaryJob
 -cubename      Cube name. For exmaple, flat_item_cube
 -input         Input path
 -segmentname   Cube segment name
2016-10-10 09:12:45,623 ERROR [pool-7-thread-10 HadoopShellExecutable:65]: error execute HadoopShellExecutable{id=fa447355-d9e8-475a-addf-f8d1fefd24c4-03, name=Build Dimension Dictionary, state=RUNNING}
java.lang.ArrayIndexOutOfBoundsException
2016-10-10 09:12:45,625 INFO  [http-bio-7070-exec-10 CacheController:64]: wipe cache type: CUBE event:UPDATE name:txn_cube_1009
2016-10-10 09:12:45,626 INFO  [http-bio-7070-exec-10 CacheService:169]: rebuild cache type: CUBE name:txn_cube_1009
2016-10-10 09:12:45,626 DEBUG [pool-7-thread-10 HBaseResourceStore:257]: Update row /execute_output/fa447355-d9e8-475a-addf-f8d1fefd24c4-03 from oldTs: 1476061933401, to newTs: 1476061965623, operation result: true
2016-10-10 09:12:45,626 DEBUG [http-bio-7070-exec-10 CubeManager:855]: Reloaded new cube: txn_cube_1009 with reference being CUBE[name=txn_cube_1009] having 1 segments:KYLIN_YW3OW19Z7E
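The usage text printed just before the ArrayIndexOutOfBoundsException suggests CreateDictionaryJob received missing or malformed arguments. The following is a hypothetical sketch, not Kylin's actual option parser, of how a naive option scan turns a missing option value into exactly this exception instead of a clean error message:

```java
// Hypothetical sketch (not Kylin's real CLI parsing): a naive scan that reads
// the token following an option name throws ArrayIndexOutOfBoundsException
// when the option is the last token on the command line.
public class ArgScanSketch {

    static String valueOf(String[] args, String option) {
        for (int i = 0; i < args.length; i++) {
            if (args[i].equals(option)) {
                return args[i + 1]; // AIOOBE when the option's value is missing
            }
        }
        return null; // option not present at all
    }

    public static void main(String[] argv) {
        try {
            valueOf(new String[] {"-cubename"}, "-cubename"); // value omitted
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("AIOOBE reproduced: missing value for -cubename");
        }
    }
}
```

If a step like this fails, verifying that all three options shown in the usage text (-cubename, -input, -segmentname) actually reach the job is a sensible first check.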





Re: build cube error

2016-04-13 Thread 陈佛林
Thanks.

When I replace the Hive view with other tables, everything is OK.

2016-04-13 17:02 GMT+08:00 wangxianbin1...@gmail.com <
wangxianbin1...@gmail.com>:

> hi!
>
> What do you mean? Are you using a Hive view as a lookup table?
>
> If so, it is a known issue; see
> https://issues.apache.org/jira/browse/KYLIN-1077
>
> best regards!
>
>
>
> wangxianbin1...@gmail.com
>


Re: build cube error

2016-04-13 Thread wangxianbin1...@gmail.com
hi!

What do you mean? Are you using a Hive view as a lookup table?

If so, it is a known issue; see
https://issues.apache.org/jira/browse/KYLIN-1077

best regards!



wangxianbin1...@gmail.com
 
From: 陈佛林
Date: 2016-04-13 16:51
To: dev
Subject: build cube error
All jobs on YARN are SUCCESS.
 
 
2016-04-13 15:54:58,765 WARN  [http-bio-7070-exec-1] service.CacheService:108 : skip cleaning cache for 668468ba-f470-4c37-9b12-d66aa44cef30

2016-04-13 15:54:58,903 ERROR [pool-5-thread-6] execution.AbstractExecutable:62 : error execute HadoopShellExecutable{id=4e9d2b2a-4f20-4b35-9dc6-bf2185f43486-02, name=Build Dimension Dictionary, state=RUNNING}

java.io.IOException: java.lang.NullPointerException
    at org.apache.kylin.source.hive.HiveTable.getSignature(HiveTable.java:71)
    at org.apache.kylin.dict.lookup.SnapshotTable.<init>(SnapshotTable.java:64)
    at org.apache.kylin.dict.lookup.SnapshotManager.buildSnapshot(SnapshotManager.java:89)
    at org.apache.kylin.cube.CubeManager.buildSnapshotTable(CubeManager.java:208)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:59)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:42)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:56)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:60)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:114)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:114)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:124)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.kylin.engine.mr.HadoopUtil.fixWindowsPath(HadoopUtil.java:78)
    at org.apache.kylin.engine.mr.HadoopUtil.makeURI(HadoopUtil.java:70)
    at org.apache.kylin.engine.mr.HadoopUtil.getFileSystem(HadoopUtil.java:65)
    at org.apache.kylin.engine.mr.DFSFileTable.getSizeAndLastModified(DFSFileTable.java:78)
    at org.apache.kylin.source.hive.HiveTable.getSignature(HiveTable.java:56)
    ... 16 more
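The root cause at the bottom of this trace can be sketched as follows. A Hive view has no backing HDFS data location, so the path string Kylin derives for the lookup table is null, and the first string operation on it throws the NullPointerException seen in HadoopUtil.fixWindowsPath; this matches the thread's observation that replacing the view with a plain table makes the build succeed (KYLIN-1077). The class and method below are illustrative stand-ins, not Kylin's actual code:

```java
import java.net.URI;

// Hedged sketch of the failure path: a null data location (what a Hive view
// yields) blows up before any URI can be built. Names are illustrative,
// not Kylin's real implementation.
public class ViewSignatureSketch {

    static URI makeURI(String filePath) {
        // Mirrors the shape of fixWindowsPath/makeURI: the very first touch
        // of a null path throws NullPointerException.
        if (filePath.startsWith("file://")) { // NPE here when filePath == null
            filePath = filePath.replace('\\', '/');
        }
        return URI.create(filePath);
    }

    public static void main(String[] args) {
        String viewLocation = null; // a lookup table defined as a Hive view has no location
        try {
            makeURI(viewLocation);
            System.out.println("built URI");
        } catch (NullPointerException e) {
            System.out.println("NPE: table has no data location (Hive view?)");
        }
    }
}
```

The practical takeaway, grounded in this thread, is to materialize the view into a regular Hive table before using it as a lookup table.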