Yes, in the YARN resource UI I found a more informative error; my small cube can now be built.
I have another, bigger cube to build and encountered a new error, sorry for that. I increased the 'hbase.rpc.timeout' property in hbase-site.xml via Ambari and then restarted HBase, but the new value somehow did not take effect. BTW, I am using HDP 2.5.

Map 1: 62(+14)/104
Map 1: 62(+14)/104

    at org.apache.kylin.common.util.CliCommandExecutor.execute(CliCommandExecutor.java:92)
    at org.apache.kylin.source.hive.CreateFlatHiveTableStep.createFlatHiveTable(CreateFlatHiveTableStep.java:90)
    at org.apache.kylin.source.hive.CreateFlatHiveTableStep.doWork(CreateFlatHiveTableStep.java:121)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

2017-02-14 13:59:50,453 ERROR [pool-8-thread-1] dao.ExecutableDao:148 : error get all Jobs:
java.io.IOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:206)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
    at org.apache.kylin.storage.hbase.HBaseResourceStore.visitFolder(HBaseResourceStore.java:137)
    at org.apache.kylin.storage.hbase.HBaseResourceStore.listResourcesImpl(HBaseResourceStore.java:107)
    at org.apache.kylin.common.persistence.ResourceStore.listResources(ResourceStore.java:121)
    at org.apache.kylin.job.dao.ExecutableDao.getJobIds(ExecutableDao.java:138)
    at org.apache.kylin.job.manager.ExecutableManager.getAllJobIds(ExecutableManager.java:207)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:85)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

2017-02-14 13:59:50,454 ERROR [pool-8-thread-1] manager.ExecutableManager:209 : error get All Job Ids
org.apache.kylin.job.exception.PersistentException: java.io.IOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.kylin.job.dao.ExecutableDao.getJobIds(ExecutableDao.java:149)
    at org.apache.kylin.job.manager.ExecutableManager.getAllJobIds(ExecutableManager.java:207)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:85)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to get result within timeout, timeout=60000ms
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:206)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
    at org.apache.kylin.storage.hbase.HBaseResourceStore.visitFolder(HBaseResourceStore.java:137)
    at org.apache.kylin.storage.hbase.HBaseResourceStore.listResourcesImpl(HBaseResourceStore.java:107)
    at org.apache.kylin.common.persistence.ResourceStore.listResources(ResourceStore.java:121)
    at org.apache.kylin.job.dao.ExecutableDao.getJobIds(ExecutableDao.java:138)
    ... 9 more
2017-02-14 13:59:50,454 WARN [pool-8-thread-1] threadpool.DefaultScheduler:120 : Job Fetcher caught a exception
java.lang.RuntimeException: org.apache.kylin.job.exception.PersistentException: java.io.IOException: Failed to get result within timeout, timeout=60000ms

________________________________
From: Billy Liu <billy...@apache.org>
Sent: Monday, February 13, 2017 21:46
To: dev
Subject: Re: Can not build sample cube

Please find more logs from the YARN resource UI; you may find the root cause there.
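[Editorial note: the `timeout=60000ms` in the scan failure above matches the default of `hbase.client.scanner.timeout.period`, a client-side setting that is separate from `hbase.rpc.timeout`; one thing to check is whether the copy of hbase-site.xml on Kylin's own classpath carries the new values, not just the server-side copy Ambari pushed. A minimal sketch of the two properties — the 300000 ms values are illustrative, not recommendations:]

```xml
<!-- hbase-site.xml fragment (illustrative values) -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>300000</value>
</property>
<property>
  <!-- governs scanner calls; its 60000 ms default matches the error above -->
  <name>hbase.client.scanner.timeout.period</name>
  <value>300000</value>
</property>
```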
2017-02-13 20:53 GMT+08:00 ? ? <biolearn...@hotmail.com>:
> Thanks.
>
> A further question: when I built my cube, it errored at step 10.
> Actually I did not see an obvious error, only that the status changed to
> ERROR, so could you tell me what the error was and how to locate the issue?
>
> 2017-02-13 12:46:07,232 DEBUG [http-bio-7070-exec-3] dao.ExecutableDao:210 : updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09
> 2017-02-13 12:46:07,234 DEBUG [http-bio-7070-exec-3] hbase.HBaseResourceStore:262 : Update row /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 from oldTs: 1486989893923, to newTs: 1486989967232, operation result: true
> 2017-02-13 12:46:07,234 INFO [http-bio-7070-exec-3] manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8-09 from ERROR to READY
> 2017-02-13 12:46:07,235 DEBUG [http-bio-7070-exec-3] dao.ExecutableDao:210 : updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8
> 2017-02-13 12:46:07,236 DEBUG [http-bio-7070-exec-3] hbase.HBaseResourceStore:262 : Update row /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8 from oldTs: 1486989893932, to newTs: 1486989967235, operation result: true
> 2017-02-13 12:46:07,236 INFO [http-bio-7070-exec-3] manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8 from ERROR to READY
> 2017-02-13 12:46:33,025 INFO [pool-8-thread-1] threadpool.DefaultScheduler:108 : CubingJob{id=60821d3b-6a2f-4789-a675-85d4109c4ed8, name=application_hours_cube - 20170122000000_20170122010000 - BUILD - GMT+08:00 2017-02-13 17:25:29, state=READY} prepare to schedule
> 2017-02-13 12:46:33,026 INFO [pool-8-thread-1] threadpool.DefaultScheduler:112 : CubingJob{id=60821d3b-6a2f-4789-a675-85d4109c4ed8, name=application_hours_cube - 20170122000000_20170122010000 - BUILD - GMT+08:00 2017-02-13 17:25:29, state=READY} scheduled
> 2017-02-13 12:46:33,026 INFO [pool-9-thread-10] execution.AbstractExecutable:99 : Executing
AbstractExecutable > (application_hours_cube - 20170122000000_20170122010000 - BUILD - GMT+08:00 > 2017-02-13 17:25:29) > 2017-02-13 12:46:33,026 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8 > 2017-02-13 12:46:33,028 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8 > from oldTs: 1486989967235, to newTs: 1486989993026, operation result: true > 2017-02-13 12:46:33,028 INFO [pool-9-thread-10] > manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8 > from READY to RUNNING > 2017-02-13 12:46:33,033 INFO [pool-8-thread-1] > threadpool.DefaultScheduler:118 : Job Fetcher: 0 should running, 1 actual > running, 1 ready, 1 already succeed, 3 error, 22 discarded, 0 others > 2017-02-13 12:46:33,034 INFO [pool-9-thread-10] > execution.AbstractExecutable:99 : Executing AbstractExecutable (Build > Cube) > 2017-02-13 12:46:33,035 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > 2017-02-13 12:46:33,036 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from oldTs: 1486989967232, to newTs: 1486989993035, operation result: true > 2017-02-13 12:46:33,036 INFO [pool-9-thread-10] > manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from READY to RUNNING > 2017-02-13 12:46:33,083 INFO [pool-9-thread-10] > common.MapReduceExecutable:112 : parameters of the MapReduceExecutable: > 2017-02-13 12:46:33,083 INFO [pool-9-thread-10] > common.MapReduceExecutable:113 : -conf /root/apache-kylin-1.6.0- > hbase1.x-bin/conf/kylin_job_conf_inmem.xml -cubename > application_hours_cube -segmentid 7b0c8f25-3a29-4ddd-b85f-1d45c3b4927c > -output /kylin/kylin_metadata/kylin-60821d3b-6a2f-4789-a675- > 85d4109c4ed8/application_hours_cube/cuboid/ -jobname > 
Kylin_Cube_Builder_application_hours_cube > -cubingJobId 60821d3b-6a2f-4789-a675-85d4109c4ed8 > 2017-02-13 12:46:33,106 INFO [pool-9-thread-10] steps.InMemCuboidJob:98 : > Starting: Kylin_Cube_Builder_application_hours_cube > 2017-02-13 12:46:33,106 INFO [pool-9-thread-10] > common.AbstractHadoopJob:163 : append job jar: /root/apache-kylin-1.6.0- > hbase1.x-bin/lib/kylin-job-1.6.0.jar > 2017-02-13 12:46:33,106 INFO [pool-9-thread-10] > common.AbstractHadoopJob:171 : append kylin.hbase.dependency: > /usr/hdp/2.5.0.0-1245/hbase/lib/hbase-common.jar to mapreduce.application. > classpath > 2017-02-13 12:46:33,106 INFO [pool-9-thread-10] > common.AbstractHadoopJob:188 : Hadoop job classpath is: > $PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/ > mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/ > mr-framework/hadoop/share/hadoop/common/*:$PWD/mr- > framework/hadoop/share/hadoop/common/lib/*:$PWD/mr- > framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/ > hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/ > share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/ > hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/ > *:/usr/hdp/2.5.0.0-1245/hadoop/lib/hadoop-lzo-0.6.0.2. > 5.0.0-1245.jar:/etc/hadoop/conf/secure,/usr/hdp/2.5.0.0- > 1245/hbase/lib/hbase-common.jar > 2017-02-13 12:46:33,106 INFO [pool-9-thread-10] > common.AbstractHadoopJob:200 : Hive Dependencies Before Filtered: > /usr/hdp/current/hive-client/conf,/usr/hdp/2.5.0.0-1245/ > hive/lib/HikariCP-1.3.9.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > ST4-4.0.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/accumulo-core- > 1.7.0.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > accumulo-fate-1.7.0.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/accumulo-start-1.7.0.2.5.0.0-1245.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/accumulo-trace-1.7.0.2.5.0.0- > 1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/activation-1.1. 
> jar,/usr/hdp/2.5.0.0-1245/hive/lib/ant-1.9.1.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/ant-launcher-1.9.1.jar,/usr/hdp/2. > 5.0.0-1245/hive/lib/antlr-2.7.7.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/antlr-runtime-3.4.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/apache-log4j-extras-1.2.17.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/asm-commons-3.1.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/asm-tree-3.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > avatica-1.8.0.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/avatica-metrics-1.8.0.2.5.0.0-1245.jar,/usr/hdp/2. > 5.0.0-1245/hive/lib/avro-1.7.5.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/bonecp-0.8.0.RELEASE.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/commons-cli-1.2.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > commons-codec-1.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > commons-collections-3.2.2.jar,/usr/hdp/2.5.0.0-1245/hive/ > lib/commons-compiler-2.7.6.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/commons-compress-1.4.1.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/commons-dbcp-1.4.jar,/usr/hdp/2.5.0.0-1245/hive/ > lib/commons-httpclient-3.0.1.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/commons-io-2.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > commons-lang-2.6.jar,/usr/hdp/2.5.0.0-1245/hive/lib/commons- > logging-1.1.3.jar,/usr/hdp/2.5.0.0-1245/hive/lib/commons- > math-2.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/commons-pool-1. > 5.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/commons-vfs2-2.0.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/curator-client-2.6.0.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/curator-framework-2.6.0.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/curator-recipes-2.6.0.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/datanucleus-api-jdo-4.2.1.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/datanucleus-core-4.1.6. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/datanucleus-rdbms-4. 
> 1.7.jar,/usr/hdp/2.5.0.0-1245/hive/lib/derby-10.10.2.0.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/dropwizard-metrics-hadoop- > metrics2-reporter-0.1.2.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > eigenbase-properties-1.1.5.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/geronimo-annotation_1.0_spec-1.1.1.jar,/usr/hdp/2. > 5.0.0-1245/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/geronimo-jta_1.1_spec-1.1.1. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/groovy-all-2.4.4.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/guava-14.0.1.jar,/usr/hdp/ > 2.5.0.0-1245/hive/lib/hive-accumulo-handler-1.2.1000.2.5. > 0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-accumulo- > handler.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-ant-1.2. > 1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-ant.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-beeline-1. > 2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-beeline.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-cli-1. > 2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-cli.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-common-1.2. > 1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-common.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive- > contrib-1.2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/hive-contrib.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-exec-1.2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/hive-exec.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-hbase-handler-1.2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0. > 0-1245/hive/lib/hive-hbase-handler.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/hive-hwi-1.2.1000.2.5.0.0-1245.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/hive-hwi.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/hive-jdbc-1.2.1000.2.5.0.0-1245-standalone. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-jdbc-1.2.1000.2. > 5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-jdbc.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/hive-metastore-1.2.1000.2. 
> 5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-metastore.jar,/usr/hdp/ > 2.5.0.0-1245/hive/lib/hive-serde-1.2.1000.2.5.0.0-1245.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/hive-serde.jar,/usr/hdp/2.5.0. > 0-1245/hive/lib/hive-service-1.2.1000.2.5.0.0-1245.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/hive-service.jar,/usr/hdp/2.5. > 0.0-1245/hive/lib/hive-shims-0.20S-1.2.1000.2.5.0.0-1245. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-shims-0.23-1.2. > 1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-shims-1.2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/hive-shims-common-1.2.1000.2.5.0.0-1245. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-shims-common. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-shims-scheduler- > 1.2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-shims-scheduler.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > hive-shims.jar,/usr/hdp/2.5.0.0-1245/hive/lib/htrace-core-3. > 1.0-incubating.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > httpclient-4.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/httpcore- > 4.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ivy-2.4.0.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/jackson-annotations-2.4.0.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/jackson-core-2.4.2.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/jackson-databind-2.4.2.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/janino-2.7.6.jar,/usr/hdp/2.5. > 0.0-1245/hive/lib/javassist-3.18.1-GA.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/javax.jdo-3.2.0-m3.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/jcommander-1.32.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > jdo-api-3.0.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/jetty-all- > 7.6.0.v20120127.jar,/usr/hdp/2.5.0.0-1245/hive/lib/jetty- > all-server-7.6.0.v20120127.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/jline-2.12.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > jpam-1.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/jta-1.1.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/json-20090211.jar,/usr/hdp/2. 
> 5.0.0-1245/hive/lib/jsr305-3.0.0.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/libfb303-0.9.3.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > libthrift-0.9.3.jar,/usr/hdp/2.5.0.0-1245/hive/lib/log4j-1. > 2.16.jar,/usr/hdp/2.5.0.0-1245/hive/lib/mail-1.4.1.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/maven-scm-api-1.4.jar,/usr/ > hdp/2.5.0.0-1245/hive/lib/maven-scm-provider-svn- > commons-1.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/maven-scm- > provider-svnexe-1.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > metrics-core-3.1.0.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > metrics-json-3.1.0.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > metrics-jvm-3.1.0.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > mysql-connector-java.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > netty-3.7.0.Final.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > ojdbc6.jar,/usr/hdp/2.5.0.0-1245/hive/lib/opencsv-2.3.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/oro-2.0.8.jar,/usr/hdp/2. > 5.0.0-1245/hive/lib/paranamer-2.3.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/parquet-hadoop-bundle-1.8.1.jar,/usr/hdp/2.5. > 0.0-1245/hive/lib/pentaho-aggdesigner-algorithm-5.1.5- > jhyde.jar,/usr/hdp/2.5.0.0-1245/hive/lib/plexus-utils-1. > 5.6.jar,/usr/hdp/2.5.0.0-1245/hive/lib/protobuf-java-2.5.0. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/ranger-hive-plugin- > impl/eclipselink-2.5.2.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > ranger-hive-plugin-impl/gson-2.2.4.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/ranger-hive-plugin-impl/httpclient-4.5.2. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/ranger-hive-plugin- > impl/httpcore-4.4.4.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > ranger-hive-plugin-impl/httpmime-4.5.2.jar,/usr/hdp/2. > 5.0.0-1245/hive/lib/ranger-hive-plugin-impl/javax. > persistence-2.1.0.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > ranger-hive-plugin-impl/noggit-0.6.jar,/usr/hdp/2.5.0. > 0-1245/hive/lib/ranger-hive-plugin-impl/ranger-hive- > plugin-0.6.0.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/ > lib/ranger-hive-plugin-impl/ranger-plugins-audit-0.6.0.2. 
> 5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ranger-hive- > plugin-impl/ranger-plugins-common-0.6.0.2.5.0.0-1245.jar, > /usr/hdp/2.5.0.0-1245/hive/lib/ranger-hive-plugin-impl/ > ranger-plugins-cred-0.6.0.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0- > 1245/hive/lib/ranger-hive-plugin-impl/solr-solrj-5.5.1. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/ranger-hive-plugin- > shim-0.6.0.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > ranger-plugin-classloader-0.6.0.2.5.0.0-1245.jar,/usr/hdp/2. > 5.0.0-1245/hive/lib/regexp-1.3.jar,/usr/hdp/2.5.0.0-1245/ > hive/lib/servlet-api-2.5.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > snappy-java-1.0.5.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > stax-api-1.0.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > stringtemplate-3.2.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > super-csv-2.2.0.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > transaction-api-1.1.jar,/usr/hdp/2.5.0.0-1245/hive/lib/ > velocity-1.5.jar,/usr/hdp/2.5.0.0-1245/hive/lib/xz-1.0.jar,/ > usr/hdp/2.5.0.0-1245/hive/lib/zookeeper-3.4.6.2.5.0.0-1245. > jar,/usr/hdp/2.5.0.0-1245/hive/lib/joda-time-2.8.1.jar,/ > usr/hdp/2.5.0.0-1245/hive-hcatalog/share/hcatalog/hive- > hcatalog-core-1.2.1000.2.5.0.0-1245.jar > 2017-02-13 12:46:33,123 INFO [pool-9-thread-10] > common.AbstractHadoopJob:202 : Hive Dependencies After Filtered: > /usr/hdp/2.5.0.0-1245/hive/lib/hive-exec-1.2.1000.2.5.0. > 0-1245.jar,/usr/hdp/2.5.0.0-1245/hive/lib/hive-metastore- > 1.2.1000.2.5.0.0-1245.jar,/usr/hdp/2.5.0.0-1245/hive- > hcatalog/share/hcatalog/hive-hcatalog-core-1.2.1000.2.5.0.0-1245.jar > 2017-02-13 12:46:33,123 INFO [pool-9-thread-10] > common.AbstractHadoopJob:230 : Kafka Dependencies: > 2017-02-13 12:46:33,132 INFO [pool-9-thread-10] > common.AbstractHadoopJob:358 : Job 'tmpjars' updated -- > file:/usr/hdp/2.5.0.0-1245/hive/lib/hive-exec-1.2.1000.2. > 5.0.0-1245.jar,file:/usr/hdp/2.5.0.0-1245/hive/lib/hive- > metastore-1.2.1000.2.5.0.0-1245.jar,file:/usr/hdp/2.5.0. > 0-1245/hive-hcatalog/share/hcatalog/hive-hcatalog-core-1. 
> 2.1000.2.5.0.0-1245.jar > 2017-02-13 12:46:33,133 INFO [pool-9-thread-10] common.KylinConfig:120 : > The URI /root/apache-kylin-1.6.0-hbase1.x-bin/bin/../tomcat/ > temp/kylin_job_meta4689348989911515024/meta is recognized as LOCAL_FOLDER > 2017-02-13 12:46:33,133 INFO [pool-9-thread-10] common.KylinConfig:267 : > New KylinConfig 231493112 > 2017-02-13 12:46:33,133 INFO [pool-9-thread-10] > common.KylinConfigBase:130 : Kylin Config was updated with > kylin.metadata.url : /root/apache-kylin-1.6.0-hbase1.x-bin/bin/../tomcat/ > temp/kylin_job_meta4689348989911515024/meta > 2017-02-13 12:46:33,133 INFO [pool-9-thread-10] > persistence.ResourceStore:80 : Using metadata url /root/apache-kylin-1.6.0- > hbase1.x-bin/bin/../tomcat/temp/kylin_job_meta4689348989911515024/meta > for resource store > 2017-02-13 12:46:33,134 DEBUG [pool-9-thread-10] > persistence.ResourceStore:207 : Directly saving resource > /cube/application_hours_cube.json (Store /root/apache-kylin-1.6.0- > hbase1.x-bin/bin/../tomcat/temp/kylin_job_meta4689348989911515024/meta) > 2017-02-13 12:46:33,134 DEBUG [pool-9-thread-10] > persistence.ResourceStore:207 : Directly saving resource > /model_desc/application_hours.json (Store /root/apache-kylin-1.6.0- > hbase1.x-bin/bin/../tomcat/temp/kylin_job_meta4689348989911515024/meta) > 2017-02-13 12:46:33,135 DEBUG [pool-9-thread-10] > persistence.ResourceStore:207 : Directly saving resource > /cube_desc/application_hours_cube.json (Store /root/apache-kylin-1.6.0- > hbase1.x-bin/bin/../tomcat/temp/kylin_job_meta4689348989911515024/meta) > 2017-02-13 12:46:33,135 DEBUG [pool-9-thread-10] > persistence.ResourceStore:207 : Directly saving resource > /table/DEFAULT.APPLICATION_HOURS_1_22_2.json (Store > /root/apache-kylin-1.6.0-hbase1.x-bin/bin/../tomcat/temp/kylin_job_ > meta4689348989911515024/meta) > 2017-02-13 12:46:33,136 DEBUG [pool-9-thread-10] > persistence.ResourceStore:207 : Directly saving resource > /dict/DEFAULT.APPLICATION_HOURS_1_22_2/ORG_ID/f6c994f8- > 
7abe-41d9-9322-51d0a26e3ecf.dict (Store /root/apache-kylin-1.6.0- > hbase1.x-bin/bin/../tomcat/temp/kylin_job_meta4689348989911515024/meta) > 2017-02-13 12:46:33,136 DEBUG [pool-9-thread-10] > persistence.ResourceStore:207 : Directly saving resource > /dict/DEFAULT.APPLICATION_HOURS_1_22_2/APPLICATION_ID/ > 75268236-fc9a-43d8-80ec-c067824c7a4c.dict (Store /root/apache-kylin-1.6.0- > hbase1.x-bin/bin/../tomcat/temp/kylin_job_meta4689348989911515024/meta) > 2017-02-13 12:46:33,136 INFO [pool-9-thread-10] > common.AbstractHadoopJob:499 : HDFS meta dir is: > file:///root/apache-kylin-1.6.0-hbase1.x-bin/bin/../tomcat/temp/kylin_job_ > meta4689348989911515024/meta > 2017-02-13 12:46:33,136 INFO [pool-9-thread-10] > common.AbstractHadoopJob:372 : Job 'tmpfiles' updated -- > file:///root/apache-kylin-1.6.0-hbase1.x-bin/bin/../tomcat/temp/kylin_job_ > meta4689348989911515024/meta > 2017-02-13 12:46:33,231 INFO [pool-9-thread-10] > common.CubeStatsReader:215 : Cube is not memory hungry, storage size > estimation multiply 0.25 > 2017-02-13 12:46:33,231 INFO [pool-9-thread-10] > common.CubeStatsReader:218 : Cuboid 1 has 268 rows, each row size is 22 > bytes. Total size is 0.0014057159423828125M. > 2017-02-13 12:46:33,231 INFO [pool-9-thread-10] > common.CubeStatsReader:215 : Cube is not memory hungry, storage size > estimation multiply 0.25 > 2017-02-13 12:46:33,231 INFO [pool-9-thread-10] > common.CubeStatsReader:218 : Cuboid 2 has 254 rows, each row size is 22 > bytes. Total size is 0.0013322830200195312M. > 2017-02-13 12:46:33,231 INFO [pool-9-thread-10] > common.CubeStatsReader:215 : Cube is not memory hungry, storage size > estimation multiply 0.25 > 2017-02-13 12:46:33,231 INFO [pool-9-thread-10] > common.CubeStatsReader:218 : Cuboid 3 has 37334 rows, each row size is 24 > bytes. Total size is 0.21362686157226562M. 
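[Editorial note: the cuboid storage estimates in the quoted log appear to be simple arithmetic — rows × row size × 0.25 (the "not memory hungry" factor CubeStatsReader logs), converted to MB. Checking the Cuboid 1 figure reproduces the logged value exactly:]

```shell
# Cuboid 1: 268 rows x 22 bytes, scaled by the 0.25 "not memory hungry"
# factor and converted to MB (1024 * 1024 bytes).
awk 'BEGIN { printf "%.19f\n", 268 * 22 * 0.25 / (1024 * 1024) }'
# → 0.0014057159423828125, matching "Total size is 0.0014057159423828125M"
```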
> 2017-02-13 12:46:33,232 INFO [pool-9-thread-10] steps.InMemCuboidJob:158 > : Having total map input MB 0 > 2017-02-13 12:46:33,232 INFO [pool-9-thread-10] steps.InMemCuboidJob:159 > : Having per reduce MB 500.0 > 2017-02-13 12:46:33,232 INFO [pool-9-thread-10] steps.InMemCuboidJob:160 > : Setting mapred.reduce.tasks=1 > 2017-02-13 12:46:33,304 INFO [pool-9-thread-10] > impl.TimelineClientImpl:299 : Timeline service address: > http://sandbox.hortonworks.com:8188/ws/v1/timeline/ > 2017-02-13 12:46:33,304 INFO [pool-9-thread-10] client.RMProxy:125 : > Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050 > 2017-02-13 12:46:33,305 INFO [pool-9-thread-10] client.AHSProxy:42 : > Connecting to Application History server at sandbox.hortonworks.com/172. > 17.0.2:10200 > 2017-02-13 12:46:33,918 INFO [pool-9-thread-10] > mapred.FileInputFormat:249 : Total input paths to process : 1 > 2017-02-13 12:46:33,932 INFO [pool-9-thread-10] > mapreduce.JobSubmitter:198 : number of splits:1 > 2017-02-13 12:46:33,945 INFO [pool-9-thread-10] > mapreduce.JobSubmitter:287 : Submitting tokens for job: > job_1486969096283_0022 > 2017-02-13 12:46:33,961 INFO [pool-9-thread-10] impl.YarnClientImpl:279 : > Submitted application application_1486969096283_0022 > 2017-02-13 12:46:33,964 INFO [pool-9-thread-10] mapreduce.Job:1294 : The > url to track the job: http://sandbox.hortonworks. 
> com:8088/proxy/application_1486969096283_0022/ > 2017-02-13 12:46:33,964 INFO [pool-9-thread-10] > common.AbstractHadoopJob:506 : tempMetaFileString is : > file:///root/apache-kylin-1.6.0-hbase1.x-bin/bin/../tomcat/temp/kylin_job_ > meta4689348989911515024/meta > 2017-02-13 12:46:33,967 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > 2017-02-13 12:46:33,972 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from oldTs: 1486989993035, to newTs: 1486989993967, operation result: true > 2017-02-13 12:46:43,974 INFO [pool-9-thread-10] > mapred.ClientServiceDelegate:278 : Application state is completed. > FinalApplicationStatus=KILLED. Redirecting to job history server > 2017-02-13 12:46:43,995 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > 2017-02-13 12:46:43,997 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from oldTs: 1486989993967, to newTs: 1486990003996, operation result: true > 2017-02-13 12:46:54,002 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > 2017-02-13 12:46:54,004 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from oldTs: 1486990003996, to newTs: 1486990014002, operation result: true > 2017-02-13 12:46:54,005 INFO [pool-9-thread-10] > manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from RUNNING to ERROR > 2017-02-13 12:46:54,005 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > 2017-02-13 12:46:54,007 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > 
/execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from oldTs: 1486990014002, to newTs: 1486990014005, operation result: true > 2017-02-13 12:46:54,008 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > 2017-02-13 12:46:54,009 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from oldTs: 1486990014005, to newTs: 1486990014008, operation result: true > 2017-02-13 12:46:54,009 INFO [pool-9-thread-10] > manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8-09 > from ERROR to ERROR > 2017-02-13 12:46:54,015 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8 > 2017-02-13 12:46:54,017 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8 > from oldTs: 1486989993026, to newTs: 1486990014015, operation result: true > 2017-02-13 12:46:54,018 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8 > 2017-02-13 12:46:54,019 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8 > from oldTs: 1486990014015, to newTs: 1486990014018, operation result: true > 2017-02-13 12:46:54,020 DEBUG [pool-9-thread-10] dao.ExecutableDao:210 : > updating job output, id: 60821d3b-6a2f-4789-a675-85d4109c4ed8 > 2017-02-13 12:46:54,021 DEBUG [pool-9-thread-10] > hbase.HBaseResourceStore:262 : Update row > /execute_output/60821d3b-6a2f-4789-a675-85d4109c4ed8 > from oldTs: 1486990014018, to newTs: 1486990014020, operation result: true > 2017-02-13 12:46:54,021 INFO [pool-9-thread-10] > manager.ExecutableManager:292 : job id:60821d3b-6a2f-4789-a675-85d4109c4ed8 > from RUNNING to ERROR > 2017-02-13 12:46:54,021 WARN [pool-9-thread-10] > execution.AbstractExecutable:247 : no 
need to send email, user list is empty
> 2017-02-13 12:46:54,032 INFO [pool-8-thread-1] threadpool.DefaultScheduler:118 : Job Fetcher: 0 should running, 0 actual running, 0 ready, 1 already succeed, 4 error, 22 discarded, 0 others
>
> Lei Wang
>
> ________________________________
> From: Billy Liu <billy...@apache.org>
> Sent: Wednesday, February 8, 2017 9:56
> To: dev
> Subject: Re: Can not build sample cube
>
> Kylin should be installed on a Hadoop client/edge node, as it loads the
> necessary Hadoop configuration from the environment. Only the customized
> config files are located in the Kylin conf directory; the default Hadoop
> configuration (*-site.xml) should exist on the Hadoop client node. You
> could check find-*-dependency.sh for further information.
>
> 2017-02-07 21:44 GMT+08:00 ? ? <biolearn...@hotmail.com>:
> >
> > Right now I can build the sample cube - all of these weird environment
> > issues went away after I changed to installing Kylin inside HDP 2.5;
> > before that I had installed Kylin on the external docker host instead
> > of inside the HDP 2.5 docker container.
> >
> > So is it expected to install Kylin inside HDP? I guess so.
> >
> > Thx
> > Lei Wang
> >
> > ________________________________
> > From: ? ? <biolearn...@hotmail.com>
> > Sent: Monday, February 6, 2017 21:54
> > To: dev@kylin.apache.org
> > Subject: Re: Problem while querying Cube
> >
> > My bad, :)
> > I will go back to my own thread when needed.
> >
> > Thx
> > Lei Wang
> >
> > Sent from my iPhone
> >
> > > On 2017-02-06, 21:52, ShaoFeng Shi <shaofeng...@gmail.com> wrote:
> > >
> > > seems to be :) pls ignore and go ahead.
> > > Lei, pls use your original thread to discuss the problem.
> > >
> > > Get Outlook for iOS
> > >
> > > On Mon, Feb 6, 2017 at 10:39 PM +0900, "hongbin ma" <mahong...@apache.org> wrote:
> > >
> > > are we mixing two separate issues in the same thread here?
On Mon, Feb 6, 2017 at 9:26 PM, ShaoFeng Shi wrote:

NoSuchObjectException(message:default.kylin_intermediate_kylin_sales_cube_desc_3a41df7d_93e1_4445_9

Usually this error indicates that hive-site.xml wasn't on the classpath. A quick workaround is copying that file to $KYLIN_HOME/conf; the ultimate solution is checking why bin/find-hive-dependency.sh didn't find it automatically.

On Mon, Feb 6, 2017 at 6:43 PM +0900, "Lei Wang" wrote:

I manually added core-site.xml under the Kylin conf directory and then started Kylin. When I build the sample, it gets past the place where it went wrong yesterday, but errors out at another place.

cat core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://sandbox.hortonworks.com:8020</value>
  </property>
</configuration>

2017-02-06 09:34:53,357 INFO [pool-8-thread-3] metastore.MetaStoreDirectSql:140 : Using direct SQL, underlying DB is DERBY
2017-02-06 09:34:53,359 INFO [pool-8-thread-3] metastore.ObjectStore:273 : Initialized ObjectStore
2017-02-06 09:34:53,590 INFO [pool-8-thread-3] metastore.HiveMetaStore:664 : Added admin role in metastore
2017-02-06 09:34:53,592 INFO [pool-8-thread-3] metastore.HiveMetaStore:673 : Added public role in metastore
2017-02-06 09:34:53,651 INFO [pool-8-thread-3] metastore.HiveMetaStore:713 : No user is added in admin role, since config is empty
2017-02-06 09:34:53,770 INFO [pool-8-thread-3] metastore.HiveMetaStore:747 : 0: get_databases: NonExistentDatabaseUsedForHealthCheck
2017-02-06 09:34:53,771 INFO [pool-8-thread-3] HiveMetaStore.audit:372 : ugi=root ip=unknown-ip-addr cmd=get_databases: NonExistentDatabaseUsedForHealthCheck
2017-02-06 09:34:53,791 INFO [pool-8-thread-3] metastore.HiveMetaStore:747 : 0: get_table : db=default tbl=kylin_intermediate_kylin_sales_cube_desc_3a41df7d_93e1_4445_9ca0_882f5f6e9d10
2017-02-06 09:34:53,791 INFO [pool-8-thread-3] HiveMetaStore.audit:372 : ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=kylin_intermediate_kylin_sales_cube_desc_3a41df7d_93e1_4445_9ca0_882f5f6e9d10
2017-02-06 09:34:53,812 INFO [pool-8-thread-3] common.AbstractHadoopJob:506 : tempMetaFileString is : null
2017-02-06 09:34:53,814 ERROR [pool-8-thread-3] common.MapReduceExecutable:127 : error execute MapReduceExecutable{id=baa9531d-fc11-4a60-aa5e-a069d4bee3c2-02, name=Extract Fact Table Distinct Columns, state=RUNNING}
java.lang.RuntimeException: java.io.IOException: NoSuchObjectException(message:default.kylin_intermediate_kylin_sales_cube_desc_3a41df7d_93e1_4445_9ca0_882f5f6e9d10 table not found)
    at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:110)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:119)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:103)
    at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:92)
    at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: NoSuchObjectException(message:default.kylin_intermediate_kylin_sales_cube_desc_3a41df7d_93e1_4445_9ca0_882f5f6e9d10 table not found)
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
    at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:105)
    ... 11 more
Caused by: NoSuchObjectException(message:default.kylin_intermediate_kylin_sales_cube_desc_3a41df7d_93e1_4445_9ca0_882f5f6e9d10 table not found)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1806)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1776)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
    at com.sun.proxy.$Proxy49.get_table(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1209)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
    at com.sun.proxy.$Proxy50.getTable(Unknown Source)
    at org.apache.hive.hcatalog.common.HCatUtil.getTable(HCatUtil.java:180)
    at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:105)
    at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
    ... 13 more
2017-02-06 09:34:53,817 DEBUG [pool-8-thread-3] dao.ExecutableDao:210 : updating job output, id: baa9531d-fc11-4a60-aa5e-a069d4bee3c2-02

Thx
Lei Wang

________________________________
From: ShaoFeng Shi
Sent: Sunday, February 5, 2017 22:19
To: dev
Subject: Re: Problem while querying Cube

Most of the developers are on festival vacation, so please allow some delay; btw, could you please open a JIRA for it? It looks like a bug.

2017-02-05 21:42 GMT+08:00 Shailesh Prajapati:

What am I supposed to do now to make queries work with limits? Because I keep getting this exception. What will happen if we remove this check?

Thanks.
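ShaoFeng's quick workaround above (copy hive-site.xml into Kylin's conf directory when find-hive-dependency.sh cannot locate it) can be sketched as a small script. The source path /etc/hive/conf is a typical HDP location and $KYLIN_HOME's default here is an assumption; adjust both for your installation:

```shell
#!/bin/sh
# Sketch of the workaround: put hive-site.xml where Kylin will pick it up.
# /etc/hive/conf and the KYLIN_HOME default are illustrative assumptions.

copy_hive_site() {
    # $1 = source conf dir, $2 = Kylin conf dir
    if [ -r "$1/hive-site.xml" ]; then
        cp "$1/hive-site.xml" "$2/" && echo "copied hive-site.xml to $2"
    else
        echo "no readable hive-site.xml under $1" >&2
        return 1
    fi
}

KYLIN_HOME="${KYLIN_HOME:-/usr/local/kylin}"
copy_hive_site /etc/hive/conf "$KYLIN_HOME/conf" || true

# Restart Kylin afterwards so the new classpath entry takes effect:
#   $KYLIN_HOME/bin/kylin.sh stop && $KYLIN_HOME/bin/kylin.sh start
```

This only papers over the symptom; the thread's longer-term advice is to find out why bin/find-hive-dependency.sh missed the file in the first place.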
On Sat, Feb 4, 2017 at 8:12 PM, ShaoFeng Shi wrote:

I don't know why that check is necessary, as there is a TODO which says it will be removed someday:
https://github.com/apache/kylin/blob/master/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/SortedIteratorMergerWithLimit.java#L127

@Hongbin, any idea?

2017-02-04 17:05 GMT+08:00 Shailesh Prajapati:

Hi ShaoFeng,

Here is the gist link:
https://gist.github.com/shaipraj/780a3dcc80aa2080911b7348c76f5b88

Thanks.

On Sat, Feb 4, 2017 at 2:24 PM, ShaoFeng Shi wrote:

Hi Shailesh, there is no attachment (attachments aren't supported by the mailing list); can you paste the content directly or put it in a gist?

2017-02-04 15:49 GMT+08:00 Shailesh Prajapati:

Hi ShaoFeng,

I am attaching a portion of Kylin's log.

Thanks.
On Sat, Feb 4, 2017 at 12:59 PM, ShaoFeng Shi <shaofeng...@apache.org> wrote:

Hi Shailesh,

Could you please provide the error trace? We need to know where the error got thrown. Thanks.

2017-02-03 18:06 GMT+08:00 Shailesh Prajapati <shail...@infoworks.io>:

Hi,

We are running Kylin 1.6 and have successfully built a cube on it. Aggregate queries run fine, but with non-aggregate queries we get the following exception:

org.apache.kylin.rest.exception.InternalErrorException: Not sorted! last: CUSTOMER_ID=0,ORDER_ID=10345,QUANTITY=null ... and other columns.

Query used: select ORDERS.STATUS from ORDER_DETAILS as ORDERS limit 5;

One more observation: with a limit of less than 5, even non-aggregate queries work.
Please help us resolve this issue; let us know if you need any other information.

Thanks

--
Best regards,

Shaofeng Shi

--
Shailesh
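For context on the "Not sorted!" exception above: SortedIteratorMergerWithLimit merges several partial result streams under a LIMIT, and that only works if every input stream already arrives in order, so the code asserts sortedness and aborts when a row comes back out of order. Conceptually this is the same check `sort -c` performs; the row keys below are made up purely for illustration:

```shell
#!/bin/sh
# Illustration only: a merge-with-limit assumes each input stream is sorted.
# sort -c fails on out-of-order input the same way Kylin's check does.

stream=$(mktemp)
printf 'CUSTOMER_ID=0\nCUSTOMER_ID=2\nCUSTOMER_ID=1\n' > "$stream"

if sort -c "$stream" 2>/dev/null; then
    echo "rows arrive in order - safe to merge under a limit"
else
    echo "rows out of order - this is the condition the Kylin check rejects"
fi
rm -f "$stream"
```

This does not explain why the storage scan returned unordered rows in Kylin 1.6 (the thread leaves that open and suggests filing a JIRA); it only shows why the assertion exists at the merge step.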
--
Regards,

Bin Mahone