Hi all,

I built a cube on a Hive fact table of about 300 million rows. All the
steps FINISHED successfully, but it seems I cannot run any query.

select * from myFactTable; (or any other query)

returns:
From line 1, column 15 to line 1, column 43: Table 'MYFACTTABLE' not found while executing SQL: "select * from myFactTable LIMIT 50000"
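
To rule out the UI querying the wrong project, my next idea is to send the same query through the REST API with the project set explicitly. Here is a rough sketch of what I mean (the host, project name and credentials are placeholders for my setup, and I am going by the /kylin/api/query endpoint and JSON fields described in the Kylin docs, so tell me if that is not the right way):

# Sketch: run the same query through Kylin's REST API with the project
# set explicitly, to rule out the UI pointing at the wrong project.
# Host, project name and credentials below are placeholders for my setup.
import base64
import json
import urllib.request

KYLIN_QUERY = "http://localhost:7070/kylin/api/query"   # placeholder host
AUTH = base64.b64encode(b"ADMIN:KYLIN").decode()         # default credentials

payload = json.dumps({
    "sql": "select * from myFactTable limit 10",
    "project": "my_project",   # placeholder project name
    "acceptPartial": False,
}).encode()

req = urllib.request.Request(KYLIN_QUERY, data=payload)
req.add_header("Content-Type", "application/json")
req.add_header("Authorization", "Basic " + AUTH)

with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.loads(resp.read()), indent=2))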

I tried completely rebuilding the cube after a purge, creating a new
project, etc., but it doesn't solve the issue.

It seems like a new KYLIN_ table has been created and contains data. How
can I know which HBase table corresponds to which cube?
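
If there is no better way, I was thinking of mapping them myself through the REST API, something like the sketch below (host and credentials are placeholders, and the /kylin/api/cubes endpoint and the storage_location_identifier field name are my guesses from the cube metadata I can see, so please correct me if that is wrong):

# Sketch: list cubes and the HBase table behind each segment, to map the
# KYLIN_* HTables back to cubes. The /kylin/api/cubes endpoint and the
# storage_location_identifier field name are assumptions on my part;
# host and credentials are placeholders.
import base64
import json
import urllib.request

req = urllib.request.Request("http://localhost:7070/kylin/api/cubes")  # placeholder host
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"ADMIN:KYLIN").decode())

with urllib.request.urlopen(req) as resp:
    cubes = json.loads(resp.read())

for cube in cubes:
    for seg in cube.get("segments", []):
        # storage_location_identifier is my guess for the HTable name field
        print(cube["name"], seg.get("name"), seg.get("storage_location_identifier"))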

The "Convert Cuboid Data to HFile" step took a while, the MapReduce job
stayed in Undefined status for a long time but finally Succeeded. When I
click on the link from the Kylin UI it send me to 7070/kylin/N/A instead of
the MapReduce job info, is that a bad sign?

In the logs I don't see any errors in the Build N-Dimension Cuboid Data,
Calculate HTable Region Splits, or Create HTable steps.

I am not sure if it's even related, but in the logs I can see this:

2015-06-04 10:17:10,910 INFO  [pool-7-thread-10] mapreduce.Job:  map 1% reduce 0%
2015-06-04 10:17:11,936 INFO  [pool-7-thread-10] mapreduce.Job: Task Id : attempt_1432057232815_49758_m_000003_2, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatInputFormat not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1961)
        at org.apache.hadoop.mapreduce.task.JobContextImpl.getInputFormatClass(JobContextImpl.java:174)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:726)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatInputFormat not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1867)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1959)
        ... 8 more
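
My first guess is that the hive-hcatalog jar is simply not on the MapReduce task classpath. In case it helps, this is roughly how I plan to check whether a hive-hcatalog-core jar is even present on the nodes (the paths below are just guesses for common distributions, nothing Kylin-specific):

# Quick check: does a hive-hcatalog-core jar exist in the usual places on
# this node? The paths are guesses for common Hadoop distributions.
import glob
import os

candidates = [
    "/usr/lib/hive-hcatalog/share/hcatalog/*.jar",
    "/usr/hdp/current/hive-webhcat/share/hcatalog/*.jar",
    os.path.join(os.environ.get("HCAT_HOME", ""), "share/hcatalog/*.jar"),
]

for pattern in candidates:
    for jar in glob.glob(pattern):
        if "hcatalog-core" in os.path.basename(jar):
            print("found:", jar)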



Despite those failed attempts, the job seems to go on anyway:

2015-06-04 10:17:15,127 INFO  [pool-7-thread-10] mapreduce.Job:  map 2% reduce 0%
2015-06-04 10:17:18,174 INFO  [pool-7-thread-10] mapreduce.Job:  map 3% reduce 0%
2015-06-04 10:17:20,202 INFO  [pool-7-thread-10] mapreduce.Job:  map 4% reduce 0%
2015-06-04 10:17:22,230 INFO  [pool-7-thread-10] mapreduce.Job:  map 5% reduce 0%
2015-06-04 10:17:23,244 INFO  [pool-7-thread-10] mapreduce.Job:  map 6% reduce 0%
...
2015-06-04 10:19:33,458 INFO  [pool-7-thread-10] mapreduce.Job:  map 100% reduce 100%
2015-06-04 10:19:33,481 INFO  [pool-7-thread-10] mapreduce.Job: Job job_1432057232815_49758 completed successfully


This comes right after the "Hive Column Cardinality calculation for table" step, and a step called:

+------+
| null |
+------+


Thanks for your help, I would really like to make this PoC work!
