Hi all,

I have started to play with Kylin, which looks very promising.

I am connecting to an existing cluster that already contains Hive tables
and has HBase installed, and I am trying to build a POC that creates a
cube from those Hive tables.

The Hive configuration is in /etc/hive/conf/hive-site.xml and is picked
up correctly by the startup script. Initially the HBase dependency
resolution did not work, so I added

    --config /path/to/my/hbase/config

to the hbase command in bin/kylin.sh, and now it works fine.

I created a simple cube but I am now stuck on the second step, "Extract
Fact Table Distinct Columns", which fails with
"org.apache.hadoop.security.AccessControlException: User xxx cannot submit
applications to queue root.etl.default" (full stack trace below).

I don't really understand where this root.etl.default queue comes from.
I added

    <property>
        <name>mapred.job.queue.name</name>
        <value>myQueue</value>
    </property>

to my /etc/hive/conf/hive-site.xml file, and the step 1 MapReduce job then
ran correctly:


    Job Name:  INSERT OVERWRITE TABLE kylin...'2015-04-30') (Stage-3)
    User Name: myName
    Queue:     myQueue
    State:     SUCCEEDED


Am I missing something?

By the way, is there a way to see the actual job that Kylin is trying to
run? In the logs I only see this error.
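One thing I plan to try next, in case it is relevant: as far as I can tell, the MapReduce jobs that Kylin submits itself do not read hive-site.xml but take their Hadoop settings from conf/kylin_job_conf.xml inside the Kylin installation, so perhaps the queue has to be overridden there as well. Something like the following (mapreduce.job.queuename is the Hadoop 2 name for the older mapred.job.queue.name; I have not verified yet that this fixes the problem):

```xml
<property>
    <name>mapreduce.job.queuename</name>
    <value>myQueue</value>
    <description>Route the MR jobs that Kylin submits itself (such as
    "Extract Fact Table Distinct Columns") to this queue instead of the
    scheduler's default placement (root.etl.default in my case).</description>
</property>
```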

java.io.IOException: Failed to run job : org.apache.hadoop.security.AccessControlException: User afouchs cannot submit applications to queue root.etl.default
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:300)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
        at org.apache.kylin.job.hadoop.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:123)
        at org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:80)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:112)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

result code: 2





[pool-7-thread-2]:[2015-05-29 03:16:40,151][ERROR][org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:83)] - error in FactDistinctColumnsJob

java.io.IOException: Failed to run job : org.apache.hadoop.security.AccessControlException: User xxx cannot submit applications to queue root.etl.default
        (stack trace identical to the one above)

[pool-7-thread-2]:[2015-05-29 03:16:40,192][ERROR][org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:115)] - error execute MapReduceExecutable{id=bd0a5670-e5fc-43c7-b175-4dfe00f950d1-01, name=Extract Fact Table Distinct Columns, state=RUNNING}

java.io.IOException: Failed to run job : org.apache.hadoop.security.AccessControlException: User xxx cannot submit applications to queue root.etl.default
        (stack trace identical to the one above)
