Hmm... your Hadoop environment looks problematic: it uses localhost as the Hadoop
NN (NameNode). Is that correct?
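If the NameNode is not actually on localhost, the HDFS client configuration should point at the real host. A minimal sketch of what core-site.xml might look like (the hostname `namenode-host` below is a placeholder, not taken from your logs):

```xml
<!-- core-site.xml: fs.defaultFS must name the real NameNode host,
     not localhost, so MapReduce staging paths resolve correctly. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```

You can check what the client currently resolves with `hdfs getconf -confKey fs.defaultFS` on the Kylin server.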

2017-02-13 15:41 GMT+08:00 排骨瘦肉丁 <153563...@qq.com>:

> Kylin's cube build fails at step 3. Why? Is this a configuration error, and if so, where?
> java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/home/root/hadoop/tmp/mapred/staging/root398825514/.staging/job_local398825514_0001/libjars/hive-exec-1.2.1.jar
>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
>         at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
>         at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
>         at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>         at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:149)
>         at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:108)
>         at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:92)
>         at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
>         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
>         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
> result code:2
>
> --
> View this message in context: http://apache-kylin.74782.x6.nabble.com/Trouble-with-building-cube-at-step-3-cry-cry-tp7172.html
> Sent from the Apache Kylin mailing list archive at Nabble.com.
>



-- 
Best regards,

Shaofeng Shi 史少锋
