kylin.hdfs.working.dir=/tmp

Usually the cube build runs smoothly without any such issue; it is only this
time, when I have a lookup table joined on a particular column, that I am
running into it.

Any quick resolve would be highly appreciated!

Thanks,
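Since the working dir above sits under hdfs:///tmp, a periodic /tmp cleanup
job can delete Kylin's intermediate cube files mid-build, which matches the
FileNotFoundException below. A hedged sketch of relocating it (the target
path /kylin and the restart command are assumptions; adjust to your install):

```shell
# Check whether the intermediate output the job expects still exists on HDFS.
# If the directory is gone, a /tmp cleanup likely removed it.
hdfs dfs -ls /tmp/kylin-5a2ea405-24a2-45ed-958e-2a7fddd8cc97

# Create a dedicated working dir outside /tmp (path is an assumption).
hdfs dfs -mkdir -p /kylin

# In conf/kylin.properties, point the working dir at the new location:
#   kylin.hdfs.working.dir=/kylin
# Then restart Kylin and re-run the cube build so intermediates are
# written to a path that cleanup jobs will not touch.
```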

On Fri, Jul 24, 2015 at 11:55 AM, Maliakkal Padmanabhan, Aroop <
[email protected]> wrote:

> Have seen these issues after a cleanup of hdfs:///tmp .
> What is the value of kylin.hdfs.working.dir in your cluster ?
>
> Thanks,
> /Aroop
>
>
>
> On 7/24/15, 11:21 AM, "Vineet Mishra" <[email protected]> wrote:
>
> >Let me mention the Lookup is Hierarchical.
> >
> >Thanks,
> >
> >On Fri, Jul 24, 2015 at 11:12 AM, Vineet Mishra <[email protected]>
> >wrote:
> >
> >> Hi Li and Shi,
> >>
> >> I am building a cube with a lookup table in between, and I am getting
> >> an exception at the third step of the cube build, i.e. Build Dimension
> >> Dictionary:
> >>
> >> java.io.FileNotFoundException: File does not exist:
> >> /tmp/kylin-5a2ea405-24a2-45ed-958e-2a7fddd8cc97/sc_o2s_metrics_verified123455/fact_distinct_columns/SC
> >>   at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1093)
> >>   at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
> >>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
> >>   at org.apache.kylin.dict.lookup.FileTable.getSignature(FileTable.java:62)
> >>   at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:164)
> >>   at org.apache.kylin.cube.CubeManager.buildDictionary(CubeManager.java:154)
> >>   at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:53)
> >>   at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:42)
> >>   at org.apache.kylin.job.hadoop.dict.CreateDictionaryJob.run(CreateDictionaryJob.java:53)
> >>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> >>   at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
> >>   at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >>   at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
> >>   at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >>   at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
> >>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>   at java.lang.Thread.run(Thread.java:745)
> >>
> >> It would be great if you could help get this issue resolved.
> >>
> >> URGENT CALL!
> >>
> >> Thanks,
> >>
>
>
