Hi all,
   We have a business scenario that requires detailed (raw) queries. The cube has 45 measures defined as "RAW" and one high-cardinality dimension, and in the "Rowkey" panel of "Advanced Setting" that dimension's encoding is fixed_length, not dict.
But when building the cube, we get errors in "#4 Step Name: Build Dimension Dictionary":

com.google.common.util.concurrent.ExecutionError: java.lang.OutOfMemoryError: Java heap space
         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2254)
         at com.google.common.cache.LocalCache.get(LocalCache.java:3985)
         at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3989)
         at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4873)
         at org.apache.kylin.dict.DictionaryManager.getDictionaryInfo(DictionaryManager.java:119)
         at org.apache.kylin.dict.DictionaryManager.checkDupByContent(DictionaryManager.java:191)
         at org.apache.kylin.dict.DictionaryManager.trySaveNewDict(DictionaryManager.java:169)
         at org.apache.kylin.dict.DictionaryManager.saveDictionary(DictionaryManager.java:324)
         at org.apache.kylin.cube.CubeManager.saveDictionary(CubeManager.java:234)
         at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:68)
         at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:54)
         at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:66)
         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
         at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
         at org.apache.kylin.dict.CacheDictionary.enableCache(CacheDictionary.java:95)
         at org.apache.kylin.dict.TrieDictionaryForest.initForestCache(TrieDictionaryForest.java:394)
         at org.apache.kylin.dict.TrieDictionaryForest.init(TrieDictionaryForest.java:77)
         at org.apache.kylin.dict.TrieDictionaryForest.readFields(TrieDictionaryForest.java:237)
         at org.apache.kylin.dict.DictionaryInfoSerializer.deserialize(DictionaryInfoSerializer.java:74)
         at org.apache.kylin.dict.DictionaryInfoSerializer.deserialize(DictionaryInfoSerializer.java:34)
         at org.apache.kylin.common.persistence.ResourceStore.getResource(ResourceStore.java:154)
         at org.apache.kylin.dict.DictionaryManager.load(DictionaryManager.java:445)
         at org.apache.kylin.dict.DictionaryManager$1.load(DictionaryManager.java:102)
         at org.apache.kylin.dict.DictionaryManager$1.load(DictionaryManager.java:99)
         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3584)
         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2372)
         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2335)
         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2250)
         at com.google.common.cache.LocalCache.get(LocalCache.java:3985)
         at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3989)
         at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4873)
         at org.apache.kylin.dict.DictionaryManager.getDictionaryInfo(DictionaryManager.java:119)
         at org.apache.kylin.dict.DictionaryManager.checkDupByContent(DictionaryManager.java:191)
         at org.apache.kylin.dict.DictionaryManager.trySaveNewDict(DictionaryManager.java:169)
         at org.apache.kylin.dict.DictionaryManager.saveDictionary(DictionaryManager.java:324)
         at org.apache.kylin.cube.CubeManager.saveDictionary(CubeManager.java:234)
         at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:68)
         at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:54)
         at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:66)
         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
         at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)


Sometimes it fails with a different error instead:

java.lang.NegativeArraySizeException
         at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:144)
         at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:123)
         at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:179)
         at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2178)
         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2306)
         at org.apache.kylin.engine.mr.steps.CreateDictionaryJob$2.getDictionary(CreateDictionaryJob.java:87)
         at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:65)
         at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:54)
         at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:66)
         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
         at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:745)

result code: 2


I am using Kylin 2.0. Can you help me?
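For reference, since the "Build Dimension Dictionary" step runs inside the Kylin job server JVM (HadoopShellExecutable), my guess is that the dictionary cache is exceeding that JVM's heap. Would raising the heap in setenv.sh be a reasonable workaround? A sketch of what I mean (the -Xmx value is just an example; the other flags follow the stock Kylin setenv.sh, and the file may live under bin/ or conf/ depending on the package):

```shell
# setenv.sh -- enlarge the Kylin server heap so the dictionary-building
# step has room for the dictionary cache. Example values only; tune
# -Xmx to the memory actually available on the node.
export KYLIN_JVM_SETTINGS="-Xms2048M -Xmx8192M -Xss1024K \
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$"
```

After changing it, I would restart Kylin (`bin/kylin.sh stop` then `bin/kylin.sh start`) and resume the failed job.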


Thanks!
