[ https://issues.apache.org/jira/browse/CARBONDATA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Kapoor updated CARBONDATA-2967:
-------------------------------------
    Issue Type: Bug  (was: Improvement)

> Select is failing on pre-aggregate datamap when thrift server is restarted.
> ---------------------------------------------------------------------------
>
>                 Key: CARBONDATA-2967
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2967
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Kunal Kapoor
>            Assignee: Kunal Kapoor
>            Priority: Major
>
> *Problem:* 
> A NullPointerException is thrown when a select query is fired on a 
> pre-aggregate datamap. To access the dictionary files of the parent table, 
> the child table tries to get the tablePath from the parent's CarbonTable 
> object. After a thrift-server restart the parent's metadata is not yet 
> populated, so the lookup throws a NullPointerException.
>  
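The failure mode described above can be sketched with a minimal, self-contained Java example. The class and method names below are illustrative stand-ins, not the actual CarbonData code: the child (pre-aggregate) table looks up the parent's tablePath, but after a thrift-server restart the parent's CarbonTable metadata has not been repopulated, so the dereference fails.

```java
// Hypothetical sketch of the reported failure mode (names assumed,
// not the real CarbonData classes).
public class TablePathLookupSketch {
    static class CarbonTableStub {
        // tablePath stays null until the table's metadata is populated.
        private final String tablePath;
        CarbonTableStub(String tablePath) { this.tablePath = tablePath; }
        String getTablePath() { return tablePath; }
    }

    // Mirrors a QueryUtil.getTableIdentifierForColumn-style access:
    // the child datamap derives the parent's dictionary location
    // from the parent's tablePath.
    static String dictionaryDir(CarbonTableStub parent) {
        // Without a null guard, this dereference throws
        // NullPointerException when the metadata was never populated.
        return parent.getTablePath().concat("/Metadata");
    }

    public static void main(String[] args) {
        // Normal case: metadata loaded, path resolves.
        CarbonTableStub populated = new CarbonTableStub("/store/db/parent_table");
        System.out.println(dictionaryDir(populated));

        // Restart case: metadata missing, lookup blows up.
        try {
            dictionaryDir(new CarbonTableStub(null));
        } catch (NullPointerException e) {
            System.out.println("NPE: parent tablePath not populated");
        }
    }
}
```

The fix implied by the report is to ensure the parent table's metadata is (re)populated before the child datamap resolves dictionary paths, rather than guarding each dereference.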
> 1, 10.2.3.19, executor 1): java.lang.RuntimeException: Error while resolving filter expression 
>         at org.apache.carbondata.core.metadata.schema.table.CarbonTable.resolveFilter(CarbonTable.java:1043) 
>         at org.apache.carbondata.core.scan.model.QueryModelBuilder.build(QueryModelBuilder.java:322) 
>         at org.apache.carbondata.hadoop.api.CarbonInputFormat.createQueryModel(CarbonInputFormat.java:632) 
>         at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:419) 
>         at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78) 
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) 
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) 
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) 
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) 
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) 
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) 
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96) 
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53) 
>         at org.apache.spark.scheduler.Task.run(Task.scala:109) 
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338) 
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
>         at java.lang.Thread.run(Thread.java:748) 
> Caused by: java.lang.NullPointerException 
>         at org.apache.carbondata.core.scan.executor.util.QueryUtil.getTableIdentifierForColumn(QueryUtil.java:401) 
>         at org.apache.carbondata.core.scan.filter.FilterUtil.getForwardDictionaryCache(FilterUtil.java:1416) 
>         at org.apache.carbondata.core.scan.filter.FilterUtil.getFilterValues(FilterUtil.java:712) 
>         at org.apache.carbondata.core.scan.filter.resolver.resolverinfo.visitor.DictionaryColumnVisitor.populateFilterResolvedInfo(DictionaryColumnVisitor.java:60) 
>         at org.apache.carbondata.core.scan.filter.resolver.resolverinfo.DimColumnResolvedFilterInfo.populateFilterInfoBasedOnColumnType(DimColumnResolvedFilterInfo.java:119) 
>         at org.apache.carbondata.core.scan.filter.resolver.ConditionalFilterResolverImpl.resolve(ConditionalFilterResolverImpl.java:107) 
>         at 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
