Github user kevinjmh commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2900#discussion_r230972236
  
    --- Diff: 
hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
 ---
    @@ -575,6 +576,8 @@ private BitSet setMatchedPartitions(String 
partitionIds, Expression filter,
        */
       public BlockMappingVO getBlockRowCount(Job job, CarbonTable table,
           List<PartitionSpec> partitions) throws IOException {
    +    // no useful information for count star query without filter, so 
disable explain collector
    +    ExplainCollector.remove();
    --- End diff ---
    
    You are right. 
    The normal query flow goes through `CarbonInputFormat#getPrunedBlocklets`, which initializes the pruning info for the queried table. But a count star query without a filter uses a different query plan and does not go through that method, so the pruning info is never initialized. When the default data map is then called to prune (with a null filter), an exception occurs while setting the pruning info.
    
    One solution is to initialize the pruning info for this type of query in the method `getBlockRowCount`. But considering that there is no useful information about block/blocklet pruning for such a query (actually no pruning happens), I chose to disable the explain collector instead.

