Github user manishgupta88 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2869#discussion_r230389069
  
    --- Diff: store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java ---
    @@ -158,14 +172,31 @@ public CarbonReaderBuilder withHadoopConf(String key, String value) {
         }
     
         try {
    -      final List<InputSplit> splits =
    -          format.getSplits(new JobContextImpl(job.getConfiguration(), new JobID()));
     
    +      if (filterExpression == null) {
    +        job.getConfiguration().set("filter_blocks", "false");
    +      }
    +      List<InputSplit> splits =
    +          format.getSplits(new JobContextImpl(job.getConfiguration(), new JobID()));
           List<RecordReader<Void, T>> readers = new ArrayList<>(splits.size());
           for (InputSplit split : splits) {
             TaskAttemptContextImpl attempt =
                new TaskAttemptContextImpl(job.getConfiguration(), new TaskAttemptID());
    -        RecordReader reader = format.createRecordReader(split, attempt);
    +        RecordReader reader;
    +        QueryModel queryModel = format.createQueryModel(split, attempt);
    +        boolean hasComplex = false;
    +        for (ProjectionDimension projectionDimension : queryModel.getProjectionDimensions()) {
    +          if (projectionDimension.getDimension().isComplex()) {
    +            hasComplex = true;
    +            break;
    +          }
    +        }
    +        if (useVectorReader && !hasComplex) {
    --- End diff --
    
    As a test scenario, run a query against a schema with more than 100 columns and verify that it works correctly.
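
    The reader-selection logic in the diff above (fall back from the vector reader whenever any projected dimension is a complex type) can be sketched in isolation. This is a simplified stand-in, not the real CarbonData classes: `Dimension` here carries only the complex-type flag that the loop inspects, whereas the actual `ProjectionDimension`/`CarbonDimension` types are far richer.

```java
import java.util.Arrays;
import java.util.List;

public class ReaderSelection {
    // Stand-in for CarbonDimension: only the complex-type flag matters here.
    static class Dimension {
        private final boolean complex;
        Dimension(boolean complex) { this.complex = complex; }
        boolean isComplex() { return complex; }
    }

    // Mirrors the loop in CarbonReaderBuilder: use the vector reader only if
    // it is enabled AND no projected dimension is a complex type.
    static boolean useVectorReader(boolean vectorEnabled, List<Dimension> projection) {
        boolean hasComplex = false;
        for (Dimension d : projection) {
            if (d.isComplex()) {
                hasComplex = true;
                break;
            }
        }
        return vectorEnabled && !hasComplex;
    }

    public static void main(String[] args) {
        List<Dimension> flat = Arrays.asList(new Dimension(false), new Dimension(false));
        List<Dimension> nested = Arrays.asList(new Dimension(false), new Dimension(true));
        System.out.println(useVectorReader(true, flat));   // true  -> vector reader
        System.out.println(useVectorReader(true, nested)); // false -> row-based reader
    }
}
```

    The early `break` means the scan cost is bounded by the first complex column found, which matters for the wide (100+ column) schemas suggested as a test scenario.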

