Github user akashrn5 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2269#discussion_r186724944
  
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/DistributableDataMapFormat.java ---
    @@ -100,14 +103,18 @@ private static FilterResolverIntf getFilterExp(Configuration configuration) thro
         return new RecordReader<Void, ExtendedBlocklet>() {
           private Iterator<ExtendedBlocklet> blockletIterator;
           private ExtendedBlocklet currBlocklet;
    +      private List<DataMap> dataMaps;
     
          @Override public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
              throws IOException, InterruptedException {
    -        DataMapDistributableWrapper distributable = (DataMapDistributableWrapper) inputSplit;
    -        TableDataMap dataMap = DataMapStoreManager.getInstance()
    +        distributable = (DataMapDistributableWrapper) inputSplit;
    +        TableDataMap tableDataMap = DataMapStoreManager.getInstance()
                .getDataMap(table, distributable.getDistributable().getDataMapSchema());
    -        List<ExtendedBlocklet> blocklets = dataMap.prune(distributable.getDistributable(),
    -            dataMapExprWrapper.getFilterResolverIntf(distributable.getUniqueId()), partitions);
    +        dataMaps = tableDataMap.getTableDataMaps(distributable.getDistributable());
    +        List<ExtendedBlocklet> blocklets = tableDataMap
    +            .prune(dataMaps,
    --- End diff --
    
    It is just refactoring, so that we can close the readers of the datamaps in the distributable. Previously the prune method inside TableDataMap.java fetched the datamaps itself, and we needed all the datamaps of that distributable in order to close their readers, so this is just a small refactoring to do that; see the sketch below.
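
    For illustration, a minimal sketch of the shape this refactoring produces. The types here are hypothetical, simplified stand-ins, not the actual CarbonData interfaces; only getTableDataMaps, prune, and the dataMaps field are taken from the diff:

        import java.io.Closeable;
        import java.io.IOException;
        import java.util.Collections;
        import java.util.List;

        // Hypothetical, minimal stand-ins for the CarbonData types in the diff.
        interface DataMap extends Closeable {}

        class TableDataMap {
          // Returns all datamaps covered by the given distributable (stubbed here).
          List<DataMap> getTableDataMaps(Object distributable) {
            return Collections.emptyList();
          }
          // prune(...) omitted; after the refactoring it receives the
          // datamaps from the caller instead of fetching them internally.
        }

        // Shape of the refactored RecordReader: the datamaps are fetched once
        // in initialize() and kept in a field, so close() can release them.
        class BlockletRecordReader {
          private List<DataMap> dataMaps;

          void initialize(TableDataMap tableDataMap, Object distributable) {
            dataMaps = tableDataMap.getTableDataMaps(distributable);
            // ... call tableDataMap.prune(dataMaps, ...) and iterate the blocklets ...
          }

          void close() throws IOException {
            // Because the reader now holds the datamaps of its distributable,
            // it can close their readers itself when iteration is done.
            for (DataMap dataMap : dataMaps) {
              dataMap.close();
            }
          }
        }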

