Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2269#discussion_r186723292
--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/DistributableDataMapFormat.java ---
@@ -100,14 +103,18 @@ private static FilterResolverIntf getFilterExp(Configuration configuration) thro
     return new RecordReader<Void, ExtendedBlocklet>() {
       private Iterator<ExtendedBlocklet> blockletIterator;
       private ExtendedBlocklet currBlocklet;
+      private List<DataMap> dataMaps;
       @Override public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
           throws IOException, InterruptedException {
-        DataMapDistributableWrapper distributable = (DataMapDistributableWrapper) inputSplit;
-        TableDataMap dataMap = DataMapStoreManager.getInstance()
+        distributable = (DataMapDistributableWrapper) inputSplit;
+        TableDataMap tableDataMap = DataMapStoreManager.getInstance()
             .getDataMap(table, distributable.getDistributable().getDataMapSchema());
-        List<ExtendedBlocklet> blocklets = dataMap.prune(distributable.getDistributable(),
-            dataMapExprWrapper.getFilterResolverIntf(distributable.getUniqueId()), partitions);
+        dataMaps = tableDataMap.getTableDataMaps(distributable.getDistributable());
+        List<ExtendedBlocklet> blocklets = tableDataMap
+            .prune(dataMaps,
--- End diff ---
I am not sure why we need to pass `dataMaps` to the `prune` function. Also,
`tableDataMap.getTableDataMaps` seems like a strange name.
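To make the concern concrete, here is a minimal, self-contained sketch of the two API shapes being compared: the PR's shape, where the caller fetches the `DataMap` list and passes it back into `prune`, versus one where `TableDataMap` resolves its own data maps internally. The class names mirror the diff, but all bodies here are hypothetical simplifications, not CarbonData's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class PruneApiSketch {

  // Stand-in for ExtendedBlocklet.
  static class Blocklet {
    final String id;
    Blocklet(String id) { this.id = id; }
  }

  // Stand-in for a single DataMap that can prune to a list of blocklets.
  static class DataMap {
    List<Blocklet> prune() { return List.of(new Blocklet("b1")); }
  }

  static class TableDataMap {
    private final List<DataMap> dataMaps = List.of(new DataMap(), new DataMap());

    // Shape in the PR: the caller fetches the data maps...
    List<DataMap> getTableDataMaps() { return dataMaps; }

    // ...and passes them right back into prune.
    List<Blocklet> prune(List<DataMap> maps) {
      List<Blocklet> result = new ArrayList<>();
      for (DataMap m : maps) result.addAll(m.prune());
      return result;
    }

    // Alternative shape: prune resolves its own data maps, keeping the
    // DataMap list an internal detail of TableDataMap.
    List<Blocklet> prune() { return prune(dataMaps); }
  }

  public static void main(String[] args) {
    TableDataMap tdm = new TableDataMap();
    // Both shapes yield the same blocklets; the second avoids exposing
    // the DataMap list to the caller.
    System.out.println(tdm.prune(tdm.getTableDataMaps()).size());
    System.out.println(tdm.prune().size());
  }
}
```

In the PR, the caller apparently keeps a reference to the returned `dataMaps` (the new field in the diff), which may be the reason for the pass-it-back shape; if so, that intent is worth stating in the PR description.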
---