Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2269#discussion_r189220225
--- Diff:
hadoop/src/main/java/org/apache/carbondata/hadoop/api/DistributableDataMapFormat.java
---
@@ -100,14 +103,18 @@ private static FilterResolverIntf getFilterExp(Configuration configuration) thro
     return new RecordReader<Void, ExtendedBlocklet>() {
       private Iterator<ExtendedBlocklet> blockletIterator;
       private ExtendedBlocklet currBlocklet;
+      private List<DataMap> dataMaps;
       @Override public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
           throws IOException, InterruptedException {
-        DataMapDistributableWrapper distributable = (DataMapDistributableWrapper) inputSplit;
-        TableDataMap dataMap = DataMapStoreManager.getInstance()
+        distributable = (DataMapDistributableWrapper) inputSplit;
+        TableDataMap tableDataMap = DataMapStoreManager.getInstance()
             .getDataMap(table, distributable.getDistributable().getDataMapSchema());
-        List<ExtendedBlocklet> blocklets = dataMap.prune(distributable.getDistributable(),
-            dataMapExprWrapper.getFilterResolverIntf(distributable.getUniqueId()), partitions);
+        dataMaps = tableDataMap.getTableDataMaps(distributable.getDistributable());
--- End diff --
I don't see the benefit of getting the DataMaps out and closing them here. I
think you can close them inside the DataMap once prune is done; it is not
required to change the interface for this.
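
A minimal sketch of the suggestion, using hypothetical stand-in types (not
CarbonData's actual `DataMap`/`TableDataMap` classes): `prune` owns the
DataMap lifecycle and closes each DataMap as soon as pruning finishes, so the
caller never needs a `getTableDataMaps`-style accessor or an interface change.

```java
import java.util.ArrayList;
import java.util.List;

public class PruneSketch {
  // Hypothetical minimal interface: a closeable data map that prunes to blocklet ids.
  interface DataMap extends AutoCloseable {
    List<String> prune();
    @Override void close();  // no checked exception in this sketch
  }

  // Simple stand-in implementation that records whether it was closed.
  static class SimpleDataMap implements DataMap {
    boolean closed = false;
    public List<String> prune() { return List.of("blocklet-1", "blocklet-2"); }
    public void close() { closed = true; }
  }

  // prune() closes each DataMap internally once its pruning is done,
  // so callers only ever see the resulting blocklets.
  static List<String> prune(List<? extends DataMap> dataMaps) {
    List<String> blocklets = new ArrayList<>();
    for (DataMap dm : dataMaps) {
      try (DataMap d = dm) {  // closed automatically after pruning
        blocklets.addAll(d.prune());
      }
    }
    return blocklets;
  }

  public static void main(String[] args) {
    SimpleDataMap dm = new SimpleDataMap();
    List<String> result = prune(List.of(dm));
    System.out.println(result.size() + " blocklets, closed=" + dm.closed);
  }
}
```

With this shape the `RecordReader` in the diff would not need to hold a
`dataMaps` field at all; closing happens where pruning happens.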
---