Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2204#discussion_r183233426
--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonInputFormat.java ---
@@ -359,23 +359,27 @@ protected Expression getFilterPredicates(Configuration configuration) {
         .getProperty(CarbonCommonConstants.USE_DISTRIBUTED_DATAMAP,
             CarbonCommonConstants.USE_DISTRIBUTED_DATAMAP_DEFAULT));
     DataMapExprWrapper dataMapExprWrapper =
-        DataMapChooser.get().choose(getOrCreateCarbonTable(job.getConfiguration()), resolver);
+        DataMapChooser.get().chooseCG(getOrCreateCarbonTable(job.getConfiguration()), resolver);
     DataMapJob dataMapJob = getDataMapJob(job.getConfiguration());
     List<PartitionSpec> partitionsToPrune = getPartitionsToPrune(job.getConfiguration());
     List<ExtendedBlocklet> prunedBlocklets;
-    DataMapLevel dataMapLevel = dataMapExprWrapper.getDataMapType();
-    if (dataMapJob != null &&
-        (distributedCG ||
-        (dataMapLevel == DataMapLevel.FG && isFgDataMapPruningEnable(job.getConfiguration())))) {
-      DistributableDataMapFormat datamapDstr =
-          new DistributableDataMapFormat(carbonTable, dataMapExprWrapper, segmentIds,
-              partitionsToPrune, BlockletDataMapFactory.class.getName());
-      prunedBlocklets = dataMapJob.execute(datamapDstr, resolver);
-      // Apply expression on the blocklets.
-      prunedBlocklets = dataMapExprWrapper.pruneBlocklets(prunedBlocklets);
+    if (distributedCG) {
--- End diff ---
Ok, corrected now. We first always prune with the default datamap, and then
prune further with the remaining datamaps.
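A minimal sketch of that two-stage pruning order, under the assumption that each datamap acts as a filter over candidate blocklets. The `prune` helper and the string "blocklets" here are illustrative only, not the actual CarbonData API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class TwoStagePrune {

  // Hypothetical helper: keep only the blocklets the given datamap accepts.
  static List<String> prune(List<String> blocklets, Predicate<String> datamap) {
    List<String> out = new ArrayList<>();
    for (String b : blocklets) {
      if (datamap.test(b)) {
        out.add(b);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<String> all = List.of("b1", "b2", "b3", "b4");
    // Stage 1: the default (blocklet) datamap always runs first.
    List<String> pruned = prune(all, b -> !b.equals("b4"));
    // Stage 2: the remaining datamaps only narrow the stage-1 result.
    pruned = prune(pruned, b -> !b.equals("b2"));
    System.out.println(pruned);  // [b1, b3]
  }
}
```

Running stage 2 only on the stage-1 survivors means the extra datamaps can never re-admit a blocklet the default datamap already ruled out.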
---