Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1471#discussion_r151830437
  
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java ---
    @@ -687,16 +689,17 @@ protected Expression getFilterPredicates(Configuration configuration) {
         // get tokens for all the required FileSystem for table path
         TokenCache.obtainTokensForNamenodes(job.getCredentials(),
             new Path[] { new Path(absoluteTableIdentifier.getTablePath()) }, job.getConfiguration());
    -
    -    TableDataMap blockletMap = DataMapStoreManager.getInstance()
    -        .getDataMap(absoluteTableIdentifier, BlockletDataMap.NAME,
    -            BlockletDataMapFactory.class.getName());
    +    boolean distributedCG = Boolean.parseBoolean(CarbonProperties.getInstance()
    +        .getProperty(CarbonCommonConstants.USE_DISTRIBUTED_DATAMAP,
    +            CarbonCommonConstants.USE_DISTRIBUTED_DATAMAP_DEFAULT));
    +    TableDataMap blockletMap =
    +        DataMapStoreManager.getInstance().chooseDataMap(absoluteTableIdentifier);
         DataMapJob dataMapJob = getDataMapJob(job.getConfiguration());
         List<ExtendedBlocklet> prunedBlocklets;
    -    if (dataMapJob != null) {
    +    if (distributedCG || blockletMap.getDataMapFactory().getDataMapType() == DataMapType.FG) {
    --- End diff --
    
    In the case of an FG datamap, the row-id information is written to temp files and the write path is set on the FG blocklet, so the blocklet carries no row information when it returns to the driver.
    While executing the filter query, the executor reads the row ids from disk and passes them to the scanner.
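    A minimal sketch of the pruning-path decision described above; the enum and helper below are simplified stand-ins for illustration, not the actual CarbonData classes from this diff:

    ```java
    // Hypothetical sketch: when does pruning run distributed (via DataMapJob)
    // instead of in the driver? Per the diff, either the user forces it with the
    // distributed-datamap property, or the chosen datamap is fine-grained (FG) --
    // FG blocklets only carry a temp-file write path back to the driver, not the
    // row ids themselves, so the executor side must read them from disk.
    public class PruningDecision {
        enum DataMapType { CG, FG }

        static boolean useDistributedPruning(boolean distributedCG, DataMapType type) {
            return distributedCG || type == DataMapType.FG;
        }

        public static void main(String[] args) {
            // FG datamaps always take the distributed path.
            System.out.println(useDistributedPruning(false, DataMapType.FG)); // true
            // CG datamaps only do so when explicitly configured.
            System.out.println(useDistributedPruning(false, DataMapType.CG)); // false
            System.out.println(useDistributedPruning(true, DataMapType.CG));  // true
        }
    }
    ```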

