GitHub user ajantha-bhat commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1995#discussion_r170412437
  
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java ---
    @@ -353,7 +353,53 @@ public AbsoluteTableIdentifier getAbsoluteTableIdentifier(Configuration configur
           List<String> validSegments = segments.getValidSegments();
           streamSegments = segments.getStreamSegments();
           if (validSegments.size() == 0) {
    -        return getSplitsOfStreaming(job, identifier, streamSegments);
    +        if (streamSegments.size() != 0) {
    +          return getSplitsOfStreaming(job, identifier, streamSegments);
    +        }
    +        // check for externalTable segment (Segment_null)
    +        {
    +          // process and resolve the expression
    +          Expression filter = getFilterPredicates(job.getConfiguration());
    +          TableProvider tableProvider = new SingleTableProvider(carbonTable);
    +          // this will be null in case of corrupt schema file.
    +          PartitionInfo partitionInfo = carbonTable.getPartitionInfo(carbonTable.getTableName());
    +          CarbonInputFormatUtil.processFilterExpression(filter, carbonTable, null, null);
    +
    +          // prune partitions for filter query on partition table
    --- End diff --
    
    These code changes are not required, because we already handle this logic in **CarbonFileInputFormat**.
    That logic obtains the splits at the file level, whereas this class is a table-level reader.
    There should be no changes in this class for this requirement.
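    For context, a minimal sketch of the intended split of responsibility, assuming both classes follow the standard Hadoop `InputFormat#getSplits(JobContext)` contract. The caller class, the `isExternalTable` flag, and the exact import paths are illustrative assumptions, not code from this PR:

        import java.util.List;

        import org.apache.carbondata.hadoop.api.CarbonFileInputFormat;
        import org.apache.carbondata.hadoop.api.CarbonTableInputFormat;
        import org.apache.hadoop.mapreduce.InputSplit;
        import org.apache.hadoop.mapreduce.Job;

        // Hypothetical caller: external-table (file-level) reads go through
        // CarbonFileInputFormat, while regular table scans go through
        // CarbonTableInputFormat, so the Segment_null pruning above does not
        // belong in the table-level reader.
        public class SplitPlanningSketch {
          static List<InputSplit> planSplits(Job job, boolean isExternalTable)
              throws Exception {
            if (isExternalTable) {
              // file-level reader: lists and prunes carbondata files directly
              return new CarbonFileInputFormat<>().getSplits(job);
            }
            // table-level reader: resolves valid/stream segments from table status
            return new CarbonTableInputFormat<>().getSplits(job);
          }
        }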

