Github user sounakr commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2257#discussion_r185403178
  
    --- Diff: 
hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonFileInputFormat.java
 ---
    @@ -126,13 +132,27 @@ protected CarbonTable 
getOrCreateCarbonTable(Configuration configuration) throws
     
           FilterResolverIntf filterInterface = 
carbonTable.resolveFilter(filter, tableProvider);
     
    -      String segmentDir = 
CarbonTablePath.getSegmentPath(identifier.getTablePath(), "null");
    +      String segmentDir = null;
    +      if (carbonTable.isTransactionalTable()) {
    +        segmentDir = 
CarbonTablePath.getSegmentPath(identifier.getTablePath(), "null");
    +      } else {
    +        segmentDir = identifier.getTablePath();
    +      }
           FileFactory.FileType fileType = FileFactory.getFileType(segmentDir);
           if (FileFactory.isFileExist(segmentDir, fileType)) {
             // if external table Segments are found, add it to the List
             List<Segment> externalTableSegments = new ArrayList<Segment>();
    -        Segment seg = new Segment("null", null, readCommittedScope);
    -        externalTableSegments.add(seg);
    +        Segment seg;
    +        if (carbonTable.isTransactionalTable()) {
    +          seg = new Segment("null", null, readCommittedScope);
    --- End diff --
    
    Prior to the non-transactional implementation, the SDK used to write into 
the segment path instead of the table path, i.e. inside /Fact/Part0/Segment_null, 
deliberately passing "null" as the segment ID. The code that reads and writes 
the segment path still exists; it treats such tables as external transactional 
tables, and the code flow goes through CarbonInputFormat.

