Github user gvramana commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1706#discussion_r158316333
  
    --- Diff: core/src/main/java/org/apache/carbondata/core/metadata/PartitionMapFileStore.java ---
    @@ -253,6 +282,96 @@ public void commitPartitions(String segmentPath, final String uniqueId, boolean
         }
       }
     
    +  /**
    +   * Clean up invalid data after drop partition in all segments of the table.
    +   * @param table
    +   * @param currentPartitions Current partitions of the table
    +   * @param forceDelete Whether the data should be deleted forcefully, or only after
    +   *                    the max query timeout has elapsed since file creation.
    +   * @throws IOException
    +   */
    +  public void cleanSegments(
    +      CarbonTable table,
    +      List<String> currentPartitions,
    +      boolean forceDelete) throws IOException {
    +    SegmentStatusManager ssm = new SegmentStatusManager(table.getAbsoluteTableIdentifier());
    +
    +    CarbonTablePath carbonTablePath = CarbonStorePath
    +        .getCarbonTablePath(table.getAbsoluteTableIdentifier().getTablePath(),
    +            table.getAbsoluteTableIdentifier().getCarbonTableIdentifier());
    +
    +    LoadMetadataDetails[] details = ssm.readLoadMetadata(table.getMetaDataFilepath());
    +    // scan through each segment.
    +
    +    for (LoadMetadataDetails segment : details) {
    +
    +      // Only if this segment is valid do we go for deletion of related
    +      // dropped partition files. If the segment is marked for delete or
    +      // compacted, it will get deleted anyway.
    +
    +      if (segment.getSegmentStatus() == SegmentStatus.SUCCESS
    +          || segment.getSegmentStatus() == SegmentStatus.LOAD_PARTIAL_SUCCESS) {
    +        List<String> toBeDeletedIndexFiles = new ArrayList<>();
    +        List<String> toBeDeletedDataFiles = new ArrayList<>();
    +        // take the list of files from this segment.
    +        String segmentPath = carbonTablePath.getCarbonDataDirectoryPath("0", segment.getLoadName());
    +        String partitionFilePath = getPartitionFilePath(segmentPath);
    +        if (partitionFilePath != null) {
    +          PartitionMapper partitionMapper = readPartitionMap(partitionFilePath);
    +          DataFileFooterConverter fileFooterConverter = new DataFileFooterConverter();
    +          SegmentIndexFileStore indexFileStore = new SegmentIndexFileStore();
    +          indexFileStore.readAllIIndexOfSegment(segmentPath);
    +          Set<String> indexFilesFromSegment = indexFileStore.getCarbonIndexMap().keySet();
    +          for (String indexFile : indexFilesFromSegment) {
    +            // Check the partition information in the partition mapper
    +            List<String> indexPartitions = partitionMapper.partitionMap.get(indexFile);
    +            if (indexPartitions == null || !currentPartitions.containsAll(indexPartitions)) {
    +              Long fileTimestamp = CarbonUpdateUtil.getTimeStampAsLong(indexFile
    +                  .substring(indexFile.lastIndexOf(CarbonCommonConstants.HYPHEN) + 1,
    +                      indexFile.length() - CarbonTablePath.INDEX_FILE_EXT.length()));
    +              if (CarbonUpdateUtil.isMaxQueryTimeoutExceeded(fileTimestamp) || forceDelete) {
    --- End diff --
    
    1. The merge index should also be read based on the transaction timestamp; otherwise, if drop partition is called and then clean files is called immediately, a concurrent select can read the previous index files, which might get deleted while being read.
    2. Alter drop partition can also recreate the merge index map with the same transaction timestamp.
    This can be handled along with the partition map transaction timestamp implementation.
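    
    The timestamp check quoted in the diff relies on each index file name carrying its load timestamp as a suffix between the last hyphen and the index-file extension. A minimal standalone sketch of that parsing step (class name, constants, and the sample file name here are illustrative, not from the CarbonData codebase):
    
    ```java
    // Hypothetical sketch: extract the trailing load timestamp from an index
    // file name, mirroring the substring logic in the diff above.
    public class IndexFileTimestamp {
    
      private static final char HYPHEN = '-';            // stand-in for CarbonCommonConstants.HYPHEN
      private static final String INDEX_FILE_EXT = ".carbonindex"; // stand-in for CarbonTablePath.INDEX_FILE_EXT
    
      /** Returns the timestamp suffix of an index file name,
       *  e.g. "0_batchno0-0-1514352121970.carbonindex" -> 1514352121970. */
      static long timestampOf(String indexFile) {
        String ts = indexFile.substring(
            indexFile.lastIndexOf(HYPHEN) + 1,
            indexFile.length() - INDEX_FILE_EXT.length());
        return Long.parseLong(ts);
      }
    
      public static void main(String[] args) {
        System.out.println(timestampOf("0_batchno0-0-1514352121970.carbonindex"));
      }
    }
    ```
    
    The extracted value is what the diff passes to `CarbonUpdateUtil.isMaxQueryTimeoutExceeded` to decide whether a dropped partition's files are old enough to delete safely.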

