kfaraz commented on code in PR #19059:
URL: https://github.com/apache/druid/pull/19059#discussion_r2881836360


##########
server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java:
##########
@@ -1925,15 +1933,16 @@ private Set<DataSegmentPlus> createNewIdsOfAppendSegmentsAfterReplace(
         // but a (revoked) REPLACE lock covers this segment
         newInterval = oldInterval;
       }
+      if (!oldSegment.getShardSpec().isNumChunkSupported()) {
+        numChunkNotSupported.add(newInterval);
+      }
 
       // Compute shard spec of the upgraded segment
       final int partitionNum = intervalToCurrentPartitionNum.compute(
           newInterval,
           (i, value) -> value == null ? 0 : value + 1
       );
-      final int numCorePartitions = intervalToNumCorePartitions.get(newInterval);
-      ShardSpec shardSpec = new NumberedShardSpec(partitionNum, numCorePartitions);
-
+      final ShardSpec shardSpec = oldSegment.getShardSpec().withPartitionNum(partitionNum);
       // Create upgraded segment with the correct interval, version and shard spec
       String lockVersion = upgradeSegmentToLockVersion.get(oldSegment.getId().toString());
       DataSegment dataSegment = DataSegment.builder(oldSegment)

Review Comment:
   Yes, but the number of core partitions is updated later in the code flow, right?
   That would effectively change the shard spec.
   Even though the number of core partitions is not used to determine the segment ID, it would still be cleaner to update the inner `DataSegment` object (or maybe construct it lazily) so that it uses the correct shard spec.
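   
   For concreteness, a minimal sketch of that suggestion, not the PR's actual code: it assumes a `withNumCorePartitions` sibling to the `withPartitionNum` call used above, and a variable `upgradedSegment` holding the inner segment once the final core partition count for the interval is known.
   
   ```java
   // Hedged sketch only: rebuild the inner DataSegment so its shard spec
   // carries the core partition count computed later in the flow.
   // withNumCorePartitions is an assumed sibling of withPartitionNum.
   final int numCorePartitions = intervalToNumCorePartitions.get(newInterval);
   final DataSegment updatedSegment = DataSegment.builder(upgradedSegment)
       .shardSpec(upgradedSegment.getShardSpec().withNumCorePartitions(numCorePartitions))
       .build();
   ```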



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

