kfaraz commented on code in PR #16667:
URL: https://github.com/apache/druid/pull/16667#discussion_r1677339285
##########
server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java:
##########
@@ -2923,6 +2944,87 @@ public int deleteUpgradeSegmentsForTask(final String taskId)
);
}
+  @Override
+  public Map<String, String> retrieveUpgradedFromSegmentIds(
+      final String dataSource,
+      final Set<String> segmentIds
+  )
+  {
+    if (segmentIds.isEmpty()) {
+      return Collections.emptyMap();
+    }
+
+    final List<String> segmentIdList = ImmutableList.copyOf(segmentIds);
+    final String sql = StringUtils.format(
+        "SELECT id, upgraded_from_segment_id FROM %s WHERE dataSource = :dataSource %s",
+        dbTables.getSegmentsTable(),
+        SqlSegmentsMetadataQuery.getParameterizedInConditionForColumn("id", segmentIdList)
+    );
+    final Map<String, String> upgradedFromSegmentIds = new HashMap<>();
+    connector.retryWithHandle(
+        handle -> {
+          Query<Map<String, Object>> query = handle.createQuery(sql)
Review Comment:
Yes, the batch size of the kill task would normally bound this. But it is still
possible for someone to either raise those limits or fire this action directly
with a large set of segment IDs, so the overlord side should have its own
safeguards.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]