maytasm commented on a change in pull request #9965:
URL: https://github.com/apache/druid/pull/9965#discussion_r439707391
##########
File path:
server/src/main/java/org/apache/druid/server/http/DataSourcesResource.java
##########
@@ -391,6 +396,123 @@ public Response getServedSegmentsInInterval(
return getServedSegmentsInInterval(dataSourceName, full != null,
theInterval::contains);
}
+  @GET
+  @Path("/{dataSourceName}/loadstatus")
+  @Produces(MediaType.APPLICATION_JSON)
+  @ResourceFilters(DatasourceResourceFilter.class)
+  public Response getDatasourceLoadstatus(
+      @PathParam("dataSourceName") String dataSourceName,
+      @QueryParam("interval") @Nullable final String interval,
+      @QueryParam("forceMetadataRefresh") @Nullable final Boolean forceMetadataRefresh,
+      @QueryParam("simple") @Nullable final String simple,
+      @QueryParam("full") @Nullable final String full
+  )
+  {
+ {
+    final Interval theInterval;
+    if (interval == null) {
+      // Default to the trailing two weeks when no interval is given
+      long defaultIntervalOffset = 14 * 24 * 60 * 60 * 1000L;
+      long currentTimeInMs = System.currentTimeMillis();
+      theInterval = Intervals.utc(currentTimeInMs - defaultIntervalOffset, currentTimeInMs);
+    } else {
+      theInterval = Intervals.of(interval.replace('_', '/'));
+    }
+
+    boolean requiresMetadataStorePoll = forceMetadataRefresh == null || forceMetadataRefresh;
+
+    Optional<Iterable<DataSegment>> segments = segmentsMetadataManager.iterateAllUsedNonOvershadowedSegmentsForDatasourceInterval(
+        dataSourceName,
+        theInterval,
+        requiresMetadataStorePoll
+    );
+
+    if (!segments.isPresent()) {
+      return logAndCreateDataSourceNotFoundResponse(dataSourceName);
+    }
+
+    if (simple != null) {
+      // Calculate response for simple mode
+      Map<SegmentId, SegmentLoadInfo> segmentLoadInfos = serverInventoryView.getSegmentLoadInfos();
+      int numUnloadedSegments = 0;
+      for (DataSegment segment : segments.get()) {
+        if (!segmentLoadInfos.containsKey(segment.getId())) {
+          numUnloadedSegments++;
+        }
+      }
+      return Response.ok(
+          ImmutableMap.of(
+              dataSourceName,
+              numUnloadedSegments
+          )
+      ).build();
+    } else if (full != null) {
+      // Calculate response for full mode
+      final Map<String, Object2LongMap<String>> underReplicationCountsPerDataSourcePerTier = new HashMap<>();
+      final List<Rule> rules = metadataRuleManager.getRulesWithDefault(dataSourceName);
+      final Table<SegmentId, String, Integer> segmentsInCluster = HashBasedTable.create();
+      final DateTime now = DateTimes.nowUtc();
+
+      for (DataSegment segment : segments.get()) {
+        for (DruidServer druidServer : serverInventoryView.getInventory()) {
+          String tier = druidServer.getTier();
+          SegmentId segmentId = segment.getId();
+          DruidDataSource druidDataSource = druidServer.getDataSource(dataSourceName);
+          if (druidDataSource != null && druidDataSource.getSegment(segmentId) != null) {
+            Integer numReplicants = segmentsInCluster.get(segmentId, tier);
+            if (numReplicants == null) {
+              numReplicants = 0;
+            }
+            segmentsInCluster.put(segmentId, tier, numReplicants + 1);
+          }
+        }
+      }
+ for (DataSegment segment : segments.get()) {
Review comment:
Removed this code from DataSourcesResource. Instead, the `underReplicationCountsPerDataSourcePerTier` calculation is reused from DruidCoordinator by calling into it, which in turn reuses the `segmentReplicantLookup` in DruidCoordinator. This ensures the behavior stays consistent between the full format of the new API and the existing coordinator loadstatus API. For example, if there is a bug in the full-format coordinator loadstatus API where it ignores broadcast rules, then we only have to remember to fix it in one place ;)
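For reference, the per-tier replica counting that the diff performs with Guava's `HashBasedTable` can be sketched with plain JDK collections. This is a minimal, self-contained illustration, not the patch itself: the `SegmentOnServer` record is a hypothetical stand-in for what Druid derives from its `ServerInventoryView`, and a nested `Map` replaces the `Table<SegmentId, String, Integer>`.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of counting replicas per (segment, tier), as in the diff's
// segmentsInCluster table. Hypothetical simplified types: segment IDs and
// tier names are plain strings here.
public class ReplicaCounter
{
  /** One (segmentId, tier) pair observed on some server in the inventory. */
  record SegmentOnServer(String segmentId, String tier) {}

  /** Returns segmentId -> (tier -> replica count). */
  static Map<String, Map<String, Integer>> countReplicas(List<SegmentOnServer> inventory)
  {
    final Map<String, Map<String, Integer>> counts = new HashMap<>();
    for (SegmentOnServer s : inventory) {
      // merge() replaces the get-null-check-put dance in the diff
      counts.computeIfAbsent(s.segmentId(), id -> new HashMap<>())
            .merge(s.tier(), 1, Integer::sum);
    }
    return counts;
  }

  public static void main(String[] args)
  {
    List<SegmentOnServer> inventory = List.of(
        new SegmentOnServer("seg1", "_default_tier"),
        new SegmentOnServer("seg1", "_default_tier"),
        new SegmentOnServer("seg1", "cold"),
        new SegmentOnServer("seg2", "_default_tier")
    );
    Map<String, Map<String, Integer>> counts = countReplicas(inventory);
    System.out.println(counts.get("seg1").get("_default_tier")); // 2
    System.out.println(counts.get("seg1").get("cold"));          // 1
    System.out.println(counts.get("seg2").get("_default_tier")); // 1
  }
}
```

The `Map.merge` idiom does in one call what the diff does with a null check plus `put`, which is part of why delegating to one shared implementation in DruidCoordinator is attractive.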