maytasm commented on a change in pull request #11190:
URL: https://github.com/apache/druid/pull/11190#discussion_r626181627
##########
File path: indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java
##########
@@ -674,6 +675,64 @@ public TaskLock apply(TaskLockPosse taskLockPosse)
}
}
+ /**
+ * Gets a Map containing intervals locked by active tasks. Intervals locked
+ * by revoked TaskLocks are not included in the returned Map.
+ *
+ * @return Map from Task Id to locked intervals.
+ */
+ public Map<String, DatasourceIntervals> getLockedIntervals()
+ {
+ final Map<String, List<Interval>> taskToIntervals = new HashMap<>();
+ final Map<String, String> taskToDatasource = new HashMap<>();
+
+ // Take a lock and populate the maps
+ giant.lock();
Review comment:
why do we need the lock here?
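To make the lock question concrete: a minimal, hypothetical stand-in for the pattern (class and field names are illustrative, not the real TaskLockbox) shows why the snapshot has to be taken while holding the `giant` lock — the underlying HashMap can be mutated concurrently as locks are acquired and released, and HashMap iteration is not safe against concurrent mutation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch only; the real TaskLockbox guards richer state.
class LockboxSketch
{
  // Single coarse lock serializing all access to the shared state.
  private final ReentrantLock giant = new ReentrantLock();
  private final Map<String, List<String>> running = new HashMap<>();

  void addLock(String taskId, String interval)
  {
    giant.lock();
    try {
      running.computeIfAbsent(taskId, k -> new ArrayList<>()).add(interval);
    }
    finally {
      giant.unlock();
    }
  }

  // Copy the state under the lock so callers get a consistent snapshot
  // even while other threads add or remove locks.
  Map<String, List<String>> getLockedIntervals()
  {
    final Map<String, List<String>> snapshot = new HashMap<>();
    giant.lock();
    try {
      running.forEach((task, intervals) -> snapshot.put(task, new ArrayList<>(intervals)));
    }
    finally {
      giant.unlock();
    }
    return snapshot;
  }
}
```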
##########
File path: indexing-service/src/main/java/org/apache/druid/indexing/overlord/http/OverlordResource.java
##########
@@ -227,6 +227,34 @@ public Response isLeader()
}
}
+ @GET
+ @Path("/lockedIntervals")
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response getTaskLockedIntervals(@Context HttpServletRequest request)
+ {
+ // Perform authorization check
+ final ResourceAction resourceAction = new ResourceAction(
Review comment:
Any particular reason this API needs finer-grained access control rather than
using the StateResourceFilter?
The StateResourceFilter is already used for APIs like /taskStatus, which I
think are similar in access control to this API.
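For comparison, a sketch of the endpoint delegating authorization to the existing filter, as suggested above (illustrative fragment only, not the actual change; annotation usage mirrors other Overlord state APIs):

```java
// Illustrative fragment only: StateResourceFilter would perform the
// STATE/READ authorization check, replacing the inline
// AuthorizationUtils block shown in the diff.
@GET
@Path("/lockedIntervals")
@Produces(MediaType.APPLICATION_JSON)
@ResourceFilters(StateResourceFilter.class)
public Response getTaskLockedIntervals()
{
  // ... build and return the response as before ...
}
```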
##########
File path: indexing-service/src/main/java/org/apache/druid/indexing/overlord/http/OverlordResource.java
##########
@@ -227,6 +227,34 @@ public Response isLeader()
}
}
+ @GET
+ @Path("/lockedIntervals")
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response getTaskLockedIntervals(@Context HttpServletRequest request)
+ {
+ // Perform authorization check
+ final ResourceAction resourceAction = new ResourceAction(
+ new Resource("lockedIntervals", ResourceType.STATE),
+ Action.READ
+ );
+ final Access authResult = AuthorizationUtils
+ .authorizeResourceAction(request, resourceAction, authorizerMapper);
+ if (!authResult.isAllowed()) {
+ throw new WebApplicationException(
+ Response.status(Response.Status.FORBIDDEN)
+ .entity(StringUtils.format("Access-Check-Result: %s", authResult.toString()))
+ .build()
+ );
+ }
+
+ // Build the response
+ final LockedIntervalsResponse response = new LockedIntervalsResponse(
+ taskStorageQueryAdapter.getLockedIntervals()
+ );
+ log.warn("Found Intervals: %s", response.getLockedIntervals());
Review comment:
Why is this a WARN log?
##########
File path: indexing-service/src/main/java/org/apache/druid/indexing/overlord/http/OverlordResource.java
##########
@@ -227,6 +227,34 @@ public Response isLeader()
}
}
+ @GET
+ @Path("/lockedIntervals")
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response getTaskLockedIntervals(@Context HttpServletRequest request)
+ {
+ // Perform authorization check
+ final ResourceAction resourceAction = new ResourceAction(
+ new Resource("lockedIntervals", ResourceType.STATE),
+ Action.READ
+ );
+ final Access authResult = AuthorizationUtils
+ .authorizeResourceAction(request, resourceAction, authorizerMapper);
+ if (!authResult.isAllowed()) {
+ throw new WebApplicationException(
+ Response.status(Response.Status.FORBIDDEN)
+ .entity(StringUtils.format("Access-Check-Result: %s", authResult.toString()))
+ .build()
+ );
+ }
+
+ // Build the response
+ final LockedIntervalsResponse response = new LockedIntervalsResponse(
Review comment:
Can the API just return Map<String, DatasourceIntervals> (removing the
need for another class)?
##########
File path: indexing-service/src/main/java/org/apache/druid/indexing/overlord/http/OverlordResource.java
##########
@@ -227,6 +227,34 @@ public Response isLeader()
}
}
+ @GET
+ @Path("/lockedIntervals")
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response getTaskLockedIntervals(@Context HttpServletRequest request)
+ {
+ // Perform authorization check
+ final ResourceAction resourceAction = new ResourceAction(
+ new Resource("lockedIntervals", ResourceType.STATE),
+ Action.READ
+ );
+ final Access authResult = AuthorizationUtils
+ .authorizeResourceAction(request, resourceAction, authorizerMapper);
+ if (!authResult.isAllowed()) {
+ throw new WebApplicationException(
+ Response.status(Response.Status.FORBIDDEN)
+ .entity(StringUtils.format("Access-Check-Result: %s", authResult.toString()))
+ .build()
+ );
+ }
+
+ // Build the response
+ final LockedIntervalsResponse response = new LockedIntervalsResponse(
Review comment:
Response.ok(taskStorageQueryAdapter.getLockedIntervals()).build()
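Spelled out, the suggested simplification (a sketch; it assumes the returned Map serializes directly to JSON, removing the need for the LockedIntervalsResponse wrapper class):

```java
// Build the response directly from the map; no wrapper class needed.
return Response.ok(taskStorageQueryAdapter.getLockedIntervals()).build();
```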
##########
File path: docs/operations/api-reference.md
##########
@@ -614,6 +614,11 @@ Retrieve information about the segments of a task.
Retrieve a [task completion report](../ingestion/tasks.md#task-reports) for a
task. Only works for completed tasks.
+* `/druid/indexer/v1/lockedIntervals`
+
+Retrieve the list of Intervals locked by currently running ingestion/compaction tasks. The response contains a Map from
+Task IDs to the list of Intervals locked by the respective Tasks.
Review comment:
Do we need to return intervals locked by compaction tasks?
If an existing compaction task is out of date, we already have code to cancel
it and resubmit for that interval (hence the locks held by those to-be-cancelled
compaction tasks do not matter). If they are not cancelled, there is already
code to skip the intervals of running compaction tasks.
In either case, we can just return a datasource --> intervals map covering only
the locks held by non-compaction tasks.
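The filtering suggested above can be sketched as a small pure function (illustrative only: how a task is identified as a compaction task — here, a caller-supplied set of task IDs — is an assumption; real code would check the task type):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical helper: keep only intervals locked by non-compaction tasks.
class LockedIntervalFilter
{
  static Map<String, List<String>> withoutCompactionTasks(
      Map<String, List<String>> taskToIntervals,
      Set<String> compactionTaskIds
  )
  {
    return taskToIntervals.entrySet()
                          .stream()
                          .filter(entry -> !compactionTaskIds.contains(entry.getKey()))
                          .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
  }
}
```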
##########
File path: server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
##########
@@ -139,6 +142,9 @@ public DruidCoordinatorRuntimeParams run(DruidCoordinatorRuntimeParams params)
compactionTaskQuery.getGranularitySpec().getSegmentGranularity(),
configuredSegmentGranularity);
indexingServiceClient.cancelTask(status.getId());
+
+ // Remove this from the locked intervals
+ taskToLockedIntervals.remove(status.getId());
Review comment:
As mentioned earlier, this is not really needed. It would be equivalent to
taskToLockedIntervals not containing locks for compaction tasks in the first
place.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]