thinkharderdev commented on code in PR #146:
URL: https://github.com/apache/arrow-ballista/pull/146#discussion_r947829639
##########
ballista/rust/scheduler/src/state/task_manager.rs:
##########
@@ -376,11 +375,78 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> TaskManager<T, U>
.await
}
+ pub(crate) async fn cancel_job(
+ &self,
+ job_id: &str,
+ executor_manager: &ExecutorManager,
+ ) -> Result<()> {
+ let lock = self.state.lock(Keyspace::ActiveJobs, "").await?;
+
+ let running_tasks = self
+ .get_execution_graph(job_id)
+ .await
+ .map(|graph| graph.running_tasks())
+ .unwrap_or_else(|_| vec![]);
+
+ info!(
+ "Cancelling {} running tasks for job {}",
+ running_tasks.len(),
+ job_id
+ );
+
+ self.fail_job_inner(lock, job_id, "Cancelled".to_owned())
+ .await?;
+
+ let mut tasks: HashMap<&str, Vec<protobuf::PartitionId>> = Default::default();
+
+ for (partition, executor_id) in &running_tasks {
+ if let Some(parts) = tasks.get_mut(executor_id.as_str()) {
+ parts.push(protobuf::PartitionId {
+ job_id: job_id.to_owned(),
+ stage_id: partition.stage_id as u32,
+ partition_id: partition.partition_id as u32,
+ })
+ } else {
+ tasks.insert(
+ executor_id.as_str(),
+ vec![protobuf::PartitionId {
+ job_id: job_id.to_owned(),
+ stage_id: partition.stage_id as u32,
+ partition_id: partition.partition_id as u32,
+ }],
+ );
+ }
+ }
+
+ for (executor_id, partitions) in tasks {
+ if let Ok(mut client) = executor_manager.get_client(executor_id).await {
+ client
+ .cancel_tasks(CancelTasksParams {
+ partition_id: partitions,
+ })
+ .await?;
+ } else {
+ error!("Failed to get client for executor ID {}", executor_id)
+ }
+ }
+
+ Ok(())
+ }
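(Editor's note, not part of the PR.) The grouping loop in the diff above builds a per-executor map with an explicit `get_mut`/`insert` branch. The same grouping is usually written with `HashMap::entry`, which collapses both branches into one call. A minimal standalone sketch, using stand-in types (`PartitionId` here is a simplified placeholder for `protobuf::PartitionId`, and the `(stage_id, partition_id, executor_id)` tuples stand in for the graph's running tasks):

```rust
use std::collections::HashMap;

// Simplified stand-in for protobuf::PartitionId; illustrative only.
#[derive(Debug, Clone, PartialEq)]
struct PartitionId {
    job_id: String,
    stage_id: u32,
    partition_id: u32,
}

// Group running tasks by the executor that owns them, as the diff above
// does, but using the entry API instead of a get_mut/insert branch.
fn group_by_executor(
    job_id: &str,
    running_tasks: &[(u32, u32, String)], // (stage_id, partition_id, executor_id)
) -> HashMap<String, Vec<PartitionId>> {
    let mut tasks: HashMap<String, Vec<PartitionId>> = HashMap::new();
    for (stage_id, partition_id, executor_id) in running_tasks {
        tasks
            .entry(executor_id.clone())
            .or_default()
            .push(PartitionId {
                job_id: job_id.to_owned(),
                stage_id: *stage_id,
                partition_id: *partition_id,
            });
    }
    tasks
}

fn main() {
    let running = vec![
        (1, 0, "exec-a".to_string()),
        (1, 1, "exec-a".to_string()),
        (2, 0, "exec-b".to_string()),
    ];
    let grouped = group_by_executor("job-1", &running);
    assert_eq!(grouped["exec-a"].len(), 2);
    assert_eq!(grouped["exec-b"].len(), 1);
}
```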
+
Review Comment:
See below: the task cancellation itself is fast (it only sets an
`AtomicBoolean`). We're ultimately limited by the number of CPU cores
available, so I would worry that spawning thousands of tokio tasks to run the
cancellations concurrently could actually cause a performance issue rather
than alleviate one. If we did go that route, we would need a way to control
the level of concurrency. In general, I think it's best to optimize this only
once we've observed a performance issue related to it, since the solution may
be more complicated than it appears.
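(Editor's note, not part of the PR.) The concurrency-limiting idea mentioned above can be sketched without any async runtime: a fixed pool of workers drains a shared queue, so at most `limit` cancellation calls are in flight at once. In the actual scheduler one would use an async mechanism instead (e.g. a semaphore or a bounded stream of futures); `cancel_with_limit` and the recorded-call stand-in for the `cancel_tasks` RPC are hypothetical names for illustration:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Process `executor_ids` with at most `limit` workers running concurrently.
// Returns the ids that were "cancelled" (here we only record the call; the
// real code would fetch a client and issue the cancel_tasks RPC).
fn cancel_with_limit(executor_ids: Vec<String>, limit: usize) -> Vec<String> {
    let queue: Arc<Mutex<VecDeque<String>>> =
        Arc::new(Mutex::new(executor_ids.into_iter().collect()));
    let done: Arc<Mutex<Vec<String>>> = Arc::new(Mutex::new(Vec::new()));

    let mut workers = Vec::new();
    for _ in 0..limit {
        let queue = Arc::clone(&queue);
        let done = Arc::clone(&done);
        workers.push(thread::spawn(move || loop {
            // Pop inside a short critical section so workers don't hold
            // the lock while doing the (slow) per-executor work.
            let next = queue.lock().unwrap().pop_front();
            match next {
                Some(id) => {
                    // Stand-in for get_client(&id) + cancel_tasks(...).
                    done.lock().unwrap().push(id);
                }
                None => break, // queue drained; worker exits
            }
        }));
    }
    for w in workers {
        w.join().unwrap();
    }
    Arc::try_unwrap(done).unwrap().into_inner().unwrap()
}

fn main() {
    let ids: Vec<String> = (0..10).map(|i| format!("exec-{i}")).collect();
    let done = cancel_with_limit(ids, 4);
    assert_eq!(done.len(), 10);
}
```

The same shape maps onto tokio by replacing the worker threads with a bounded set of spawned tasks gated on a semaphore, which keeps the spawn count independent of the number of executors.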
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]