mingmwang commented on code in PR #146:
URL: https://github.com/apache/arrow-ballista/pull/146#discussion_r948906957


##########
ballista/rust/scheduler/src/state/task_manager.rs:
##########
@@ -376,11 +375,78 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> TaskManager<T, U>
         .await
     }
 
+    pub(crate) async fn cancel_job(
+        &self,
+        job_id: &str,
+        executor_manager: &ExecutorManager,
+    ) -> Result<()> {
+        let lock = self.state.lock(Keyspace::ActiveJobs, "").await?;
+
+        let running_tasks = self
+            .get_execution_graph(job_id)
+            .await
+            .map(|graph| graph.running_tasks())
+            .unwrap_or_else(|_| vec![]);
+
+        info!(
+            "Cancelling {} running tasks for job {}",
+            running_tasks.len(),
+            job_id
+        );
+
+        self.fail_job_inner(lock, job_id, "Cancelled".to_owned())
+            .await?;
+
+        let mut tasks: HashMap<&str, Vec<protobuf::PartitionId>> = Default::default();
+
+        for (partition, executor_id) in &running_tasks {
+            if let Some(parts) = tasks.get_mut(executor_id.as_str()) {
+                parts.push(protobuf::PartitionId {
+                    job_id: job_id.to_owned(),
+                    stage_id: partition.stage_id as u32,
+                    partition_id: partition.partition_id as u32,
+                })
+            } else {
+                tasks.insert(
+                    executor_id.as_str(),
+                    vec![protobuf::PartitionId {
+                        job_id: job_id.to_owned(),
+                        stage_id: partition.stage_id as u32,
+                        partition_id: partition.partition_id as u32,
+                    }],
+                );
+            }
+        }
+
+        for (executor_id, partitions) in tasks {
+            if let Ok(mut client) = executor_manager.get_client(executor_id).await {
+                client
+                    .cancel_tasks(CancelTasksParams {
+                        partition_id: partitions,
+                    })
+                    .await?;
+            } else {
+                error!("Failed to get client for executor ID {}", executor_id)
+            }
+        }
+
+        Ok(())
+    }
+

Review Comment:
   In most cases we will not spawn thousands of tokio tasks here; the concurrency level depends on how many executor instances are in the system. I think sending the cancel requests to multiple executors concurrently is better than sending them sequentially, and the same applies to assigning/launching tasks on executors. In a data center, a remote RPC call usually takes 5 ms to 20 ms, so a sequential loop pays that latency once per executor.
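   For illustration, here is a minimal sketch of the concurrent dispatch suggested above, reusing the `tasks` map and `executor_manager` from the diff. It assumes the `futures` crate's `join_all` is available in this module; unlike the sequential loop, per-executor failures are logged here rather than propagated with `?`:

   ```rust
   // Sketch only: names and types come from the diff above; `join_all`
   // is from the `futures` crate.
   use futures::future::join_all;

   let cancel_futures = tasks.into_iter().map(|(executor_id, partitions)| async move {
       match executor_manager.get_client(executor_id).await {
           Ok(mut client) => {
               if let Err(e) = client
                   .cancel_tasks(CancelTasksParams {
                       partition_id: partitions,
                   })
                   .await
               {
                   error!("Failed to cancel tasks on executor {}: {:?}", executor_id, e);
               }
           }
           Err(_) => error!("Failed to get client for executor ID {}", executor_id),
       }
   });

   // All RPCs proceed in parallel, so total latency is roughly one round
   // trip (5-20 ms) instead of one round trip per executor.
   join_all(cancel_futures).await;
   ```

   The fan-out stays bounded by the number of executors, as noted above; if failures still need to reach the caller, the closures could return `Result` and the `Vec` that `join_all` collects could be inspected afterwards.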


