juliuszsompolski commented on code in PR #44095:
URL: https://github.com/apache/spark/pull/44095#discussion_r1411245848


##########
connector/connect/server/src/test/scala/org/apache/spark/sql/connect/execution/ReattachableExecuteSuite.scala:
##########
@@ -296,6 +297,32 @@ class ReattachableExecuteSuite extends SparkConnectServerTest {
     }
   }
 
+  test("SPARK-46186 interrupt directly after query start") {
+    // This test depends on fast timing.
+    // If something is wrong, it can fail only from time to time.
+    withRawBlockingStub { stub =>
+      val operationId = UUID.randomUUID().toString
+      val interruptRequest = proto.InterruptRequest
+        .newBuilder
+        .setUserContext(userContext)
+        .setSessionId(defaultSessionId)
+        .setInterruptType(proto.InterruptRequest.InterruptType.INTERRUPT_TYPE_OPERATION_ID)
+        .setOperationId(operationId)
+        .build()
+      val iter = stub.executePlan(
+        buildExecutePlanRequest(buildPlan(MEDIUM_RESULTS_QUERY), operationId = 
operationId))
+      // wait for the execute holder to exist; the execute thread may not have started yet.

Review Comment:
   If we send the interrupt *too* fast, before the execution has even been created, the interrupt can be a no-op, and the query will start running uninterrupted.
   The interrupt response does report that no operation was actually interrupted, but currently the clients don't check this... I think trying to fix that further gets into real edge cases...
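One way a client could close this race, sketched below, is to retry the interrupt until the server's response confirms the target operation was actually interrupted. This is a minimal illustrative sketch, not Spark Connect client code: `tryInterrupt` is a hypothetical caller-supplied callback (e.g. one that sends the `InterruptRequest` and returns the interrupted operation ids from the response), and `interruptUntilConfirmed` is a hypothetical helper name.

```scala
object InterruptRetrySketch {
  // Hypothetical client-side helper (not a Spark Connect API): keep sending
  // the interrupt until the server confirms it interrupted the target
  // operation, or until we run out of attempts.
  def interruptUntilConfirmed(
      tryInterrupt: () => Seq[String], // returns the ids the server interrupted
      operationId: String,
      maxAttempts: Int,
      backoffMs: Long = 10): Boolean = {
    var attempt = 0
    while (attempt < maxAttempts) {
      // An empty result means the execution did not exist yet on the server,
      // so the interrupt was a no-op and we should retry.
      if (tryInterrupt().contains(operationId)) return true
      attempt += 1
      Thread.sleep(backoffMs)
    }
    false
  }
}
```

In a real client, `tryInterrupt` would wrap the gRPC interrupt call and read the interrupted ids out of the `InterruptResponse`; as the comment notes, the tradeoff is added complexity for what is a fairly narrow timing window.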



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

