JkSelf commented on code in PR #52039:
URL: https://github.com/apache/spark/pull/52039#discussion_r2454046416
##########
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLExecutionSuite.scala:
##########
@@ -50,7 +50,7 @@ class SQLExecutionSuite extends SparkFunSuite with SQLConfHelper {
}
}
- test("concurrent query execution with fork-join pool (SPARK-13747)") {
+ ignore("concurrent query execution with fork-join pool (SPARK-13747)") {
Review Comment:
@cloud-fan
I looked deeper into the failing unit tests and found that they involve
multiple threads sharing the same SparkContext.
If a job is cancelled in one thread, it can cause exceptions in other
threads that are still running. How should we handle this concurrent scenario?
Would it be acceptable to skip the cancel operation for this test case, or do
you have other suggestions?
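
As an aside, one way to avoid cross-thread interference is to scope the
cancellation, which is what Spark's own `SparkContext.setJobGroup` /
`cancelJobGroup` APIs are for. The snippet below is only a plain-Scala
analogy of that idea (no Spark dependency, and the `JobGroup` class is
hypothetical, not Spark code): each group carries its own cancellation flag,
so cancelling one group leaves jobs in other groups untouched.

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical analogy, not a Spark API: each "job group" owns its own
// cancellation flag, so cancelling one group cannot affect another.
class JobGroup(val id: String) {
  private val cancelled = new AtomicBoolean(false)

  def cancel(): Unit = cancelled.set(true)
  def isCancelled: Boolean = cancelled.get()

  // Runs the body in small steps, checking the flag between steps,
  // loosely mirroring how cancellation is observed between tasks.
  // Returns true if the job ran to completion without being cancelled.
  def run(steps: Int)(step: Int => Unit): Boolean = {
    var i = 0
    while (i < steps && !isCancelled) { step(i); i += 1 }
    !isCancelled
  }
}

object ScopedCancellation {
  def main(args: Array[String]): Unit = {
    val groupA = new JobGroup("a")
    val groupB = new JobGroup("b")
    groupA.cancel() // cancel only group "a"
    val aFinished = groupA.run(3)(_ => ())
    val bFinished = groupB.run(3)(_ => ())
    println(s"a finished: $aFinished, b finished: $bFinished")
  }
}
```

With a shared flag instead of per-group flags, cancelling in one thread
would abort both runs, which is the symptom the failing tests show.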
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]