Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/20622
> Using a thread interrupt to kill the stream appears to be a fundamentally
> fragile solution, requiring us to maintain a whitelist of exceptions we think
> Spark execution might surface in response to an interrupt.
I think the original issue is that it both interrupts the thread and cancels the
Spark job. Then `runJob` may throw either `InterruptedException` or `SparkException`
(depending on which signal takes effect first), which causes the test flakiness.
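
For illustration, a minimal sketch of the kind of whitelist the quoted comment
refers to (not Spark's actual code; the `isExpectedShutdownError` helper and the
message check are assumptions):

```scala
import org.apache.spark.SparkException

// Minimal sketch, not Spark's actual code: because stopping the query both
// interrupts the thread and cancels the job, either exception can surface
// from `runJob`, so a caller has to whitelist both shutdown shapes.
// `isExpectedShutdownError` and the "cancel" message check are assumptions
// made for illustration.
def isExpectedShutdownError(e: Throwable): Boolean = e match {
  case _: InterruptedException => true          // thread interrupt won the race
  case se: SparkException =>                    // job cancellation won the race
    se.getMessage != null && se.getMessage.toLowerCase.contains("cancel")
  case _ => false                               // anything else is a real failure
}
```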