Github user jose-torres commented on a diff in the pull request:
https://github.com/apache/spark/pull/21428#discussion_r194950709
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/shuffle/RPCContinuousShuffleReader.scala ---
@@ -79,10 +77,10 @@ private[shuffle] class UnsafeRowReceiver(
       private val writerEpochMarkersReceived = Array.fill(numShuffleWriters)(false)
       private val executor = Executors.newFixedThreadPool(numShuffleWriters)
-      private val completion = new ExecutorCompletionService[UnsafeRowReceiverMessage](executor)
+      private val completion = new ExecutorCompletionService[RPCContinuousShuffleMessage](executor)
--- End diff ---
It cannot be. There's a deadlock scenario where the queue is filled with
records from epoch N before all writers have sent the marker for epoch N - 1.
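
To make the scenario concrete, here's a minimal standalone sketch (hypothetical message types and writer threads, not the actual Spark classes), assuming one bounded queue per writer drained through the `ExecutorCompletionService` as in the diff above. A writer that races ahead into the next epoch only blocks on its own queue, so the remaining writers' epoch markers still reach the reader; with a single shared bounded queue, those next-epoch records could occupy every slot before the last epoch N - 1 marker is enqueued, and the reader would wait forever.

```scala
// Sketch only: Msg/RowMsg/MarkerMsg and the writer threads are hypothetical stand-ins.
import java.util.concurrent.{ArrayBlockingQueue, Callable, ExecutorCompletionService, Executors}

sealed trait Msg { def writerId: Int }
case class RowMsg(writerId: Int, value: String) extends Msg
case class MarkerMsg(writerId: Int) extends Msg

object PerWriterQueueSketch {
  private def thread(body: => Unit): Thread =
    new Thread(new Runnable { override def run(): Unit = body })

  def main(args: Array[String]): Unit = {
    val numShuffleWriters = 2
    val queueSize = 2
    // One bounded queue per writer; a fast writer can only block on its own queue.
    val queues = Array.fill(numShuffleWriters)(new ArrayBlockingQueue[Msg](queueSize))

    // Writer 0 finishes epoch 0 immediately and races ahead with more epoch-1 rows
    // than its queue can hold, so it blocks on its own queue (back-pressure).
    val writer0 = thread {
      try {
        queues(0).put(MarkerMsg(0))
        (1 to queueSize + 1).foreach(i => queues(0).put(RowMsg(0, s"epoch1-row$i")))
      } catch { case _: InterruptedException => () } // demo cleanup
    }
    // Writer 1 is slow to finish epoch 0.
    val writer1 = thread {
      Thread.sleep(300)
      queues(1).put(RowMsg(1, "epoch0-row"))
      queues(1).put(MarkerMsg(1))
    }
    writer0.start(); writer1.start()

    // Reader side, mirroring the diff: one take() task per writer, multiplexed
    // through an ExecutorCompletionService so no single writer can starve the rest.
    val executor = Executors.newFixedThreadPool(numShuffleWriters)
    val completion = new ExecutorCompletionService[Msg](executor)
    def takeTask(writerId: Int) = new Callable[Msg] {
      override def call(): Msg = queues(writerId).take()
    }
    (0 until numShuffleWriters).foreach(w => completion.submit(takeTask(w)))

    val writerEpochMarkersReceived = Array.fill(numShuffleWriters)(false)
    while (!writerEpochMarkersReceived.forall(identity)) {
      completion.take().get() match {
        case RowMsg(w, v) =>
          println(s"epoch 0 row from writer $w: $v")
          completion.submit(takeTask(w)) // keep draining this writer within the epoch
        case MarkerMsg(w) =>
          writerEpochMarkersReceived(w) = true // stop draining this writer until next epoch
      }
    }
    println("epoch 0 complete") // reached even though writer 0's queue is full of epoch-1 rows

    executor.shutdownNow()
    writer0.interrupt() // writer 0 is still blocked on its full queue; fine for this sketch
    writer0.join(); writer1.join()
  }
}
```

Running the sketch prints the epoch-0 row and "epoch 0 complete" even while writer 0 sits blocked on its full queue, which is the behavior the per-writer layout is meant to preserve.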
---