Github user jose-torres commented on a diff in the pull request:
https://github.com/apache/spark/pull/21428#discussion_r194962429
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/shuffle/RPCContinuousShuffleReader.scala ---
@@ -68,7 +66,7 @@ private[shuffle] class UnsafeRowReceiver(
}
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
- case r: UnsafeRowReceiverMessage =>
+ case r: RPCContinuousShuffleMessage =>
queues(r.writerId).put(r)
--- End diff ---
That's a very strange characteristic for an RPC framework.
I don't know what backpressure could mean other than a receiver blocking a
sender from sending more data. In any case, the final shuffle mechanism isn't
going to use the RPC framework, so I've added a comment referencing that. (We can discuss
in a later PR whether we want to leave this mechanism lying around or remove it
once we're confident the TCP-based one is working.)
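
For illustration only, here's a minimal sketch of what "backpressure" means in this scheme: a bounded, blocking queue on the receive path stalls the RPC handler thread, which in turn delays the reply to the sender. This is not the actual implementation; only `writerId` and the per-writer `queues` structure come from the diff above, while `ShuffleMessage`, the queue capacity, and the helper names are made up for the example.

import java.util.concurrent.ArrayBlockingQueue

// Hypothetical message type standing in for RPCContinuousShuffleMessage.
case class ShuffleMessage(writerId: Int, row: Array[Byte])

class BlockingReceiverSketch(numWriters: Int, capacity: Int) {
  // One bounded queue per writer; put() blocks when the queue is full.
  private val queues =
    Array.fill(numWriters)(new ArrayBlockingQueue[ShuffleMessage](capacity))

  // Called on the RPC handler thread. Blocking here delays the reply to the
  // sender, which is the only backpressure signal this scheme can give.
  def receive(msg: ShuffleMessage): Unit =
    queues(msg.writerId).put(msg)

  // Called by the reader; draining a queue unblocks the handler.
  def poll(writerId: Int): ShuffleMessage =
    queues(writerId).take()
}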
---