[ https://issues.apache.org/jira/browse/SPARK-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335264#comment-14335264 ]
Reynold Xin commented on SPARK-5124:
------------------------------------

It seems to me the 2nd would be useful in some cases, but it is not really necessary and may even belong in a layer above what is proposed here. It might also be too expensive to track, considering you can have thousands of RPC messages a second, and it is subject to memory leaks if the receiver doesn't properly discard the message. As you said, none of the Spark code uses it at the moment.

> Standardize internal RPC interface
> ----------------------------------
>
>                 Key: SPARK-5124
>                 URL: https://issues.apache.org/jira/browse/SPARK-5124
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>            Reporter: Reynold Xin
>            Assignee: Shixiong Zhu
>         Attachments: Pluggable RPC - draft 1.pdf, Pluggable RPC - draft 2.pdf
>
> In Spark we use Akka as the RPC layer. It would be great if we can
> standardize the internal RPC interface to facilitate testing. This will also
> provide the foundation to try other RPC implementations in the future.
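For illustration only, a minimal Scala sketch of the kind of pluggable RPC abstraction the attached drafts appear to describe: an environment that registers endpoints and hands back references supporting fire-and-forget and ask-style calls. The trait names (RpcEnv, RpcEndpoint, RpcEndpointRef, RpcCallContext) and method signatures are assumptions for the sketch, not the agreed-upon API.

{code}
import scala.concurrent.Future
import scala.reflect.ClassTag

// An endpoint registers with an RpcEnv and receives messages through
// partial functions, decoupling callers from any particular transport (e.g. Akka).
trait RpcEndpoint {
  def receive: PartialFunction[Any, Unit]
  def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit]
}

// Handle a receiver uses to reply to (or fail) a single ask-style message.
trait RpcCallContext {
  def reply(response: Any): Unit
  def sendFailure(e: Throwable): Unit
}

// A reference to a (possibly remote) endpoint: one-way send, or ask for a reply.
trait RpcEndpointRef {
  def send(message: Any): Unit
  def ask[T: ClassTag](message: Any): Future[T]
}

// The pluggable environment: Akka-backed today, other implementations later.
trait RpcEnv {
  def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef
  def stop(ref: RpcEndpointRef): Unit
  def shutdown(): Unit
}
{code}

Keeping the interface this small would make it straightforward to swap in an in-process implementation for testing, which is the motivation stated in the issue description.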