[
https://issues.apache.org/jira/browse/SPARK-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334166#comment-14334166
]
Marcelo Vanzin commented on SPARK-5124:
---------------------------------------
Hi [~zsxwing], I briefly looked over the design and parts of the
implementation, and I have some comments. Overall this looks OK; it does look
a lot like Akka's actor system, but I don't think there would be a problem
implementing it over a different RPC library.
One thing that I found very confusing is {{RpcEndpoint}} vs.
{{NetworkRpcEndpoint}}. The "R" in RPC means "remote", which sort of implies
networking. I think Reynold touched on this when he talked about using an event
loop for "local actors". Perhaps a more flexible approach would be something
like the following (sketched in code after the list):
- A generic {{Endpoint}} that defines the {{receive()}} method and other
generic interfaces
- {{RpcEnv}} takes an {{Endpoint}} and a name to register endpoints for remote
connections. Things like "onConnected" and "onDisassociated" become messages
passed to {{receive()}}, much like in Akka today. This means there's no
need for a specialized {{RpcEndpoint}} interface.
- "local" Endpoints could be exposed directly, without the need for an
{{RpcEnv}}. For example, as a getter in {{SparkEnv}}. You could have some
wrapper to expose {{send()}} and {{ask()}} so that the client interface looks
similar for remote and local endpoints.
- The default {{Endpoint}} has no thread-safety guarantees. You can wrap an
{{Endpoint}} in an {{EventLoop}} if you want messages to be handled through a
queue, or synchronize your {{receive()}} method (although that can block the
dispatcher thread, which could be bad). But this would easily allow actors to
process multiple messages concurrently, if desired.
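To make that concrete, here's a rough Scala sketch of the shape I have in
mind; all of the names and signatures below ({{Endpoint}}, {{EndpointRef}},
{{RpcEnv}}, {{EventLoopEndpoint}}) are made up for illustration, not existing
Spark APIs:
{code:scala}
// Rough sketch only -- every name here is hypothetical, not an
// existing Spark API.
import java.util.concurrent.Executors
import scala.concurrent.Future

// Lifecycle events delivered as plain messages, like Akka does today.
sealed trait LifecycleEvent
case class Connected(remoteAddress: String) extends LifecycleEvent
case class Disconnected(remoteAddress: String) extends LifecycleEvent

// A generic endpoint: just a message handler, with no networking and
// no thread-safety guarantees by default.
trait Endpoint {
  def receive: PartialFunction[Any, Unit]
}

// Client-side handle; the same interface works for local and remote
// endpoints, so callers don't care where the endpoint lives.
trait EndpointRef {
  def send(message: Any): Unit        // fire-and-forget
  def ask[T](message: Any): Future[T] // request/reply
}

// The RPC environment is only involved when an endpoint must be
// reachable over the network.
trait RpcEnv {
  def register(name: String, endpoint: Endpoint): EndpointRef
}

// Opt-in wrapper that funnels messages through a single-threaded
// queue, giving actor-like, one-message-at-a-time semantics.
class EventLoopEndpoint(underlying: Endpoint) extends Endpoint {
  private val eventLoop = Executors.newSingleThreadExecutor()

  override def receive: PartialFunction[Any, Unit] = {
    case message => eventLoop.execute(new Runnable {
      override def run(): Unit = {
        if (underlying.receive.isDefinedAt(message)) {
          underlying.receive(message)
        }
      }
    })
  }
}
{code}
A "local" endpoint would then just skip {{RpcEnv}} and be handed out directly
(e.g. from {{SparkEnv}}) behind the same {{EndpointRef}}-style wrapper, so
client code looks identical either way.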
What do you think?
> Standardize internal RPC interface
> ----------------------------------
>
> Key: SPARK-5124
> URL: https://issues.apache.org/jira/browse/SPARK-5124
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Reporter: Reynold Xin
> Assignee: Shixiong Zhu
> Attachments: Pluggable RPC - draft 1.pdf, Pluggable RPC - draft 2.pdf
>
>
> In Spark we use Akka as the RPC layer. It would be great if we could
> standardize the internal RPC interface to facilitate testing. This will also
> provide the foundation to try other RPC implementations in the future.