[ https://issues.apache.org/jira/browse/SYSTEMML-2420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16521866#comment-16521866 ]

Matthias Boehm commented on SYSTEMML-2420:
------------------------------------------

Yes, in general this is a good start. It would be awesome to reuse Spark's 
{{RpcEnv}} because it would automatically and consistently pick up the 
{{spark.rpc.*}} configurations, along with any changes to their default values 
across Spark versions. However, if this creates problems (e.g., if it is 
difficult to realize), we can simply use Netty directly. If I remember 
correctly, that is how Spark implements its RPC communication anyway.
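
Since Spark's {{RpcEnv}} is internal API, a minimal sketch of the fallback (a hand-rolled transport for pull/push between workers and the ps) may help. This is a hypothetical illustration over plain TCP sockets rather than Netty, with an invented line-based protocol ("PUSH key value" / "PULL key"); it is not Spark's or SystemML's actual implementation:

```java
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.concurrent.*;

// Minimal parameter-server sketch over plain TCP sockets, a stand-in for
// Spark's internal RpcEnv / a Netty transport. Hypothetical illustration only.
public class ParamServerSketch {
    // Parameter store held at the driver-side ps endpoint.
    private final Map<String, Double> params = new ConcurrentHashMap<>();
    private final ServerSocket server;

    public ParamServerSketch(int port) throws IOException {
        server = new ServerSocket(port); // port 0 picks an ephemeral port
    }

    public int port() { return server.getLocalPort(); }

    // Serve until the socket is closed; one line-based request per
    // connection: "PUSH key value" or "PULL key".
    public void serve() {
        try {
            while (true) {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String[] req = in.readLine().split(" ");
                    if (req[0].equals("PUSH")) {
                        // Accumulate pushed gradients into the parameter.
                        params.merge(req[1], Double.parseDouble(req[2]), Double::sum);
                        out.println("OK");
                    } else { // PULL
                        out.println(params.getOrDefault(req[1], 0.0));
                    }
                }
            }
        } catch (IOException closed) { /* server shut down */ }
    }

    public void stop() throws IOException { server.close(); }

    // Worker-side helper: one synchronous round-trip call.
    public static String call(int port, String request) throws IOException {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(request);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        ParamServerSketch ps = new ParamServerSketch(0);
        Thread t = new Thread(ps::serve);
        t.start();
        call(ps.port(), "PUSH w 1.5");        // worker 1 pushes a gradient
        call(ps.port(), "PUSH w 2.5");        // worker 2 pushes a gradient
        System.out.println("w = " + call(ps.port(), "PULL w")); // prints w = 4.0
        ps.stop();
        t.join();
    }
}
```

Reusing {{RpcEnv}} would replace all of the socket plumbing above with the {{spark.rpc.*}}-configured transport, which is exactly why it is the preferred option.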

> Communication between ps and workers
> ------------------------------------
>
>                 Key: SYSTEMML-2420
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-2420
>             Project: SystemML
>          Issue Type: Sub-task
>            Reporter: LI Guobao
>            Assignee: LI Guobao
>            Priority: Major
>
> This task aims to implement the parameter exchange between the ps and the 
> workers. We could leverage Spark RPC to set up a ps endpoint on the driver 
> node, so that the ps service can be discovered by workers over the network. 
> The workers could then invoke the pull/push methods via RPC using the 
> registered endpoint of the ps service. In detail, this task consists of 
> registering the ps endpoint in the Spark RPC framework and using RPC to 
> invoke the target methods from the worker side. Note that Spark RPC is 
> implemented in Scala, so we need to wrap it in order to use it from Java. 
> Overall, we could register the ps service with _RpcEndpoint_ and invoke the 
> service with _RpcEndpointRef_.
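
The register-then-invoke pattern described above can be sketched as an in-process Java mock of Spark's Scala {{RpcEndpoint}} / {{RpcEndpointRef}} pair. All names below are illustrative analogues, not the real Spark classes, and the dispatch is local rather than over the network:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical Java mirror of the RpcEndpoint / RpcEndpointRef pattern:
// an endpoint registers under a name, a caller obtains a ref by name and
// invokes it synchronously. In-process sketch, not Spark's actual API.
public class RpcSketch {
    // Message-handler contract, analogous to RpcEndpoint.receiveAndReply.
    interface RpcEndpoint {
        Object receiveAndReply(Object message);
    }

    // Name -> endpoint registry, the role that Spark's RpcEnv plays.
    private final Map<String, RpcEndpoint> registry = new ConcurrentHashMap<>();

    public void setupEndpoint(String name, RpcEndpoint endpoint) {
        registry.put(name, endpoint);
    }

    // Handle to a registered endpoint, analogous to RpcEndpointRef.askSync.
    public class RpcEndpointRef {
        private final String name;
        RpcEndpointRef(String name) { this.name = name; }
        public Object askSync(Object message) {
            return registry.get(name).receiveAndReply(message);
        }
    }

    public RpcEndpointRef endpointRef(String name) {
        return new RpcEndpointRef(name);
    }

    public static void main(String[] args) {
        RpcSketch env = new RpcSketch();
        Map<String, double[]> store = new ConcurrentHashMap<>();
        store.put("weights", new double[]{0.1, 0.2});
        // Driver side: register the ps endpoint; it replies to a pull
        // request (the parameter name) with the stored values.
        env.setupEndpoint("ps", msg -> store.get((String) msg));
        // Worker side: look up the ref by name and pull the parameters.
        double[] w = (double[]) env.endpointRef("ps").askSync("weights");
        System.out.println(w.length); // prints 2
    }
}
```

Wrapping the real Scala classes for Java would follow the same shape: a thin Java facade over {{RpcEnv.setupEndpoint}} on the driver and over the endpoint ref's ask/send methods on the workers.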



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
