[
https://issues.apache.org/jira/browse/SPARK-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14113062#comment-14113062
]
Marcelo Vanzin commented on SPARK-3215:
---------------------------------------
Hi Matei, sorry if I'm missing what you're trying to convey, but I don't see
how any of your questions affect the choice of RPC. The (very) high-level API
in the proposal is an async API based on futures, which is a very
well-understood idiom. How you translate that into the underlying RPC layer,
while important from an implementation perspective, is largely irrelevant to
the client using the API, in my view.
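To make the point concrete, here is a minimal sketch of what such a futures-based client API could look like. This is purely illustrative: `RemoteSparkClient`, `LocalStubClient`, and `submit` are hypothetical names from this sketch, not identifiers from the actual proposal, and the stub runs jobs on a local thread pool rather than over any real RPC transport.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Hypothetical client-facing interface: the caller sees only futures,
// regardless of which RPC layer (Akka or otherwise) sits underneath.
interface RemoteSparkClient extends AutoCloseable {
    <T> CompletableFuture<T> submit(Supplier<T> job);
    void close();
}

// Toy in-process stand-in: runs "jobs" on a thread pool instead of a
// remote SparkContext, just to exercise the async idiom.
class LocalStubClient implements RemoteSparkClient {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    @Override
    public <T> CompletableFuture<T> submit(Supplier<T> job) {
        // In a real implementation this would serialize the request and
        // send it over the wire; the caller's code would not change.
        return CompletableFuture.supplyAsync(job, pool);
    }

    @Override
    public void close() {
        pool.shutdown();
    }
}

public class Demo {
    public static void main(String[] args) throws Exception {
        try (RemoteSparkClient client = new LocalStubClient()) {
            CompletableFuture<Integer> result = client.submit(() -> 40 + 2);
            // The client blocks only when it chooses to.
            System.out.println(result.get());
        }
    }
}
```

The point of the sketch is that swapping the transport behind `submit` changes nothing for callers, which is why the RPC choice is an implementation detail from the client's perspective.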
> Add remote interface for SparkContext
> -------------------------------------
>
> Key: SPARK-3215
> URL: https://issues.apache.org/jira/browse/SPARK-3215
> Project: Spark
> Issue Type: New Feature
> Components: Spark Core
> Reporter: Marcelo Vanzin
> Labels: hive
> Attachments: RemoteSparkContext.pdf
>
>
> A quick description of the issue: as part of running Hive jobs on top of
> Spark, it's desirable to have a SparkContext that is running in the
> background and listening for job requests for a particular user session.
> Running multiple contexts in the same JVM is not a very good solution. Not
> only does SparkContext currently have issues sharing the same JVM among
> multiple instances, but doing so also turns the JVM running the contexts
> into a huge bottleneck in the system.
> So I'm proposing a solution where we have a SparkContext running in a
> separate process and listening for requests from the client application via
> some RPC interface (most probably Akka).
> I'll attach a document shortly with the current proposal. Let's use this bug
> to discuss the proposal and any other suggestions.
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]