[
https://issues.apache.org/jira/browse/SPARK-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hari Shreedharan updated SPARK-3110:
------------------------------------
Description:
The idea is that for long-running processes like streaming, you'd want the AM
to come back up and reuse the same executors, so that it can retrieve blocks
from the executors' memory; many streaming sources like Flume cannot replay
data once it has been taken out. Even for those that can, the retention period
before data "expires" means some data could still be lost. This is the first
step in a series of patches for this issue. The next is to get the AM to find
the executors. My current plan is to use HDFS to keep track of where the
executors are running, and then communicate with them via Akka to get a block
list.
I plan to expose this via SparkSubmit as the last step, once all of the other
pieces are in place.
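A minimal sketch of what the HDFS-backed executor registry might look like,
under stated assumptions: each executor records its host:port as one line in a
shared file, and a restarted AM reads that file to find the executors before
contacting them (e.g. over Akka) for their cached block lists. The class and
method names (ExecutorRegistry, register, knownExecutors) are hypothetical,
not actual Spark APIs, and the local file system stands in for HDFS; real code
would go through the Hadoop FileSystem API instead.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Hypothetical sketch: a registry recording where executors run so a
// restarted AM can rediscover them. A local file simulates the HDFS file.
public class ExecutorRegistry {
    private final Path registryFile;

    public ExecutorRegistry(Path registryFile) {
        this.registryFile = registryFile;
    }

    // Each executor appends its host:port when it starts up.
    public void register(String host, int port) throws IOException {
        Files.write(registryFile,
                (host + ":" + port + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // A restarted AM reads the file to learn which executors exist; it
    // would then ask each one for the list of blocks it still holds.
    public List<String> knownExecutors() throws IOException {
        return Files.readAllLines(registryFile, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("executors", ".txt");
        ExecutorRegistry registry = new ExecutorRegistry(tmp);
        registry.register("worker-1.example.com", 48001);
        registry.register("worker-2.example.com", 48002);
        System.out.println(registry.knownExecutors());
        Files.deleteIfExists(tmp);
    }
}
```

The append-only, one-line-per-executor format keeps writes independent, so
executors never need to coordinate; stale entries would be pruned when the AM
fails to reach an executor.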
> Add a "ha" mode in YARN mode to keep executors in between restarts
> ------------------------------------------------------------------
>
> Key: SPARK-3110
> URL: https://issues.apache.org/jira/browse/SPARK-3110
> Project: Spark
> Issue Type: Bug
> Reporter: Hari Shreedharan
>
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]