from a user's perspective.
Thanks,
Rod
> long as the RDD had at least one partition on each executor
>
> Deenar
In turn, my function would be called only once per executor as long as the
RDD had at least one partition on each executor.
Deenar
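
A minimal sketch of that approach, assuming the Spark 1.x Java API and a
known executor count (4 here, an assumption). With exactly one partition per
executor, the mapPartitions body runs once on each executor, though Spark
does not strictly pin partitions to executors:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Iterator;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.FlatMapFunction;

    public class OncePerExecutor {
      public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("once-per-executor"));
        int numExecutors = 4; // assumption: matches the cluster's executor count

        // One partition per executor, so the mapPartitions function body
        // executes once per partition, i.e. once on each executor.
        sc.parallelize(Arrays.asList(1, 2, 3, 4), numExecutors)
          .mapPartitions(new FlatMapFunction<Iterator<Integer>, String>() {
            @Override
            public Iterable<String> call(Iterator<Integer> it) {
              System.out.println("Runs once on this executor");
              return Collections.emptyList();
            }
          })
          .count(); // an action to force evaluation

        sc.stop();
      }
    }
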
> values().mapPartitions(new FlatMapFunction<Iterator<String>, String>() {
>   @Override
>   public Iterable<String> call(Iterator<String> arg0) throws Exception {
>     System.out.println("Usage should call my jar once: " + arg0);
>     return Lists.newArrayList();
>   }
> });
With more than one partition per executor, rdd.mapPartitions
would return multiple results.
Deenar
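
For that multiple-partitions case, one common pattern (a sketch, not from
this thread) is to guard the per-executor work with a JVM-wide flag, so
extra partitions that land on the same executor become no-ops.
doPerExecutorSetup is a hypothetical placeholder for the real work:

    import java.util.Collections;
    import java.util.Iterator;
    import java.util.concurrent.atomic.AtomicBoolean;
    import org.apache.spark.api.java.function.FlatMapFunction;

    public class RunOncePerJvm implements FlatMapFunction<Iterator<Integer>, String> {
      // Static, so there is exactly one flag per executor JVM.
      private static final AtomicBoolean done = new AtomicBoolean(false);

      @Override
      public Iterable<String> call(Iterator<Integer> it) {
        if (done.compareAndSet(false, true)) {
          doPerExecutorSetup(); // hypothetical once-per-executor work
        }
        return Collections.emptyList();
      }

      private static void doPerExecutorSetup() {
        System.out.println("Executed once in this JVM");
      }
    }

Used as rdd.mapPartitions(new RunOncePerJvm()).count(), the setup runs at
most once per executor JVM no matter how many partitions it receives.
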
> partition the RDD to ensure that there is one element per executor.
>
> Deenar
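
A sketch of building such an RDD, assuming Spark 1.x and four executors (an
assumption); it also prints the host each partition lands on, since Spark
does not guarantee one partition per executor:

    import java.net.InetAddress;
    import java.util.Arrays;
    import java.util.Iterator;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.VoidFunction;

    public class OneElementPerExecutor {
      public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("one-element-per-executor"));
        int numExecutors = 4; // assumption

        // N elements in N partitions: one element per partition, and, with
        // luck, one partition per executor (Spark does not pin them).
        JavaRDD<Integer> oneEach =
            sc.parallelize(Arrays.asList(0, 1, 2, 3), numExecutors);

        oneEach.foreachPartition(new VoidFunction<Iterator<Integer>>() {
          @Override
          public void call(Iterator<Integer> it) throws Exception {
            System.out.println("partition ran on "
                + InetAddress.getLocalHost().getHostName());
          }
        });

        sc.stop();
      }
    }
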
> To do something similar, I just run a map on a large RDD that is hash
> partitioned, but this does not guarantee that the job would run just
> once.
>
> Deenar
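
A sketch of that hash-partitioned workaround, assuming the Spark 1.x Java
API. With many partitions every executor is likely, but not guaranteed, to
receive work, and nothing prevents the map body from running more than once
on an executor:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.spark.HashPartitioner;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;
    import scala.Tuple2;

    public class HashSpread {
      public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("hash-spread"));

        List<Tuple2<Integer, Integer>> data = new ArrayList<Tuple2<Integer, Integer>>();
        for (int i = 0; i < 10000; i++) {
          data.add(new Tuple2<Integer, Integer>(i, i));
        }

        // Many hash partitions make it likely (not certain) that every
        // executor receives at least one partition.
        JavaPairRDD<Integer, Integer> spread =
            sc.parallelizePairs(data).partitionBy(new HashPartitioner(100));

        spread.values().map(new Function<Integer, Integer>() {
          @Override
          public Integer call(Integer v) {
            return v; // stand-in for the real per-record work
          }
        }).count();

        sc.stop();
      }
    }
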