I remember asking this question on the Spark user list, and parallelize() was the suggested option for running a closure on all Spark workers. Paolo, I like the foreachPartition() idea: maybe we can create a fake RDD with a partition count equal to the number of Spark workers and then map each partition to the corresponding worker.
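A minimal sketch of the idea above, assuming a known worker count (`numWorkers` and `startIgniteWorker()` are illustrative placeholders, not part of any real API discussed in this thread):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object OneTaskPerWorker {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("one-task-per-worker")
    val sc = new SparkContext(conf)

    // Assumed worker count; in practice you might derive it from
    // sc.getExecutorMemoryStatus.size or pass it in explicitly.
    val numWorkers = 4

    // Fake RDD with exactly one partition per worker.
    val dummy = sc.parallelize(0 until numWorkers, numWorkers)

    dummy.foreachPartition { _ =>
      // Runs once per partition; with one partition per worker (and
      // enough free task slots) each worker executes this exactly once.
      // Caveat: Spark does not guarantee partition-to-worker placement,
      // so without extra coordination two partitions could land on the
      // same worker.
      startIgniteWorker() // hypothetical per-worker initialization
    }
  }

  def startIgniteWorker(): Unit = {
    // placeholder for the per-worker startup logic
  }
}
```

Note that this relies on the scheduler spreading the partitions evenly, which is likely but not guaranteed; a singleton guard inside the closure would make it safe against duplicate placement.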
