I wonder whether SparkConf is dynamically updated on all worker nodes or only read during initialization; if it is, it could be used to piggyback information. Otherwise I guess you are stuck with Broadcast. I have mainly hit these issues when moving legacy MapReduce operators to Spark, where MR piggybacks on the Hadoop conf pretty heavily; in Spark-native applications it's rarely required. Do you have a use case like that?
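As far as I understand it (a minimal sketch, not tested against every Spark version -- the property name `spark.myapp.tag` and the threshold map are just illustrative), SparkConf is shipped to executors when the application starts, so driver-side updates made afterwards do not propagate, whereas a broadcast variable is readable inside tasks:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Values set on SparkConf before the SparkContext is created are
// visible on executors; updating the conf on the driver later is
// NOT propagated to running executors.
val conf = new SparkConf()
  .setAppName("conf-vs-broadcast")
  .set("spark.myapp.tag", "v1")  // illustrative property, fixed at startup

val sc = new SparkContext(conf)

// A broadcast variable is shipped to each executor and can be read
// inside tasks via .value -- the supported way to get data out after
// startup.
val settings = sc.broadcast(Map("threshold" -> "0.5"))

val result = sc.parallelize(1 to 4).map { i =>
  // settings.value is resolved on the executor
  (i, settings.value("threshold"))
}.collect()
```

Note that a broadcast is read-only and only re-shipped if you create a new one, so it is a way to get data *to* executors at startup of a stage, not a general message channel.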
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>

On Fri, Nov 14, 2014 at 10:28 AM, Tobias Pfeiffer <[email protected]> wrote:

> Hi,
>
> (this is related to my previous question about stopping the
> StreamingContext)
>
> is there any way to send a message from the driver to the executors? There
> is all this Akka machinery running, so it should be easy to have something
> like
>
>     sendToAllExecutors(message)
>
> on the driver and
>
>     handleMessage {
>       case _ => ...
>     }
>
> on the executors, right? Surely at least for Broadcast.unpersist() such a
> thing must exist, so can I use it somehow (dirty way is also ok) to send a
> message to my Spark nodes?
>
> Thanks
> Tobias
