Agree. I did something similar last week. The only issue is creating a subclass
of Configuration that implements the Serializable interface.
Demi's solution is a bit overkill for this simple requirement.
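For illustration, a minimal sketch of the idea being discussed: since Hadoop's Configuration is not Serializable, you wrap the settings in a small serializable holder that the driver builds and Spark ships to the executors with the closure. The class and method names below (JobConfig, roundTrip) are hypothetical, and plain Java serialization stands in for what Spark does when shipping the object:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Hypothetical serializable holder for job settings, standing in for a
// Serializable wrapper/subclass around a non-serializable Configuration.
case class JobConfig(settings: Map[String, String]) extends Serializable {
  def get(key: String): Option[String] = settings.get(key)
}

object ConfigDemo {
  // Round-trips the object through Java serialization, roughly what Spark
  // does when it ships a closure-captured object from driver to executors.
  def roundTrip(conf: JobConfig): JobConfig = {
    val bytes = new ByteArrayOutputStream()
    val out   = new ObjectOutputStream(bytes)
    out.writeObject(conf)
    out.close()
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    in.readObject().asInstanceOf[JobConfig]
  }

  def main(args: Array[String]): Unit = {
    // Driver side: read/build the config once…
    val conf = JobConfig(Map("env" -> "dev", "batch.size" -> "100"))
    // …executor side: the deserialized copy carries the same settings.
    val copy = roundTrip(conf)
    println(copy.get("env")) // prints Some(dev)
  }
}
```

In a real job the JobConfig instance would simply be referenced inside an RDD operation (e.g. `rdd.map { x => conf.get("env"); ... }`) and Spark would serialize it automatically.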

On Tuesday, December 16, 2014, Gerard Maas <gerard.m...@gmail.com> wrote:

> Hi Demi,
>
> Thanks for sharing.
>
> What we usually do is let the driver read the configuration for the job
> and pass the config object to the actual job as a serializable object. That
> way we avoid the need for a centralized config-sharing point that has to be
> accessed from the workers, as in your solution.
> We use Chef to write the job's configuration for the environment it
> belongs to (dev, prod, ...) when the job is deployed to a host node. That
> config file is used to instantiate the job.
>
> We can maintain any number of different environments in that way.
>
> kr, Gerard.
>
>
> On Fri, Dec 12, 2014 at 6:38 PM, Demi Ben-Ari <demi.ben...@gmail.com> wrote:
>>
>> Hi to all,
>>
>> Our problem was passing configuration from Spark Driver to the Slaves.
>> After a lot of time spent figuring out how things work, this is the
>> solution I came up with.
>> Hope this will be helpful for others as well.
>>
>>
>> You can read about it in my Blog Post
>> <http://progexc.blogspot.co.il/2014/12/spark-configuration-mess-solved.html>
>>
>> --
>> Enjoy,
>> Demi Ben-Ari
>> Senior Software Engineer
>> Windward LTD.
>>
>
