Hi Mathieu,

What does this approach provide that the norm lacks?
So basically each node has its own master in this model. Are these supposed to be individual standalone servers?

Thanks,

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

On 19 May 2016 at 18:45, Mathieu Longtin <math...@closetwork.org> wrote:

> First, a bit of context:
> We use Spark on a platform where each user starts workers as needed. This
> has the advantage that all permission management is handled by the OS, so
> users can only read files they have permission to.
>
> To do this, we have a utility that does the following:
> - start a master
> - start worker managers on a number of servers
> - "submit" the Spark driver program
> - the driver then talks to the master and tells it how many executors it needs
> - the master tells the worker nodes to start executors, which talk to the driver
> - the executors are started
>
> From here on, the master doesn't do much, and neither do the process
> managers on the worker nodes.
>
> What I would like to do is simplify this to:
> - start the driver program
> - start executors on a number of servers, telling them where to find the driver
> - the executors connect directly to the driver
>
> Is there a way I could do this without the master and worker managers?
>
> Thanks!
>
> --
> Mathieu Longtin
> 1-514-803-8977
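For readers following along, the per-user launch sequence Mathieu describes maps roughly onto Spark's stock standalone scripts. This is a hedged dry-run sketch, not his actual utility: the install path, host list, and job name are assumptions, and the script only echoes the commands it would run. (`start-slave.sh` was the worker-launcher name in Spark 1.x.)

```shell
#!/bin/sh
# Sketch of the launch sequence from the thread, as a dry run.
# SPARK_HOME, the node list, and my_job.py are assumed, not from the thread.
SPARK_HOME=/opt/spark
MASTER_URL="spark://$(hostname):7077"   # 7077 is Spark's default master port

# 1. Start a master (it runs as the invoking user, so OS file
#    permissions apply to everything the cluster can read).
echo "${SPARK_HOME}/sbin/start-master.sh"

# 2. Start a worker manager on each server, pointed at the master.
for host in node1 node2; do             # assumed server list
  echo "ssh ${host} ${SPARK_HOME}/sbin/start-slave.sh ${MASTER_URL}"
done

# 3. Submit the driver; it asks the master for executors, and the
#    master tells the workers to start them.
echo "${SPARK_HOME}/bin/spark-submit --master ${MASTER_URL} my_job.py"
```

On the masterless flow Mathieu asks about: in that era of Spark, executors were ultimately `org.apache.spark.executor.CoarseGrainedExecutorBackend` JVM processes that are handed the driver's URL at startup, so launching them by hand and pointing them at the driver is conceivable, but it relies on internal, unsupported interfaces rather than a documented deployment mode.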