On Wed, Jul 23, 2014 at 4:19 PM, Andrew Ash <and...@andrewash.com> wrote:
> In standalone mode, each SparkContext you initialize gets its own set of
> executors across the cluster. So for example if you have two shells open,
> they'll each get two JVMs on each worker machine in the cluster.
Dumb question offline -- do you mean they'll each get one JVM on each worker? Or, if it really is two, what drives the two per worker?
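
For concreteness, here is a minimal sketch of the scenario under discussion: two applications (e.g. two shells) each creating their own SparkContext against the same standalone master, so each registers for its own executor JVMs on the workers. The master URL and the spark.cores.max value are hypothetical, and this is only an illustration of the setup, not a claim about how many executors per worker each app receives:

    import org.apache.spark.{SparkConf, SparkContext}

    object AppOne {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setMaster("spark://master:7077") // hypothetical standalone master URL
          .setAppName("app-one")
          // Cap this app's total cores so a second app (e.g. another shell
          // pointed at the same master) can also be allocated executors.
          .set("spark.cores.max", "4")

        val sc = new SparkContext(conf)
        // The executors launched for this SparkContext belong to this
        // application only; a second SparkContext gets its own set.
        println(sc.parallelize(1 to 100).sum())
        sc.stop()
      }
    }

Running a second copy of this with setAppName("app-two") while app-one is still alive reproduces the "two shells open" case from the quoted paragraph.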