Yes, that's what I meant (thanks for the correction).

From the tests run, it seems best to start the workers with the default memory
(or slightly higher) and give most of the memory to the executors, since most
of the work is done in the executor JVMs and the worker JVM acts more like a
node manager for that node.
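
In practice that split can be expressed through the standalone settings in
conf/spark-env.sh. A minimal sketch (the memory figures are placeholders for
illustration, not tuned recommendations):

    # conf/spark-env.sh on each worker node
    SPARK_DAEMON_MEMORY=512m   # keeps the worker (and master) daemon JVM small
    SPARK_WORKER_MEMORY=14g    # total memory the worker may grant to executors

The application side then claims most of that per node via the
spark.executor.memory property (e.g. 13g in this sketch) when launching
spark-shell or the driver.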


On Sat, Jan 25, 2014 at 6:32 AM, Archit Thakur <[email protected]> wrote:

>
> On Fri, Jan 24, 2014 at 11:29 PM, Manoj Samel <[email protected]> wrote:
>
>> On a cluster with HDFS + Spark (in standalone deploy mode), there is a
>> master node + 4 worker nodes. When a spark-shell connects to the master, it
>> creates 4 executor JVMs on each of the 4 worker nodes.
>>
>
> No, it creates 1 executor JVM on each of the 4 worker nodes (4 in total).
>
>>
>> When the application reads HDFS files and does computations on RDDs, what
>> work gets done on the master, worker, executor, and driver?
>>
>> Thanks,
>>
