Hi Darin,

This is the piece of code
<https://github.com/mesos/spark-ec2/blob/v3/deploy_templates.py> doing the
actual work (setting the memory). As you can see, it leaves 15 GB of RAM for
the OS on a machine with more than 100 GB, 2 GB of RAM on a 10-20 GB
machine, and so on.
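
Roughly, the logic looks like this. This is only a simplified sketch: the two
tiers described above come from this mail, while the small-machine fallback
and the exact cutoffs are assumptions, so please check the linked script for
the real values.

# Simplified sketch, not the actual deploy_templates.py code.
def spark_worker_mem_mb(system_ram_mb):
    if system_ram_mb > 100 * 1024:
        overhead_mb = 15 * 1024   # leave 15 GB for the OS on >100 GB machines
    elif system_ram_mb > 10 * 1024:
        overhead_mb = 2 * 1024    # leave 2 GB on a 10-20 GB machine
    else:
        overhead_mb = 1024        # assumed fallback for smaller machines
    return max(512, system_ram_mb - overhead_mb)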
You can always set SPARK_WORKER_MEMORY/SPARK_EXECUTOR_MEMORY to change
these values.
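
For example, an explicit setting takes precedence over the computed default,
along these lines (hypothetical illustration only; the names and values below
are placeholders, not taken from the script):

import os

# An explicit SPARK_WORKER_MEMORY (e.g. "6g", typically exported in
# conf/spark-env.sh) wins over the default the template script computed.
computed_default = "6g"  # placeholder, not the actual computed value
worker_mem = os.environ.get("SPARK_WORKER_MEMORY", computed_default)
print("Using worker memory: %s" % worker_mem)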

Thanks and best regards,


On Thu, Aug 14, 2014 at 6:02 PM, Darin McBeath <ddmcbe...@yahoo.com.invalid>
wrote:

> I started up a cluster on EC2 (using the provided scripts) and specified a
> different instance type for the master and the worker nodes.  The
> cluster started fine, but when I looked at the cluster (via port 8080), it
> showed that the amount of memory available to the worker nodes did not
> match the instance type I had specified.  Instead, the amount of memory for
> the worker nodes matched the master node.  I did verify that the correct
> instance types had been started for the master and worker nodes.
>
> Curious as to whether this is expected behavior or if this might be a bug?
>
> Thanks.
>
> Darin.
>
