[
https://issues.apache.org/jira/browse/MESOS-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612212#comment-14612212
]
Vinod Kone commented on MESOS-2985:
-----------------------------------
Hey [~marco-mesos]. How come I'm unable to resolve this issue as "Won't Fix"
with the new workflow? When I click Workflow -> Resolve, I don't even see an
option to set the resolution type!?
> Wrong spark.executor.memory when using different EC2 master and worker
> machine types
> ------------------------------------------------------------------------------------
>
> Key: MESOS-2985
> URL: https://issues.apache.org/jira/browse/MESOS-2985
> Project: Mesos
> Issue Type: Bug
> Components: ec2
> Reporter: Stefano Parmesan
>
> _(this is a mirror of
> [SPARK-8726|https://issues.apache.org/jira/browse/SPARK-8726])_
> By default, {{spark.executor.memory}} is set to
> [min(slave_ram_kb, master_ram_kb)|https://github.com/mesos/spark-ec2/blob/e642aa362338e01efed62948ec0f063d5fce3242/deploy_templates.py#L32].
> When the master and the workers use the same instance type you will not
> notice, but when they differ (which makes sense, as the master cannot be a
> spot instance, and using a big machine for the master would be a waste of
> resources) the default amount of memory given to each worker is capped at
> the amount of RAM available on the master. For example, in a cluster with
> an m1.small master (1.7GB RAM) and one m1.large worker (7.5GB RAM),
> {{spark.executor.memory}} ends up set to 512MB.
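To make the capping concrete, here is a minimal Python sketch of the behavior
described above. Only the {{min()}} call mirrors the linked line in
{{deploy_templates.py}}; the function name {{executor_memory_mb}}, the 0.75
headroom factor, and the sample RAM figures are illustrative assumptions, not
the actual script.

{code:python}
# Illustrative sketch only: the min() mirrors the linked line in
# deploy_templates.py; executor_memory_mb, the 0.75 headroom factor,
# and the sample RAM figures below are hypothetical.

def executor_memory_mb(master_ram_kb, slave_ram_kb):
    # The template sizes executors from the *smaller* of the two
    # totals, so a small master caps every worker's executor memory.
    system_ram_kb = min(slave_ram_kb, master_ram_kb)
    system_ram_mb = system_ram_kb // 1024
    # Reserve some headroom for the OS and daemons (factor assumed here).
    return int(system_ram_mb * 0.75)

# m1.small master (~1.7GB) vs. m1.large worker (~7.5GB): the result is
# derived entirely from the master's total; the worker's 7.5GB is ignored.
print(executor_memory_mb(master_ram_kb=1740800, slave_ram_kb=7864320))
{code}

Whatever the exact reservation formula the real script applies, the point is
that the worker's RAM never influences the result once the master is the
smaller machine, which is why the reporter's m1.large workers were left with
executors sized for an m1.small.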
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)