One more question: is there a reason why Spark throws an error when too
much memory is requested, instead of capping the request at the maximum
value (as YARN does by default)?
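For context, the check involved seems to compare the requested executor
memory plus the memory overhead against yarn.scheduler.maximum-allocation-mb
and fail fast instead of capping the request. A rough illustration of that
kind of check (my own sketch, not Spark's actual code; the class and method
names here are made up):

// Illustrative sketch only: fail fast when executor memory + overhead
// exceeds the YARN scheduler's maximum container allocation.
public class ExecutorMemoryCheck {

    static void verifyExecutorMemory(int executorMemoryMb,
                                     int memoryOverheadMb,
                                     int yarnMaxAllocationMb) {
        int requiredMb = executorMemoryMb + memoryOverheadMb;
        if (requiredMb > yarnMaxAllocationMb) {
            throw new IllegalArgumentException(
                "Required executor memory (" + requiredMb + " MB) is above "
                + "the max threshold (" + yarnMaxAllocationMb + " MB) of this cluster");
        }
    }

    public static void main(String[] args) {
        // 4g executor memory + 384m overhead = 4480 MB, which fails when
        // yarn.scheduler.maximum-allocation-mb is, say, 4096.
        verifyExecutorMemory(4096, 384, 4096);
    }
}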

Thanks!

2015-02-10 17:32 GMT+01:00 Zsolt Tóth <toth.zsolt....@gmail.com>:

> Hi,
>
> I'm using Spark in yarn-cluster mode and submit the jobs programmatically
> from the client in Java. I ran into a few issues when I tried to set the
> resource allocation properties.
>
> 1. It looks like setting spark.executor.memory, spark.executor.cores and
> spark.executor.instances has no effect, because ClientArguments checks only
> for the command-line arguments (--num-executors, --executor-cores, etc.).
> Is it possible to use the properties in yarn-cluster mode instead of the
> command-line arguments?
>
> 2. My nodes have 5 GB of memory, but when I set --executor-memory to 4g
> (with a 384m overhead), I get an exception saying that the required
> executor memory is above the max threshold of this cluster. It looks like
> this threshold is the value of the yarn.scheduler.maximum-allocation-mb
> property. Is that correct?
>
> Thanks,
> Zsolt
>
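Re: question 1 above, a possible workaround when submitting programmatically
in yarn-cluster mode is to pass the resource settings as the same flags that
spark-submit would use, since ClientArguments only parses those. The sketch
below assumes the Spark 1.x org.apache.spark.deploy.yarn.Client API; the
constructor signatures, flag names, and the paths/class names used here are
placeholders to verify against the Spark version in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.deploy.yarn.Client;
import org.apache.spark.deploy.yarn.ClientArguments;

public class YarnClusterSubmit {
    public static void main(String[] args) {
        // Resource settings passed as command-line style flags, since the
        // spark.executor.* properties do not appear to be picked up here.
        String[] clientArgs = new String[] {
            "--jar", "/path/to/my-app.jar",      // placeholder application jar
            "--class", "com.example.MyApp",      // placeholder main class
            "--num-executors", "4",
            "--executor-memory", "4g",
            "--executor-cores", "2"
        };

        SparkConf sparkConf = new SparkConf();
        Configuration hadoopConf = new Configuration();

        // Assumed Spark 1.x API: ClientArguments(String[], SparkConf) and
        // Client(ClientArguments, Configuration, SparkConf).
        ClientArguments cArgs = new ClientArguments(clientArgs, sparkConf);
        new Client(cArgs, hadoopConf, sparkConf).run();
    }
}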
