I am slightly confused about the "--executor-memory" setting. My YARN
cluster has a maximum container memory of 8192 MB.

When I specify "--executor-memory 8G" for my spark-shell session, no
container can be started at all. It only works once I lower the executor
memory to 7G. But then, in YARN, I see two containers per node, using 16G
of memory in total.
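
For reference, this is roughly how I am launching the shell (the --master
value below is just illustrative of my client-mode setup; --executor-memory
is the only flag I am varying):

    # fails: YARN never allocates a container for the executor
    spark-shell --master yarn-client --executor-memory 8G

    # works, but produces the numbers described below
    spark-shell --master yarn-client --executor-memory 7G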

Then the Spark UI shows that each worker has 4GB of memory, rather than
7G.
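
To put the numbers I am trying to reconcile in one place (the per-node
figure is my own reading of the YARN resource manager UI):

    max YARN container memory       : 8192 MB
    --executor-memory that works    : 7G   (8G never launches)
    containers observed per node    : 2, using 16G in total
    memory per worker in Spark UI   : 4GB, not 7G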

Can someone explain the relationship among the numbers I see here?

Thanks.
