Hi all,
I have a Python Spark application that I'm running with spark-submit in
yarn-cluster mode.
If I run ps -aux | grep <application name> on the submitter node, I can
see the client process that submitted the application, and it typically uses
around 300-600 MB of memory (%MEM around 1.0-2.0 on a node with 30 GB of RAM).

Is there anything I can do to make this smaller? Also, as far as I know,
in yarn-cluster mode the client does nothing after the application is
launched, so what is this memory actually used for?

Thank you,
Nisrina.
