Hi all,

I have a question about reservations in YARN when using the Fair Scheduler. I have set up a small cluster: 3 nodes with 8GB RAM and 4 vcores each. I submitted a single Spark job (SparkPi with 100000 iterations, to be exact) and the web UI reports all memory (24GB) as used, but it also marks 6GB of memory as reserved. If I understand the docs correctly, reservations are made on existing free resources (which currently do not satisfy the application's needs), not on resources that will become available in the future. So if 6GB is marked as reserved, I would expect used memory to be 18GB rather than 24GB.
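In case it helps to reproduce the numbers: the same figures shown in the web UI can also be read from the ResourceManager REST API (`/ws/v1/cluster/metrics` in Hadoop 2.x). Below is a small sketch that interprets the relevant fields; the sample payload is made up to mirror what I see, it is not a real response from my cluster:

```python
# Sketch: interpret the clusterMetrics payload returned by
# GET http://<rm-host>:8088/ws/v1/cluster/metrics (Hadoop 2.x).
# Field names match the documented response; the numbers below are
# assumed, chosen to match the web UI figures described above.

def summarize(cluster_metrics):
    """Convert the MB fields of a clusterMetrics payload to GB."""
    to_gb = lambda mb: mb / 1024.0
    return {
        "total_gb":     to_gb(cluster_metrics["totalMB"]),
        "used_gb":      to_gb(cluster_metrics["allocatedMB"]),
        "reserved_gb":  to_gb(cluster_metrics["reservedMB"]),
        "available_gb": to_gb(cluster_metrics["availableMB"]),
    }

# Hypothetical payload matching what the UI shows: 24GB used, 6GB reserved.
sample = {"totalMB": 24576, "allocatedMB": 24576,
          "reservedMB": 6144, "availableMB": 0}
print(summarize(sample))
```

Note that in this payload used + reserved (30GB) exceeds the cluster total (24GB), which is exactly what puzzles me.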

Could anyone shed some light on how reservations actually work in YARN? How is it possible that all memory is marked as used while 6GB is still marked as reserved?

A few details about the cluster:
OS: CentOS 7.3
Java: 1.8
Hadoop: 2.6.5 (configured with external shuffle service)
Spark: 1.6.2 (configured with dynamic allocation)
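For completeness, dynamic allocation and the external shuffle service were enabled with the standard properties; roughly the following (a sketch, not a dump of my actual config files):

```properties
# spark-defaults.conf (standard Spark 1.6 property names)
spark.dynamicAllocation.enabled   true
spark.shuffle.service.enabled     true

# yarn-site.xml (standard Hadoop 2.x names for registering the shuffle service)
# yarn.nodemanager.aux-services                      mapreduce_shuffle,spark_shuffle
# yarn.nodemanager.aux-services.spark_shuffle.class  org.apache.spark.network.yarn.YarnShuffleService
```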

Regards,
Zbyszek


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
