I have a machine with 8GB of total memory, on which other applications are
also installed.

The Spark application must run one driver and two jobs at a time. I have
configured 8 cores in total.
The machine (without Spark) has 4GB of free RAM (the other half of the RAM
is used by the other applications).

So I have configured one worker with a total of 2800MB of RAM. The driver
is limited to 512MB (2 cores), and each executor gets 762MB.
The driver launches the driver process plus an always-on Spark Streaming
job, occupying 512MB + 762MB (4 cores in total).
The other jobs use 762MB each, so once the whole application is started and
the two jobs (and the driver) are up, I should be consuming roughly the
whole 2.8GB of worker memory (512MB + 3 x 762MB = 2798MB).
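
For reference, this is roughly how the limits are set up (a sketch, not my
exact code: the master URL and application name below are placeholders, and
in practice the driver memory is passed at launch time; the worker itself is
capped via SPARK_WORKER_MEMORY=2800m in conf/spark-env.sh):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setMaster("spark://<master-host>:7077") // placeholder standalone master URL
      .setAppName("streaming-app")             // placeholder application name
      .set("spark.driver.memory", "512m")      // driver limited to 512MB
      .set("spark.driver.cores", "2")          // 2 cores for the driver
      .set("spark.executor.memory", "762m")    // 762MB per executor
      .set("spark.cores.max", "8")             // 8 cores in total for the app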

Now, about the free RAM: I said I have circa 4GB free, so I would expect
4 - 2.8 = 1.2GB of RAM to remain free.
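
In other words, the budget I expect looks like this (values in MB, just
restating the numbers above):

    val driver    = 512
    val executors = 3 * 762            // streaming job + the 2 other jobs
    val used      = driver + executors // = 2798, roughly the 2800MB worker cap
    val expected  = 4096 - used        // = 1298, i.e. about 1.2GB expected free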
When the jobs start, however, I can see that the free memory during
execution drops to around 200MB.


Why this behaviour? Why is Spark using practically all the available RAM if
I use only one worker with a 2.8GB limit in total?


