Each executor splits its memory between storage for cached RDDs and
execution work like shuffling. The totals you see (8.6 GB per node,
95.5 GB summed across all executors) are the memory reserved for
storing RDDs, not the full executor memory. That portion defaults to
spark.storage.memoryFraction (0.6) times spark.storage.safetyFraction
(0.9), so 16 GB * 0.6 * 0.9 comes out to roughly 8.6 GB per executor,
which matches what you're seeing. The "Used" numbers are how much of
that storage space currently holds cached blocks.
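
If you need a different split, the fraction can be set on the
SparkConf before the context is created. A minimal sketch, assuming
the Spark 1.x memory model where spark.storage.memoryFraction still
applies (the app name and memory size here are just placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Reserve half of the heap for cached RDDs instead of the 0.6
    // default; the safety fraction (0.9) is still applied on top, so
    // a 16 GB executor would show about 16 * 0.5 * 0.9 = 7.2 GB of
    // storage memory in the UI.
    val conf = new SparkConf()
      .setAppName("mllib-job")                    // placeholder name
      .set("spark.executor.memory", "16g")
      .set("spark.storage.memoryFraction", "0.5")

    val sc = new SparkContext(conf)

Note that raising the storage fraction leaves less room for shuffle
and aggregation buffers, so it's a trade-off rather than a free win.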

On Fri, Aug 29, 2014 at 2:32 AM, SK <skrishna...@gmail.com> wrote:
> Hi,
>
> I am using a cluster where each node has 16GB (this is the executor memory).
> After I complete an MLlib job, the executor tab shows the following:
>
> Memory: 142.6 KB Used (95.5 GB Total)
>
> and individual worker nodes have the Memory Used values as 17.3 KB / 8.6 GB
> (this is different for different nodes). What does the second number signify
> (i.e.  8.6 GB and 95.5 GB)? If 17.3 KB was used out of the total memory of
> the node, should it not be 17.3 KB/16 GB?
>
> thanks

