Hi,

As I understand it, by default Spark reserves a fraction of the executor memory
(60%) for RDD caching. So if there is no explicit caching in the
code (e.g. rdd.cache()), or if we persist an RDD with
StorageLevel.DISK_ONLY, is this part of memory wasted? Does Spark allocate
the RDD cache memory dynamically? Or does Spark automatically cache RDDs
when it can?
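
To make the question concrete, here is a minimal sketch of the two cases I
have in mind. I'm assuming the fraction I'm referring to is
spark.storage.memoryFraction (default 0.6), and the input path below is just
a placeholder:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object CacheFractionQuestion {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cache-fraction-question")
      // the default fraction of executor heap set aside for cached RDD blocks
      .set("spark.storage.memoryFraction", "0.6")
    val sc = new SparkContext(conf)

    // Case 1: no caching at all -- does the reserved 60% just sit idle?
    val uncached = sc.textFile("hdfs:///some/input").map(_.length)
    uncached.count()

    // Case 2: persisted with DISK_ONLY -- blocks go to disk, so the
    // in-memory block store is not used for them either.
    val diskOnly = sc.textFile("hdfs:///some/input").persist(StorageLevel.DISK_ONLY)
    diskOnly.count()

    sc.stop()
  }
}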

I've posted this question on the user list but got no response there, so I'm
trying the dev list. Sorry for the spam.

Thanks.

-- 
*JU Han*

Data Engineer @ Botify.com

+33 0619608888
