As far as I know, if you are talking about RDD.cache(), the answer is that each
executor caches only the partitions it computes, not the full RDD.
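A minimal sketch of what that looks like in practice (assuming a running SparkContext `sc` and a hypothetical input path; cache() is lazy, so nothing is stored until an action runs):

```scala
// cache() only marks the RDD for persistence; no data is stored yet.
val rdd = sc.textFile("hdfs:///path/to/data.txt").cache()

// The first action triggers computation. Each executor materializes and
// caches only the partitions assigned to its own tasks -- an executor that
// never runs a task for a given partition never holds that partition.
val nonEmpty = rdd.filter(_.nonEmpty).count()

// Subsequent actions reuse whatever partitions were cached locally.
val total = rdd.count()
```

You can confirm this in the "Storage" tab of the Spark UI, which reports cached partitions per executor rather than whole RDD copies.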

Cheers,
-z

________________________________________
From: zwithouta <[email protected]>
Sent: Tuesday, April 14, 2020 18:28
To: [email protected]
Subject: [Spark Core]: Does an executor only cache the partitions it requires 
for its computations or always the full RDD?

Provided caching is activated for an RDD, does each executor in a cluster cache
only the partitions it requires for its computations, or always the full RDD?



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

---------------------------------------------------------------------
To unsubscribe e-mail: [email protected]

