You don't need to. The memory is not statically allocated to the RDD cache; spark.storage.memoryFraction is just an upper limit. If the RDD cache does not use up that memory, it remains available for other usage, except for the portions also governed by other memoryFraction settings, e.g. spark.shuffle.memoryFraction, which likewise sets an upper limit.
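For reference, a minimal sketch (Scala, Spark 1.x-era API) of how these upper limits are set when building the SparkConf, fixed for the lifetime of the application. The app name and fraction values are illustrative assumptions, not recommendations:

    import org.apache.spark.{SparkConf, SparkContext}

    // Both fractions are upper limits carved out of the executor heap.
    // They are read once at startup and cannot be resized afterwards.
    val conf = new SparkConf()
      .setAppName("MemoryFractionSketch")           // hypothetical app name
      .set("spark.storage.memoryFraction", "0.5")   // cap for cached RDDs
      .set("spark.shuffle.memoryFraction", "0.3")   // cap for shuffle aggregation

    val sc = new SparkContext(conf)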
Best Regards,
Raymond Liu

From: 牛兆捷 [mailto:[email protected]]
Sent: Thursday, September 04, 2014 2:27 PM
To: Patrick Wendell
Cc: [email protected]; [email protected]
Subject: Re: memory size for caching RDD

But is it possible to make it resizable? When we don't have many RDDs to cache, we can give some memory to others.

2014-09-04 13:45 GMT+08:00 Patrick Wendell <[email protected]>:
Changing this is not supported; it is immutable, similar to other Spark configuration settings.

On Wed, Sep 3, 2014 at 8:13 PM, 牛兆捷 <[email protected]> wrote:
> Dear all:
>
> Spark uses memory to cache RDDs, and the memory size is specified by
> "spark.storage.memoryFraction".
>
> Once the Executor starts, does Spark support adjusting/resizing the memory size
> of this part dynamically?
>
> Thanks.
>
> --
> *Regards,*
> *Zhaojie*

--
Regards,
Zhaojie
