Hi,

The docs for spark.storage.memoryFraction say: "Fraction of Java heap to
use for Spark's memory cache. This should not be larger than the 'old'
generation of objects in the JVM, which by default is given 2/3 of the
heap, but you can increase it if you configure your own old generation
size."
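
(As an aside, here is a minimal sketch of how one might grow the old
generation; -XX:NewRatio is a standard HotSpot flag, but I'm assuming a
Spark version that supports the spark.executor.extraJavaOptions property:)

    import org.apache.spark.SparkConf

    // -XX:NewRatio=3 gives the old generation 3/4 of the heap instead of
    // the default 2/3 (-XX:NewRatio=2), so a larger memoryFraction would
    // then be safe per the docs above.
    val conf = new SparkConf()
      .set("spark.executor.extraJavaOptions", "-XX:NewRatio=3") // property name assumed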

If we are not caching any RDDs, does that mean only (1 - memoryFraction)
of the heap is available for "normal" JVM objects? Would it make sense,
then, to set memoryFraction to 0?
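
For example, something along these lines (just a sketch; the app name is
made up, and I'm assuming memoryFraction is set through SparkConf as
usual):

    import org.apache.spark.{SparkConf, SparkContext}

    // No RDDs are persisted in this job, so shrink (or zero out?) the
    // storage cache to leave the heap for ordinary JVM objects.
    // The documented default is 0.6.
    val conf = new SparkConf()
      .setAppName("no-cache-job") // hypothetical app name
      .set("spark.storage.memoryFraction", "0")
    val sc = new SparkContext(conf)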

Thanks,

Grega
--
*Grega Kešpret*
Analytics engineer

Celtra — Rich Media Mobile Advertising
celtra.com <http://www.celtra.com/> |
@celtramobile <http://www.twitter.com/celtramobile>
