These are two parameters (spark.storage.memoryFraction and
spark.shuffle.memoryFraction), and their default values are 0.6 and 0.2,
which together account for 80%. I am wondering where the remaining 20%
goes. Is it reserved for the JVM's other memory requirements?
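
For context, here is roughly how I picture the two legacy knobs. The
SparkConf calls below are standard API, but I am only restating the
documented 1.5.x defaults, and the app name is made up:

    import org.apache.spark.SparkConf

    // Legacy (pre-unified) memory knobs, set explicitly only to make
    // the documented defaults visible
    val conf = new SparkConf()
      .setAppName("memory-fraction-question")      // hypothetical name
      .set("spark.storage.memoryFraction", "0.6")  // heap fraction for cached/persisted RDDs
      .set("spark.shuffle.memoryFraction", "0.2")  // heap fraction for shuffle/aggregation buffers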

If yes, then what is spark.memory.fraction used for?

My understanding is that if we have 10 GB of memory per executor, then
storage and shuffle/execution together will have 7.5 GB.

Now, out of this 7.5 GB, 60% is for storage and 20% is for
shuffle/execution; where does the other 20% go?
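
To make my arithmetic concrete, here is the breakdown I am imagining; the
10 GB heap and the 75% usable fraction are assumptions on my part, not
something I have verified:

    // Worked example with my assumed numbers
    val executorHeapGB = 10.0
    val usableGB       = executorHeapGB * 0.75            // 7.5 GB for storage + execution
    val storageGB      = usableGB * 0.6                   // 4.5 GB if 60% is storage
    val shuffleGB      = usableGB * 0.2                   // 1.5 GB if 20% is shuffle/execution
    val unaccountedGB  = usableGB - storageGB - shuffleGB // 1.5 GB -- the 20% I cannot place
    println(f"storage=$storageGB%.1f GB, shuffle=$shuffleGB%.1f GB, unaccounted=$unaccountedGB%.1f GB")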

I am using Spark 1.5.1, and I am not even sure that spark.memory.fraction
is used in that version.

Any help will be appreciated.

Thanks
