In YarnAllocator I see that memoryOverhead is by default set to:
math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN)

This does not take spark.memory.offHeap.size into account, I think. Should it?
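
For context (my own numbers, assuming the default factor of 0.10 and minimum of 384 MB, and that the YARN container request is executorMemory + memoryOverhead): with a 10 GB executor and 4 GB of off-heap memory,

memoryOverhead = math.max((0.10 * 10240).toInt, 384) = 1024 MB

so the container is sized at 10240 + 1024 = 11264 MB, with nothing reserved for the 4096 MB of off-heap allocation.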

Something like:

math.max((MEMORY_OVERHEAD_FACTOR * executorMemory + offHeapMemory).toInt, MEMORY_OVERHEAD_MIN)
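
A rough sketch of what I mean, not a tested patch. It assumes executorMemory is in MiB (as in YarnAllocator), that MEMORY_OVERHEAD_FACTOR / MEMORY_OVERHEAD_MIN are the existing constants, and that sparkConf is in scope:

  // Only count off-heap memory when it is actually enabled.
  val offHeapMemoryMb: Int =
    if (sparkConf.getBoolean("spark.memory.offHeap.enabled", false)) {
      sparkConf.getSizeAsMb("spark.memory.offHeap.size", "0").toInt
    } else {
      0
    }

  // Keep the spark.yarn.executor.memoryOverhead override, but fold the
  // off-heap size into the computed default.
  val memoryOverhead: Int = sparkConf.getInt("spark.yarn.executor.memoryOverhead",
    math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt + offHeapMemoryMb,
      MEMORY_OVERHEAD_MIN).toInt)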
