Hi,
Please check whether your OS allows memory overcommit. I suspect this is happening because your OS forbids memory overcommitment and kills a process when overcommitment is detected (the Spark executor is the process chosen to be killed). That is why you receive SIGTERM and the executor fails with that signal.
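On a Linux host, the overcommit policy can be checked as below. This is a hedged sketch of the check suggested above (it assumes a Linux kernel exposing `/proc/sys/vm/overcommit_memory`; the `sysctl` line is an illustrative remedy, commented out because it requires root and should only be applied deliberately):

```shell
# Read the current overcommit policy:
#   0 = heuristic overcommit (the usual default)
#   1 = always allow overcommit
#   2 = never overcommit (strict accounting; large JVM commits can fail)
cat /proc/sys/vm/overcommit_memory

# If the value is 2, the kernel refuses allocations beyond its commit
# limit, which can make the JVM's os::commit_memory call fail with
# errno=12. To switch back to the heuristic policy (requires root):
# sysctl -w vm.overcommit_memory=0
```

With policy 2, the effective limit also depends on `vm.overcommit_ratio` and swap size, so even a machine with free RAM can refuse a large commit.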
Hi,
I saw the following error message in the executor logs:
*Java HotSpot(TM) 64-Bit Server VM warning: INFO:
os::commit_memory(0x000662f0, 520093696, 0) failed; error='Cannot
allocate memory' (errno=12)*
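The warning above means the JVM asked the kernel to commit a 520093696-byte region and was refused with errno 12 (ENOMEM, "Cannot allocate memory"). A quick sanity check of the numbers in the log (plain arithmetic plus a lookup of the errno name, nothing Spark-specific):

```shell
# Size of the failed commit from the warning, converted to MiB:
echo $((520093696 / 1024 / 1024))    # 496 MiB

# errno 12 is ENOMEM ("Cannot allocate memory"):
python3 -c 'import errno, os; print(errno.errorcode[12], os.strerror(12))'
```

So the executor died trying to grab roughly half a gigabyte in one commit, which is consistent with the OS refusing (or the OOM killer reclaiming) memory rather than a Spark-level bug.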
By increasing the RAM of my nodes to 40 GB each, I was able to get rid of the RPC connection failures.