Hi,
Please check whether your OS allows memory overcommit. I suspect this is
caused by your OS forbidding memory overcommitment: the OS kills a process
when overcommitment is detected, and the Spark executor is the process it
chooses to kill. That is why you receive a SIGTERM and the executor fails
with that signal.
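For reference, on Linux the policy is the vm.overcommit_memory sysctl
(0 = heuristic overcommit, 1 = always overcommit, 2 = never overcommit).
A minimal Scala sketch to read it on an executor host; it assumes a Linux
node, since the /proc path does not exist elsewhere:

import scala.io.Source
import java.nio.file.{Files, Paths}

object OvercommitCheck {
  def main(args: Array[String]): Unit = {
    val path = Paths.get("/proc/sys/vm/overcommit_memory")
    if (Files.exists(path)) {
      // 0 = heuristic overcommit (default), 1 = always overcommit,
      // 2 = never overcommit (strict accounting; large JVM commits can
      //     fail with ENOMEM instead of the process being killed)
      val mode = Source.fromFile(path.toFile).mkString.trim
      println(s"vm.overcommit_memory = $mode")
    } else {
      println("No /proc/sys/vm/overcommit_memory; not a Linux host?")
    }
  }
}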
Hi,
I saw the following error message in the executor logs:
*Java HotSpot(TM) 64-Bit Server VM warning: INFO:
os::commit_memory(0x000662f0, 520093696, 0) failed; error='Cannot
allocate memory' (errno=12)*
By increasing the RAM of my nodes to 40 GB each, I was able to get rid of
the RPC connection failures.
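For what it's worth, errno=12 is ENOMEM, and the size the JVM failed to
commit, 520093696 bytes, works out to exactly 496 MiB (520093696 /
1048576 = 496), so the node had run out of committable memory while the
executor was still growing its heap. Adding RAM per node, as above, gives
that commit room to succeed.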
Hi,
I think I have a situation where Spark is silently failing to write data to
my Cassandra table. Let me explain my current situation.
I have a table of around 402 million records with 84 columns. The table
schema is something like this:
*id (text) | datetime (tim
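Since the schema above is cut off, here is a minimal sketch of the kind of
write I mean, using the spark-cassandra-connector DataFrame API; the host,
keyspace, table, and source path are hypothetical placeholders, not my real
values. One way to surface a "silent" failure is to re-read the table and
count, as at the end:

import org.apache.spark.sql.SparkSession

object CassandraWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cassandra-write-sketch")
      .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder
      .getOrCreate()

    // Hypothetical source; my real input has ~402M rows and 84 columns.
    val df = spark.read.parquet("/path/to/source")

    df.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
      .mode("append")
      .save()

    // Re-read and count: a cheap check that rows actually landed.
    val written = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
      .load()
    println(s"rows readable after write: ${written.count()}")
  }
}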