yeachan153 commented on PR #36434:
URL: https://github.com/apache/spark/pull/36434#issuecomment-1463752819

   > Exit code 137 generally refers to out of memory at the container level, 
can you increase the overhead and see if it still occurs for you?
   
   I can't think of why it would run out of memory; all it's doing is grabbing 
the process and killing it. All I've done is initialise the Spark session and 
then trigger a node shutdown. I've tried increasing the overhead factor to 0.8 
for the K8s resources and setting the executor memory to 7g, which is already 
excessive, but I'm still getting exit code 137. My metrics also say that the 
most memory I used was ~350 MB.
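   For reference, the overhead and executor-memory settings described above map 
onto spark-submit configs roughly like the sketch below. The master URL, image, 
and application jar are placeholders (not from the original report), and the 
decommissioning flags are assumed from the context of this PR:

```shell
# Hedged sketch of the configuration under test; cluster endpoint,
# container image, and app jar are hypothetical placeholders.
spark-submit \
  --master k8s://https://<gke-cluster-endpoint>:443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<spark-image-with-patched-decom-script> \
  --conf spark.executor.memory=7g \
  --conf spark.kubernetes.memoryOverheadFactor=0.8 \
  --conf spark.decommission.enabled=true \
  --conf spark.storage.decommission.enabled=true \
  local:///opt/spark/examples/<app-jar>
```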
   
   I ran this with Spark 3.3.2 on GKE 1.23.14-gke.1800, with your changes to 
the decom script applied as a patch.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

