HeartSaVioR commented on PR #42567:
URL: https://github.com/apache/spark/pull/42567#issuecomment-1687107008

   I'd still like to understand how this is different from not capping the memory at all. Does capping the memory prevent RocksDB from using memory excessively, or is there no difference between capping with a soft limit and not capping at all?
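   
   To make the distinction concrete, here is a minimal sketch of soft vs. hard (strict) capacity limits, loosely modeled on RocksDB's `LRUCache` `strict_capacity_limit` option. The class and names are illustrative only, not Spark or RocksDB APIs:
   
   ```python
   class BoundedCache:
       """Toy cache tracking only total usage, to contrast limit modes."""
   
       def __init__(self, capacity_bytes, strict=False):
           self.capacity = capacity_bytes
           self.strict = strict
           self.usage = 0
   
       def insert(self, size_bytes):
           if self.strict and self.usage + size_bytes > self.capacity:
               # Hard limit: the insert fails fast, surfacing as a query
               # error rather than unbounded process memory growth.
               raise MemoryError("cache capacity exceeded")
           # Soft limit: the insert succeeds even past capacity, so total
           # process memory can still grow and may end in an OOM kill.
           self.usage += size_bytes
   
   
   soft = BoundedCache(100, strict=False)
   soft.insert(150)          # succeeds; usage (150) exceeds capacity (100)
   
   hard = BoundedCache(100, strict=True)
   try:
       hard.insert(150)      # raises instead of exceeding the cap
   except MemoryError:
       pass
   ```
   
   Under the soft limit the cap is advisory, which is why it may behave much like no cap at all from the OS's point of view.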
   
   Also, there is another aspect to think about - an OOM kills the executor, which could affect all stateful, stateless, and batch queries, whereas this error will only affect stateful queries. If people intend to set the limit on RocksDB memory usage with this fact in mind, soft limiting would break that intention, although they may still need to restart the cluster, or at least the executor, to apply a new RocksDB memory limit. It looks very tricky for users to adjust when the error happens...
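   
   For context, the per-executor bound being discussed is the one users opt into via the bounded-memory settings of the RocksDB state store provider. A sketch of how it is typically enabled (setting names per the Structured Streaming guide; verify against your Spark version, and note a restart is needed for the new limit to take effect):
   
   ```shell
   # Enable the RocksDB state store provider with a bounded per-executor
   # memory budget for all state store instances on that executor.
   spark-submit \
     --conf spark.sql.streaming.stateStore.providerClass=org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider \
     --conf spark.sql.streaming.stateStore.rocksdb.boundedMemoryUsage=true \
     --conf spark.sql.streaming.stateStore.rocksdb.maxMemoryUsageMB=500 \
     app.jar
   ```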
   
   Ideally we would need to rebalance the state when the memory limit is hit, but that is probably not happening in the short term.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

