nickva commented on issue #5879:
URL: https://github.com/apache/couchdb/issues/5879#issuecomment-3851706918

   Thanks for responding @oisheeaa.
   
   From the evidence shared, it doesn't seem like there is any specific error in the logs, for example a process repeatedly crashing. The OOM messages and `mem3_distribution : node couchdb@primary down, reason: net_tick_timeout` come from nodes disconnecting or being killed by the OOM killer.
   
   > During the 3.5.1 rolling upgrade attempt, the cluster fully dropped (both 
primary + secondary target groups went unhealthy).
   
   During the upgrade, before taking a node down, do you put it in maintenance mode? That's what we usually do: enable maintenance mode on the node, then wait some time (a few minutes) for all the connections to drain before upgrading (the load balancer is automatically set up to exclude the node from routing based on the output of the `/_up` endpoint).
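   Roughly, that step could look like the sketch below (just an illustration, not a drop-in script: the URL, credentials, and polling interval are placeholders for your setup):

```python
# Rough sketch: put the local node in maintenance mode, then wait until /_up
# reports 404 so the load balancer stops routing to it. URL/credentials are
# placeholders.
import base64
import json
import time
import urllib.error
import urllib.request

BASE = "http://localhost:5984"
AUTH = "Basic " + base64.b64encode(b"admin:password").decode()

def req(method, path, body=None):
    r = urllib.request.Request(BASE + path, data=body, method=method,
                               headers={"Authorization": AUTH,
                                        "Content-Type": "application/json"})
    return urllib.request.urlopen(r)

# Flip the node into maintenance mode; config values are JSON-encoded strings.
req("PUT", "/_node/_local/_config/couchdb/maintenance_mode",
    json.dumps("true").encode())

# Once /_up starts returning 404 the load balancer should take the node out of
# rotation; after that, wait a few minutes for in-flight connections to drain.
while True:
    try:
        req("GET", "/_up")
    except urllib.error.HTTPError as err:
        if err.code == 404:
            break
    time.sleep(5)
```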
   
   > /_node/*/_system + queues / process counts
   
   When it gets close to OOM-ing, what does the memory breakdown look like (how many bytes are used by processes, binaries, etc.)?
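   Something like this (again a sketch with placeholder URL/credentials) would dump the Erlang memory breakdown from the `_system` endpoint:

```python
# Sketch: print the memory breakdown from /_node/_local/_system, largest first.
import base64
import json
import urllib.request

auth = "Basic " + base64.b64encode(b"admin:password").decode()
req = urllib.request.Request("http://localhost:5984/_node/_local/_system",
                             headers={"Authorization": auth})
stats = json.load(urllib.request.urlopen(req))

# "memory" mirrors erlang:memory(): total, processes, binary, ets, atom, ...
for key, value in sorted(stats["memory"].items(), key=lambda kv: -kv[1]):
    print(f"{key:>16}: {value / 1024 / 1024:8.1f} MiB")
```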
   
   I wonder if you could share some of your vm.args or config settings? Especially if you have any custom settings (changed scheduling, busy waiting, or allocator settings). On the config side, do you have any custom limits set, and what is max_dbs_open set to?
   
   In general, it's hard to tell what may be using the memory. One way to try to control it might be to lower max_dbs_open a bit (but if you lower it too much, you might see all_dbs_active errors in the logs).
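   For example, lowering it at runtime on a node could look like the sketch below (placeholder URL/credentials, and 350 is just an arbitrary value to illustrate "a bit lower" than the 500 default, not a recommendation):

```python
# Sketch: lower [couchdb] max_dbs_open on the local node via the _config API.
import base64
import json
import urllib.request

url = "http://localhost:5984/_node/_local/_config/couchdb/max_dbs_open"
auth = "Basic " + base64.b64encode(b"admin:password").decode()

req = urllib.request.Request(url, data=json.dumps("350").encode(), method="PUT",
                             headers={"Authorization": auth,
                                      "Content-Type": "application/json"})
# The response body is the previous value, JSON-encoded.
print(urllib.request.urlopen(req).read().decode())
```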
   
   I have never tried the settings discussed in this thread, but if nothing else works it may be something to experiment with (especially if you have a test/staging environment):

   https://elixirforum.com/t/elixir-erlang-docker-containers-ram-usage-on-different-oss-kernels/57251/12
   
   

