That's good to know, although "enough" is not very specific :)  I'm not
finding any clear information in the docs about memory usage & allocation.

Can you provide some info about which details drive the memory
requirements, and by what amounts?  My guesses (with a rough arithmetic
sketch below the list):

  * configured replications (max_jobs * worker_batch_size * <record size>)
  * per unused db (small?)
  * per open db (x internal cache/buffer?)
  * per open index (x internal cache/buffer ?)
  * database_compaction.doc_buffer_size (x sum(smoosh.<channel>.capacity)
as an upper bound)
  * other important factors?
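
To make that concrete, here is the back-of-envelope arithmetic I have in
mind, written out in Python.  It is only a sketch of my own guesses above,
not a documented formula, and every number is a placeholder to be replaced
with values from your own config and data:

    # Sketch of the two terms from my list above.  All values are
    # placeholders; substitute the settings from your own ini files and a
    # realistic average document size.
    max_jobs = 500                    # [replicator] max_jobs
    worker_batch_size = 500           # [replicator] worker_batch_size
    avg_record_size = 4 * 1024        # bytes per document (a guess)

    replication_bytes = max_jobs * worker_batch_size * avg_record_size

    doc_buffer_size = 524288          # [database_compaction] doc_buffer_size
    smoosh_capacities = [9999, 9999]  # smoosh.<channel>.capacity per channel

    compaction_bytes = doc_buffer_size * sum(smoosh_capacities)

    print("replication term: %.1f GiB" % (replication_bytes / 2**30))
    print("compaction term:  %.1f GiB" % (compaction_bytes / 2**30))

(If the capacities really do multiply in like that, the second term gets
very large very fast, which is part of why I'm asking.)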

Any Erlang settings that are memory-specific?
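
Related, on the inspection side: I am assuming the _system endpoint under
/_node/_local exposes the Erlang VM's memory counters, so something like
this should show where the VM thinks memory is going (the URL and
credentials are placeholders):

    import requests

    BASE = "http://admin:password@localhost:5984"   # placeholder credentials

    # GET /_node/_local/_system returns, among other stats, a "memory"
    # object with the VM's allocation counters (processes, binary, ets, ...).
    mem = requests.get(BASE + "/_node/_local/_system").json()["memory"]
    for key, value in sorted(mem.items()):
        print("%15s: %10.1f MiB" % (key, value / 2**20))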

I expect leaving lots of free memory available for the OS to do its own
read buffering is also a good idea.  Deciding on what "lots" means might
involve a calculation like the following (rough sketch after the list):

  * count active databases + active indexes in those databases
  * count the sizes of the hottest 80% of the btrees for all of them (how?
some func(id-size, page-size, record-count); something similar for indexes)
  * ??
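
The crudest approximation I can think of for the first two bullets is to
walk _all_dbs and sum each database's sizes.active, ignoring indexes and
the 80% weighting for now; roughly:

    from urllib.parse import quote

    import requests

    BASE = "http://admin:password@localhost:5984"   # placeholder credentials

    total_active = 0
    for db in requests.get(BASE + "/_all_dbs").json():
        # GET /{db} reports sizes.active, i.e. the live data in the file.
        info = requests.get(BASE + "/" + quote(db, safe="")).json()
        total_active += info.get("sizes", {}).get("active", 0)

    print("sum of sizes.active: %.1f GiB" % (total_active / 2**30))

Whether sizes.active is even the right thing to sum here is part of what
I am asking.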


On Mon, Jul 13, 2020 at 9:52 PM Joan Touzet <[email protected]> wrote:

> This is coming from the Erlang VM and telling you that you're nearly out
> of available memory. CouchDB doesn't react well to running out of RAM;
> it usually crashes.
>
> While this warning will be suppressed in future versions of CouchDB, you
> should probably check that you have enough RAM in your CouchDB
> server/container/VM/etc.
>
> On 2020-07-13 6:23 p.m., Arturo Mardones wrote:
> > Hello at All!
> >
> > I'm getting this message very often
> >
> > [info] 2020-07-13T21:19:09.240457Z [email protected] <0.56.0> --------
> > alarm_handler: {set,{system_memory_high_watermark,[]}}
> >
> > I've reviewed some older mails, and they mention that it is not
> > important, and that it may even be related to the client browser cache?
> >
> > Can anyone give me a link, or shed some light on whether I can really
> > disregard this message, and what it actually means?
> >
> > Thanks!!!
> >
> > Arturo.
> >
>
