On 2020-01-03 13:01, Amit Kapila wrote:

2020-01-02 19:51:05.687 CET [24138:3] FATAL:  insufficient file
descriptors available to start server process
2020-01-02 19:51:05.687 CET [24138:4] DETAIL:  System allows 19, we
need at least 20.
2020-01-02 19:51:05.687 CET [24138:5] LOG:  database system is shut down

Here, I think it is clear that the failure happens because we are
setting the value of max_files_per_process to 26, which is low for this
machine.  It seems to me that the reason it is failing is that, before
reaching set_max_safe_fds, it already has seven files open.  Now, on my
CentOS system, the value of already_open is 3, 6 and 6 for HEAD, 12 and
10 respectively.  We can easily see the number of already-open files by
changing the level of the elog message in set_max_safe_fds from DEBUG2
to LOG.  It is not very clear to me how many files we can expect to be
kept open during startup.  Can the number vary on different setups?
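
For reference, here is a rough sketch of the arithmetic set_max_safe_fds
does (paraphrased from src/backend/storage/file/fd.c as I read it; the
constants NUM_RESERVED_FDS and FD_MINFREE are both 10 there, but please
check your branch):

    count_usable_fds(max_files_per_process, &usable_fds, &already_open);

    /* don't promise more than either limit allows */
    max_safe_fds = Min(usable_fds, max_files_per_process - already_open);

    /* keep some slop for files opened without consulting fd.c */
    max_safe_fds -= NUM_RESERVED_FDS;

    if (max_safe_fds < FD_MINFREE)
        ereport(FATAL,
                errmsg("insufficient file descriptors available to start server process"),
                errdetail("System allows %d, we need at least %d.",
                          max_safe_fds + NUM_RESERVED_FDS,
                          FD_MINFREE + NUM_RESERVED_FDS));

With max_files_per_process = 26 and already_open = 7 that works out to
max_safe_fds = 26 - 7 - 10 = 9, so the message reports 9 + 10 = 19
against the required 10 + 10 = 20, which matches the FATAL above.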

Hm, where does it get the limit from?  Is it something we set?

Why is this machine different from everybody else when it comes to this limit?

ulimit -a says:

$ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         262144
stack(kbytes)        4096
lockedmem(kbytes)    672036
memory(kbytes)       2016108
nofiles(descriptors) 1024
processes            1024
threads              1024
vmemory(kbytes)      unlimited
sbsize(bytes)        unlimited

Is there any configuration setting I could change on the machine to increase this limit?

/Mikael

