On Fri, Jan 3, 2020 at 6:34 PM Mikael Kjellström
<mikael.kjellst...@mksoft.nu> wrote:
>
> On 2020-01-03 13:01, Amit Kapila wrote:
>
> > 2020-01-02 19:51:05.687 CET [24138:3] FATAL:  insufficient file
> > descriptors available to start server process
> > 2020-01-02 19:51:05.687 CET [24138:4] DETAIL:  System allows 19, we
> > need at least 20.
> > 2020-01-02 19:51:05.687 CET [24138:5] LOG:  database system is shut down
> >
> > Here, I think it is clear that the failure happens because we are
> > setting max_files_per_process to 26, which is too low for this
> > machine.  It seems to me that it is failing because, before reaching
> > set_max_safe_fds, the server already has seven open files.  On my
> > CentOS system, the value of already_open is 3, 6, and 6 for versions
> > HEAD, 12, and 10 respectively.  We can easily see the number of
> > already-open files by changing the error level from DEBUG2 to LOG
> > for the elog message in set_max_safe_fds.  It is not very clear to
> > me how many files we can expect to be kept open during startup.
> > Can the number vary on different setups?
>
> Hm, where does it get the limit from?  Is it something we set?
>
> Why is this machine different from everybody else when it comes to
> this limit?
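To make the arithmetic behind that FATAL message concrete, here is the
relevant logic, condensed from set_max_safe_fds() in
src/backend/storage/file/fd.c (comments are mine; NUM_RESERVED_FDS and
FD_MINFREE are both 10 there):

void
set_max_safe_fds(void)
{
    int         usable_fds;
    int         already_open;

    /*
     * Probe (via dup()) how many more descriptors we can open, capped
     * at max_files_per_process, and infer how many are open already.
     */
    count_usable_fds(max_files_per_process, &usable_fds, &already_open);

    max_safe_fds = Min(usable_fds, max_files_per_process - already_open);

    /* Take off the fds reserved for system() etc. (NUM_RESERVED_FDS) */
    max_safe_fds -= NUM_RESERVED_FDS;

    /* Make sure we still have enough to get by (FD_MINFREE) */
    if (max_safe_fds < FD_MINFREE)
        ereport(FATAL,
                (errcode(ERRCODE_INSUFFICIENT_RESOURCES),
                 errmsg("insufficient file descriptors available to start server process"),
                 errdetail("System allows %d, we need at least %d.",
                           max_safe_fds + NUM_RESERVED_FDS,
                           FD_MINFREE + NUM_RESERVED_FDS)));

    elog(DEBUG2, "max_safe_fds = %d, usable_fds = %d, already_open = %d",
         max_safe_fds, usable_fds, already_open);
}

With max_files_per_process = 26 and already_open = 7, this gives
max_safe_fds = Min(usable_fds, 26 - 7) - 10 = 9 at best, which is below
FD_MINFREE, and the errdetail reports 9 + 10 = 19 allowed versus
10 + 10 = 20 needed: exactly the DETAIL line quoted above.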
The problem we are seeing on this machine is that, I think, seven
files are already open before we reach set_max_safe_fds during
startup.  It is not clear to me why this machine opens extra file(s)
during startup compared to other machines.  This kind of problem
could occur if shared_preload_libraries is set and, via that, some
file is opened and not closed, or if some other configuration causes
the extra file to be opened.  A quick standalone way to check what
the server process inherits is sketched below.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
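For reference, a minimal standalone sketch (plain POSIX calls, not
PostgreSQL internals) that counts the descriptors a process already
has open.  Running it from the same shell or service environment that
starts the postmaster shows roughly what count_usable_fds() will later
report as already_open; note that count_usable_fds() infers the count
from the fd numbers dup() returns, so the two can differ slightly if
the descriptor space has gaps:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    long    max_fd = sysconf(_SC_OPEN_MAX);
    int     already_open = 0;

    for (long fd = 0; fd < max_fd; fd++)
    {
        /* F_GETFD succeeds only if fd refers to an open descriptor */
        if (fcntl((int) fd, F_GETFD) != -1)
            already_open++;
    }

    printf("already open fds: %d\n", already_open);
    return 0;
}

Compile and run it as, say, "cc -o fdcount fdcount.c && ./fdcount"
(the file name is arbitrary); a fresh interactive shell normally
reports 3 (stdin, stdout, stderr), so anything above that hints at
descriptors inherited from the environment.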