On Mon, Feb 24, 2020 at 7:42 PM Thomas Munro <thomas.mu...@gmail.com> wrote:
> On Mon, Feb 24, 2020 at 12:24 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> > On reflection, trying to make ReserveExternalFD serve two different
> > use-cases was pretty messy.  Here's a version that splits it into two
> > functions.  I also took the trouble to fix dblink.
>
> +    /*
> +     * We don't want more than max_safe_fds / 3 FDs to be consumed for
> +     * "external" FDs.
> +     */
> +    if (numExternalFDs < max_safe_fds / 3)

I suppose there may be users who have set ulimit -n high enough to
support an FDW workload that connects to very many hosts; now that
we're doing this accounting, they'll also need to raise
max_files_per_process to avoid the new error.  That doesn't seem like
a problem in itself, but I wonder if the error message should make it
clearer that it's our own limit they've hit here, not the kernel's.
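
For illustration, here's a rough sketch of the kind of check and
message I mean (the variable names loosely follow the quoted patch,
but the function name and the error/hint wording are just a strawman
of mine, not anything committed):

    #include "postgres.h"

    /* assume the counter fd.c maintains is visible here */
    extern int  max_safe_fds;       /* set from max_files_per_process */

    static int  numExternalFDs = 0;

    /*
     * Sketch only: reserve one "external" FD, erroring out once a third
     * of max_safe_fds is already in external use, and point the user at
     * max_files_per_process rather than leaving them to blame ulimit -n.
     */
    void
    AcquireExternalFDSketch(void)
    {
        if (numExternalFDs >= max_safe_fds / 3)
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_RESOURCES),
                     errmsg("could not reserve file descriptor for external use"),
                     errhint("This is PostgreSQL's own limit; raise max_files_per_process (and \"ulimit -n\" if necessary).")));

        numExternalFDs++;
    }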

