On Feb 13, 2007, at 12:15 PM, Tom Lane wrote:
> Interesting. So accept() fails because it can't allocate an FD, which
> means that the select condition isn't cleared, so we keep retrying
> forever. I don't see what else we could do though. Having the
> postmaster abort on what might well be a transient condition doesn't
> sound like a hot idea. We could possibly sleep() a bit before retrying,
> just to not suck 100% CPU, but that doesn't really *fix* anything ...
Well, not only that, but the machine is currently writing to the
postmaster log at the rate of 2-3MB/s. ISTM some kind of sleep
(perhaps growing exponentially to some limit) would be a good idea.
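Something along these lines is what I'm picturing. This is just an
untested sketch of the failure mode plus a capped exponential backoff,
not the actual postmaster code, and the delay numbers are made up:

/*
 * Untested sketch of the failure mode plus a capped backoff; not the
 * actual postmaster code.  select() keeps reporting the listen socket
 * readable, but accept() fails with EMFILE/ENFILE, so without a sleep
 * the loop spins at 100% CPU, logging an error every iteration.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void server_loop(int listen_fd)
{
    long delay_ms = 100;                /* made-up starting delay */
    const long max_delay_ms = 10000;    /* made-up cap */

    for (;;)
    {
        fd_set rfds;

        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        if (select(listen_fd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;                   /* real error handling elided */

        int newfd = accept(listen_fd, NULL, NULL);
        if (newfd < 0)
        {
            int err = errno;            /* save before fprintf clobbers it */

            fprintf(stderr, "accept() failed: %s\n", strerror(err));
            if (err == EMFILE || err == ENFILE)
            {
                /* Out of FDs: retrying immediately can't succeed, so
                 * sleep, doubling the delay up to the cap. */
                struct timespec ts = { delay_ms / 1000,
                                       (delay_ms % 1000) * 1000000L };

                nanosleep(&ts, NULL);
                delay_ms *= 2;
                if (delay_ms > max_delay_ms)
                    delay_ms = max_delay_ms;
            }
            continue;
        }
        delay_ms = 100;                 /* reset once an accept succeeds */
        /* ... hand newfd off to a backend, then ... */
        close(newfd);
    }
}

Resetting the delay after a successful accept() keeps a long outage from
penalizing the first connection once FDs free up again.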
> I've been meaning to bug you about increasing cuckoo's FD limit anyway;
> it keeps failing in the regression tests.
>
>> ulimit is set to 1224 open files, though I seem to keep bumping into
>> that (anyone know what the system-level limit is, or how to change it?)
> On my OS X machine, "ulimit -n unlimited" seems to set the limit to
> 10240 (or so a subsequent "ulimit -a" reports). But you could probably
> fix it using the buildfarm parameter that cuts the number of concurrent
> regression test runs.
Odd... that works on my MBP (sudo bash; ulimit -n unlimited), and I get
12288. But the same thing doesn't work on cuckoo, which is a G4; the
limit stays at 1224 no matter what. Perhaps that's because I'm setting
maxfiles in launchd.conf.
In any case, I've upped it to a bit over 2k; we'll see what that
does. I find it interesting that aubrac isn't affected by this, since
it's still running with the default of only 256 open files.
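For anyone else chasing this, the limit that matters is the one the
process itself sees, which getrlimit() will report regardless of what
the shell or launchd.conf claims. A throwaway test program, just a
sketch:

/*
 * Sketch: report the FD limit as the process itself sees it, which is
 * what actually matters to the postmaster no matter what the shell or
 * launchd.conf claims.
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
    {
        perror("getrlimit");
        return 1;
    }
    printf("soft: %llu  hard: %llu\n",
           (unsigned long long) rl.rlim_cur,
           (unsigned long long) rl.rlim_max);

    /* A process can raise its own soft limit, but only up to the hard
     * limit; "ulimit -n unlimited" gets clamped the same way, and the
     * kernel may clamp further still. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}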
I'm thinking we might want to change the default value for
max_files_per_process on OS X, or have initdb test it like it does
for other things.
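By "test it" I mean a probe along these lines: grab descriptors until
dup() fails, count them, and let them all go. (IIRC the backend's fd.c
already does a similar probe at startup when applying
max_files_per_process; this sketch is not that code.)

/*
 * Simplified sketch of the kind of probe initdb could run: consume
 * descriptors with dup() until it fails, count how many we got, then
 * release them all.
 */
#include <stdio.h>
#include <unistd.h>

#define MAX_TO_PROBE 4096       /* arbitrary cap for the sketch */

static int count_usable_fds(void)
{
    static int fds[MAX_TO_PROBE];
    int n = 0;

    while (n < MAX_TO_PROBE)
    {
        int fd = dup(0);        /* clone stdin just to use up a slot */

        if (fd < 0)
            break;              /* EMFILE: out of descriptors */
        fds[n++] = fd;
    }
    for (int i = 0; i < n; i++)
        close(fds[i]);
    return n;
}

int main(void)
{
    printf("usable FDs: %d\n", count_usable_fds());
    return 0;
}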
--
Jim Nasby [EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)