Jeff Davis <[EMAIL PROTECTED]> writes:
> On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
>> Sounds to me like you just need to up the total amount of open files
>> allowed by the operating system.
> It looks more like the opposite, here's the docs for
> max_files_per_process:

I think Josh has got the right advice. The manual is just saying that
you can reduce max_files_per_process to avoid the failure, but it's not
making any promises about the performance penalty for doing that.
Apparently Ralph's app needs a working set of between 800 and 1000 open
files to have reasonable performance.

> That is a lot of tables. Maybe a different OS will handle it better?
> Maybe there's some way that you can use fewer connections and then the
> OS could still handle it?

Also, it might be worth rethinking the database structure to reduce the
number of tables. But for a quick fix, increasing the kernel limit seems
like the easiest answer.

			regards, tom lane
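For anyone following along, a rough sketch of what "increasing the kernel
limit" looks like on Linux. The paths and knobs below are Linux-specific
(BSDs use different sysctls), and the numeric values are purely
illustrative, not a recommendation:

```shell
# Linux-specific sketch; values are illustrative only.

# System-wide ceiling on open file handles:
cat /proc/sys/fs/file-max

# Per-process limit inherited by the postmaster from this shell:
ulimit -n

# To raise the system-wide limit (as root), something along the lines of:
#   sysctl -w fs.file-max=100000
#
# And to cap what each backend will try to keep open, in postgresql.conf:
#   max_files_per_process = 1000
```

Note that the total demand is roughly max_connections times
max_files_per_process, so lowering either one reduces pressure on the
kernel limit, at the performance cost discussed above.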