Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-06 Thread KONDO Mitsumasa
(2013/08/05 19:28), Andres Freund wrote: On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote: (2013/08/05 17:14), Amit Langote wrote: So, within the limits of max_files_per_process, the routines of fd.c should not become a bottleneck? It may not become a bottleneck. One FD consumes 160 bytes in

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-06 Thread KONDO Mitsumasa
(2013/08/05 21:23), Tom Lane wrote: Andres Freund and...@2ndquadrant.com writes: ... Also, there are global limits to the number of filehandles that can be simultaneously opened on a system. Yeah. Raising max_files_per_process puts you at serious risk that everything else on the box will

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-06 Thread Andres Freund
On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote: (2013/08/05 21:23), Tom Lane wrote: Andres Freund and...@2ndquadrant.com writes: ... Also, there are global limits to the number of filehandles that can be simultaneously opened on a system. Yeah. Raising max_files_per_process puts

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-06 Thread KONDO Mitsumasa
(2013/08/06 19:33), Andres Freund wrote: On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote: (2013/08/05 21:23), Tom Lane wrote: Andres Freund and...@2ndquadrant.com writes: ... Also, there are global limits to the number of filehandles that can be simultaneously opened on a system. Yeah.

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread KONDO Mitsumasa
Hi Amit, (2013/08/05 15:23), Amit Langote wrote: May the routines in fd.c become a bottleneck with a large number of concurrent connections to the above database, say something like pgbench -j 8 -c 128? Is there any other place I should be paying attention to? What kind of file system did you use?

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Amit Langote
On Mon, Aug 5, 2013 at 5:01 PM, KONDO Mitsumasa kondo.mitsum...@lab.ntt.co.jp wrote: Hi Amit, (2013/08/05 15:23), Amit Langote wrote: May the routines in fd.c become a bottleneck with a large number of concurrent connections to the above database, say something like pgbench -j 8 -c 128? Is there
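
[Editor's note] For context on why fd.c may or may not become a bottleneck here: PostgreSQL keeps a pool of open file descriptors per backend and evicts the least recently used one when the pool is full, so a backend never holds more than max_files_per_process real FDs. The sketch below is a deliberately simplified illustration of that general LRU-capped cache idea; every name in it (FdCacheEntry, fd_cache_open, MAX_OPEN_FDS, the /tmp paths) is invented for this sketch, and it is not PostgreSQL's actual fd.c code.

    /* Simplified illustration of an LRU-capped file-descriptor cache,
     * the general idea behind PostgreSQL's virtual file descriptors.
     * All names here are invented for this sketch. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define MAX_OPEN_FDS 4          /* stand-in for max_files_per_process */

    typedef struct
    {
        char    path[256];
        int     fd;                 /* -1 when the slot is unused */
        time_t  last_used;
    } FdCacheEntry;

    static FdCacheEntry cache[MAX_OPEN_FDS];

    /* Return an open fd for 'path', evicting the least recently used
     * entry if every slot is already occupied. */
    static int
    fd_cache_open(const char *path)
    {
        int     free_slot = -1;
        int     lru_slot = 0;

        for (int i = 0; i < MAX_OPEN_FDS; i++)
        {
            if (cache[i].fd >= 0 && strcmp(cache[i].path, path) == 0)
            {
                cache[i].last_used = time(NULL);
                return cache[i].fd;             /* cache hit */
            }
            if (cache[i].fd < 0)
                free_slot = i;
            else if (cache[i].last_used < cache[lru_slot].last_used)
                lru_slot = i;
        }

        if (free_slot < 0)
        {
            close(cache[lru_slot].fd);          /* evict the LRU entry */
            cache[lru_slot].fd = -1;
            free_slot = lru_slot;
        }

        cache[free_slot].fd = open(path, O_RDWR | O_CREAT, 0600);
        snprintf(cache[free_slot].path, sizeof(cache[free_slot].path), "%s", path);
        cache[free_slot].last_used = time(NULL);
        return cache[free_slot].fd;
    }

    int main(void)
    {
        for (int i = 0; i < MAX_OPEN_FDS; i++)
            cache[i].fd = -1;
        printf("fd for /tmp/seg1: %d\n", fd_cache_open("/tmp/seg1"));
        printf("fd for /tmp/seg1 again (cache hit): %d\n", fd_cache_open("/tmp/seg1"));
        return 0;
    }

The real fd.c additionally remembers enough per-file state to reopen an evicted file transparently, which is why its callers work with "virtual" file descriptors rather than raw FDs.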

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread KONDO Mitsumasa
(2013/08/05 17:14), Amit Langote wrote: So, within the limits of max_files_per_process, the routines of fd.c should not become a bottleneck? It may not become a bottleneck. One FD consumes 160 bytes on a 64-bit system; see the Linux epoll manual page. Regards, -- Mitsumasa KONDO NTT Open Source Software
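
[Editor's note] The per-process ceiling that max_files_per_process has to stay under is the RLIMIT_NOFILE resource limit. A minimal POSIX C sketch (an illustration added here, not part of PostgreSQL or this thread) that prints it:

    /* Minimal sketch: print the per-process open-file limit
     * (RLIMIT_NOFILE), the ceiling a setting such as
     * max_files_per_process must respect. Illustration only. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        {
            perror("getrlimit");
            return 1;
        }

        printf("soft limit: %llu FDs\n", (unsigned long long) rl.rlim_cur);
        printf("hard limit: %llu FDs\n", (unsigned long long) rl.rlim_max);

        /* At roughly 160 bytes of kernel memory per FD on a 64-bit
         * system (the figure cited above from the epoll man page),
         * a few thousand FDs per backend is cheap in memory terms;
         * the scarcer resource is the FD slots themselves. */
        return 0;
    }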

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Andres Freund
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote: (2013/08/05 17:14), Amit Langote wrote: So, within the limits of max_files_per_process, the routines of fd.c should not become a bottleneck? It may not become a bottleneck. One FD consumes 160 bytes on a 64-bit system; see the Linux epoll manual page.

Re: [HACKERS] [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes: ... Also, there are global limits to the number of filehandles that can be simultaneously opened on a system. Yeah. Raising max_files_per_process puts you at serious risk that everything else on the box will start falling over for lack of available FD
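
[Editor's note] The concern here is the kernel-wide limit rather than the per-process one. On Linux, current usage and the ceiling are exposed in /proc/sys/fs/file-nr and /proc/sys/fs/file-max; the sketch below (Linux-specific, added here as an illustration) reports them:

    /* Minimal Linux-only sketch: report system-wide file handle usage
     * from /proc/sys/fs/file-nr (allocated, unused, maximum). */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/fs/file-nr", "r");
        unsigned long allocated, unused, max;

        if (f == NULL)
        {
            perror("fopen /proc/sys/fs/file-nr");
            return 1;
        }
        if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) != 3)
        {
            fprintf(stderr, "unexpected /proc/sys/fs/file-nr format\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("file handles allocated: %lu of %lu\n", allocated, max);

        /* If max_connections * max_files_per_process approaches the
         * system-wide maximum, other processes on the box can start
         * failing to open files (ENFILE), which is the risk described
         * in the message above. */
        return 0;
    }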