Scott, thank you for the advice.

> If you've got one job that needs lots of mem and lot of jobs that
> don't, look at my recommendation to lower work_mem for all the low mem
> requiring jobs. If you can split those heavy lifting jobs out to
> another user, then you can use a pooler like pgbouncer to do admission
> control by limiting that heavy lifter to a few connections at a time.
> The rest will wait in line behind it.

I will decrease this parameter to 32MB, because this DB cluster is for a
web application, so there is no need to run heavyweight queries there.
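To sanity-check that value, here is the rough worst-case arithmetic I am
using (just a sketch; max_connections and the allocations per query are
my assumptions, not measured values):

    # Rough worst case if every backend runs a query with a couple of
    # work_mem-sized allocations at once. All numbers are assumptions:
    max_connections = 500     # placeholder; I am still calculating it
    work_mem_mb = 32          # the value I plan to set
    allocs_per_query = 2      # each sort/hash node may use up to work_mem
    worst_case_gb = max_connections * work_mem_mb * allocs_per_query / 1024
    print(worst_case_gb)      # 31.25 GB on top of shared_buffers

So even 32MB adds up across many connections, which is why limiting the
heavy lifter to a few connections through pgbouncer makes sense to me.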
> You are definitely running your server out of memory then. Can you
> throw say 256G into it? It's usually worth every penny to throw memory
> at the problem. Reducing usage will help a lot for now tho.

Unfortunately no; the most I can grow the memory to is 72GB. If I add
another 32GB to the server, which shared_buffers should I use: 8GB, 2GB,
or 18GB (1/4 of 72GB)?

For now I could set vm.overcommit_ratio=500 or 700, but I think that is
very dangerous, because then all processes can allocate
(RAM + SWAP) * vm.overcommit_ratio / 100, as I understand it?

2013/11/6 Scott Marlowe <scott.marl...@gmail.com>

> Also also, the definitive page for postgres and dirty pages etc is here:
>
> http://www.westnet.com/~gsmith/content/linux-pdflush.htm
>
> Not sure if it's out of date with more modern kernels. Maybe Greg will
> chime in.

Once again, thank you very much for the link; I have read it and the
graph. About max_connections I will reply later; I am calculating it now.

--
Best regards,
Селявка Евгений
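P.S. The arithmetic behind the 18GB option is just the 1/4-of-RAM rule of
thumb (a sketch; 72GB is this server's size after the planned upgrade):

    # "shared_buffers = 1/4 of RAM" rule of thumb for this server
    ram_gb = 72                  # after adding the extra 32GB
    shared_buffers_gb = ram_gb // 4
    print(shared_buffers_gb)     # 18 -> the 18GB option above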
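P.P.S. Re-reading the kernel docs on strict overcommit: with
vm.overcommit_memory=2, the limit shown in /proc/meminfo is
CommitLimit = SwapTotal + RamTotal * overcommit_ratio / 100, not
(RAM + SWAP) * ratio / 100. A quick check (the swap size here is an
assumed example, not this server's real value):

    # Strict-accounting commit limit for the candidate ratio values
    ram_gb, swap_gb = 72, 8      # swap size is an assumed example
    for ratio in (500, 700):
        commit_limit_gb = swap_gb + ram_gb * ratio / 100
        print(ratio, commit_limit_gb)   # 500 -> 368.0, 700 -> 512.0

Either way, that allows committing several times the physical RAM, which
is why it seems dangerous to me.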