On Mon, 21 May 2007, Hannes Dorbath wrote:

We have a fixed set of 16 PG backend processes. Once created, they stay alive forever. OLTP load is distributed over them in an SQF fashion.

As loading a TSearch dictionary takes a few moments, we have a script that connects to each backend on startup and loads the dictionary into RAM by simply calling ts_debug('foo').

The dictionary has a file size of 9.8MB.
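For context, a minimal sketch of such a warm-up script is below. It assumes the psycopg2 driver and a pooler that hands each new connection to a distinct backend; the host, database, and user names are placeholders, not taken from the original post.

# Warm-up sketch (assumptions: psycopg2, a pooler at POOL_HOST/POOL_PORT that
# assigns each new connection to a distinct backend; names are illustrative).
import psycopg2

POOL_HOST = "127.0.0.1"   # hypothetical pooler address
POOL_PORT = 5432
NUM_BACKENDS = 16         # size of the fixed backend pool

conns = []
for _ in range(NUM_BACKENDS):
    # Keep every connection open so all 16 backends get touched once.
    conn = psycopg2.connect(host=POOL_HOST, port=POOL_PORT,
                            dbname="mydb", user="app")
    cur = conn.cursor()
    # Calling ts_debug() forces the backend to load the TSearch dictionary.
    cur.execute("SELECT * FROM ts_debug('foo');")
    cur.fetchall()
    conns.append(conn)

for conn in conns:
    conn.close()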

If we don't run the script, `free' prints:

            total       used       free     shared    buffers     cached
Mem:       4048056     953192    3094864          0          4     359300
-/+ buffers/cache:     593888    3454168

After the script has called ts_debug('foo') on each backend:

            total       used       free     shared    buffers     cached
Mem:       4048056    2374508    1673548          0          4     370340
-/+ buffers/cache:    2004164    2043892


Is it supposed to consume that much memory? That is roughly 1.4 GB more in total (2004164 kB vs. 593888 kB used), or about 88 MB per backend for a 9.8MB dictionary file.

A dictionary is just a program! Are you using custom dictionaries?

What does 'select * from ts_debug('foo');' print?


This is PG 8.2.4 on x86_64.




        Regards,
                Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
