Re: [PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-28 Thread Tom Lane
Chris ctlaj...@gmail.com writes:
 @Tom Lane:
 As I mentioned above I am not doing everything in a single
 transaction. However I do want to try your suggestion regarding
 getting a memory context map. But I'm afraid I don't know how to do
 what you are describing. How can I set the ulimit of postmaster?

Depends on the script you are using to start the postmaster.  One way is
to call ulimit in the startup script right before it invokes the
postmaster.  However, if you have something like

su - postgres -c "postmaster ..."

then I'd try putting it in the postgres user's ~/.profile or
~/.bash_profile instead; the su is likely to reset such things.

 And
 does the postmaster stderr output go to the postgres log file?

Also depends.  Look at the startup script and see where it redirects
postmaster's stderr to.  You might have to modify the script --- some
are known to send stderr to /dev/null :-(

Sorry to be so vague, but different packagers have different ideas
about how to do this.

regards, tom lane



[PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-27 Thread Chris
Hi, I'm having an issue where a postgres process is taking too much
memory when performing many consecutive inserts and updates from a PHP
script (running on the command line). I would like to know what sort
of logging I can turn on to help me determine what is causing memory
to be consumed and not released.

Most PHP scripts are not long-running, so explicitly releasing resources
with the functions provided by the pgsql PHP extension is usually
unnecessary. Since this is a long-running script, however, I have taken
steps to ensure everything is released when it is no longer needed (I am
calling the functions provided, but I don't know if the pgsql extension
is doing the right thing). In spite of this, the longer the script runs
and the more records it processes, the more memory it consumes. It grows
to the point that system memory is exhausted and the machine starts
swapping; at that point I killed the process.

I monitored the memory with top; here are the results. The first
snapshot was taken about 10 seconds after my script started running,
the second about 26 seconds in.

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  DATA COMMAND
17461 postgres  16   0  572m 405m  14m S 20.0 10.7   0:10.65 422m postmaster
17460 root  15   0  136m  14m 4632 S 10.6  0.4   0:06.16  10m php
17462 postgres  15   0  193m  46m 3936 D  3.3  1.2   0:01.77  43m postmaster


  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  DATA COMMAND
17461 postgres  16   0 1196m 980m  17m S 19.0 26.0   0:25.72 1.0g postmaster
17460 root  15   0  136m  14m 4632 R 10.3  0.4   0:14.31  10m php
17462 postgres  16   0  255m 107m 3984 R  3.0  2.9   0:04.19 105m postmaster


If I am indeed doing everything I can to release the resources (and
I'm 95% sure I am) then it looks like the pgsql extension is at fault
here.
Regardless of who/what is at fault, I need to fix it. And to do that I
need to find out what isn't getting released properly. How would I go
about that?

Thanks,
Chris



Re: [PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-27 Thread Ben Chobot
On Feb 27, 2010, at 2:29 PM, Chris wrote:

 Hi, I'm having an issue where a postgres process is taking too much
 memory when performing many consecutive inserts and updates from a PHP

[snip]

In your postgresql.conf file, what are the settings for work_mem and 
shared_buffers?




Re: [PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-27 Thread Craig Ringer

On 28/02/2010 6:29 AM, Chris wrote:


If I am indeed doing everything I can to release the resources (and
I'm 95% sure I am) then it looks like the pgsql extension is at fault
here.


Before assuming some particular thing is at fault, you need to collect 
some information to determine what is actually happening.


What are your queries?

What are the resource-releasing functions you're using, and how?

Can you boil this down to a simple PHP test-case that connects to a 
dummy database and repeats something that causes the backend to grow in 
memory usage? Trying to do this - by progressively cutting things out of 
your test until it stops growing - will help you track down what, 
exactly, is causing the growth.


It'd be helpful if you could also provide some general system 
information, as shown here:


   http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Craig Ringer



Re: [PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-27 Thread Tom Lane
Chris ctlaj...@gmail.com writes:
 Hi, I'm having an issue where a postgres process is taking too much
 memory when performing many consecutive inserts and updates from a PHP
 script (running on the command line). I would like to know what sort
 of logging I can turn on to help me determine what is causing memory
 to be consumed and not released.

Are you doing all these inserts/updates within a single transaction?

If so, I think the odds are very good that what's eating the memory
is the list of pending trigger actions, resulting from either
user-created triggers or foreign-key check triggers.  The best way
of limiting the problem is to commit more often.
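
A minimal sketch of that "commit more often" approach for a PHP loader
loop, assuming the rows come from a CSV file (the connection string,
file name, and table used here are hypothetical, purely for
illustration):

    <?php
    // Hedged sketch: batch many inserts per transaction instead of doing
    // everything in one, so pending trigger / foreign-key event lists
    // cannot grow without bound.
    $conn = pg_connect('dbname=testdb');   // hypothetical connection string
    $csv  = fopen('input.csv', 'r');       // hypothetical input file

    $count = 0;
    pg_query($conn, 'BEGIN');
    while (($rec = fgetcsv($csv)) !== false) {
        $res = pg_query_params($conn,
            'INSERT INTO items (code, name) VALUES ($1, $2)',  // hypothetical table
            array($rec[0], $rec[1]));
        pg_free_result($res);
        if (++$count % 1000 === 0) {        // commit every 1000 records
            pg_query($conn, 'COMMIT');
            pg_query($conn, 'BEGIN');
        }
    }
    pg_query($conn, 'COMMIT');
    fclose($csv);
    pg_close($conn);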

If you want to try to confirm that, what I would do is run the
postmaster under a more restrictive ulimit setting, so that it
runs out of memory sometime before the system starts to swap.
When it does run out of memory, you'll get a memory context map
printed to postmaster stderr, and that will show which context
is eating all the memory.  If it's AfterTriggerEvents then my
guess above is correct --- otherwise post the map for inspection.

regards, tom lane



Re: [PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-27 Thread Chris
On Sat, Feb 27, 2010 at 3:38 PM, Ben Chobot be...@silentmedia.com wrote:
 In your postgresql.conf file, what are the settings for work_mem and
 shared_buffers?

I have not done any tuning on this db yet (it is a dev box). It is
using defaults.
shared_buffers = 32MB
#work_mem = 1MB


I appreciate the several quick responses, and I will work on responding
to them.

@Craig Ringer:
select version() reports:
PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit
The system has 4GB of RAM.
The postgres log currently does not show any useful information. The
only thing in there for today is an "unexpected EOF on client
connection" message, because I killed the process after it started
swapping.

The test input for my PHP script is a CSV file with about 450,000
records in it. The script processes each CSV record in its own
transaction, and on average it executes 2 insert or update statements
per record. I don't think the specific statements executed are relevant
(they are just basic INSERT and UPDATE statements).

I will try to come up with a short script that reproduces the problem.

@Tom Lane:
As I mentioned above I am not doing everything in a single
transaction. However I do want to try your suggestion regarding
getting a memory context map. But I'm afraid I don't know how to do
what you are describing. How can I set the ulimit of postmaster? And
does the postmaster stderr output go to the postgres log file? If not,
where can I find it?

Thanks again,
Chris



Re: [PERFORM] How to troubleshoot high mem usage by postgres?

2010-02-27 Thread Chris
On Sat, Feb 27, 2010 at 3:38 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 Can you boil this down to a simple PHP test-case that connects to a dummy
 database and repeats something that causes the backend to grow in memory
 usage? Trying to do this - by progressively cutting things out of your test
 until it stops growing - will help you track down what, exactly, is causing
 the growth.

Thank you for your suggestion. I have done this, and in doing so I
have also discovered why this problem is occurring.

My application uses a class that abstracts away the db interaction, so
I do not normally use the pg_* functions directly. Every time a
statement was executed, the class created a new named prepared
statement. I wrongly assumed that calling pg_free_result() on the
statement resource would free that prepared statement inside postgres.

I will simply modify the class to use an empty statement name when there
is no need for the statement to be named (which I actually need very
infrequently anyway).
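
For reference, a hedged sketch of the difference (table, column, and
connection string are hypothetical). The first loop mimics the leak:
each iteration prepares a new named statement, which the backend keeps
for the life of the connection, while pg_free_result() only releases the
client-side result. The second loop uses the empty (unnamed) statement
name, which is simply replaced by the next prepare:

    <?php
    // Sketch only -- table, column, and connection string are hypothetical.
    $conn = pg_connect('dbname=testdb');

    // Leaky pattern: a fresh named prepared statement per execution.
    for ($i = 0; $i < 100000; $i++) {
        $name = "stmt_$i";
        pg_free_result(pg_prepare($conn, $name,
            'INSERT INTO items (n) VALUES ($1)'));
        pg_free_result(pg_execute($conn, $name, array($i)));
    }

    // Fix described above: with an empty statement name the backend's
    // single unnamed statement is replaced each time, so nothing
    // accumulates server-side.
    for ($i = 0; $i < 100000; $i++) {
        pg_free_result(pg_prepare($conn, '',
            'INSERT INTO items (n) VALUES ($1)'));
        pg_free_result(pg_execute($conn, '', array($i)));
    }
    pg_close($conn);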

I have attached the script I created to test with, for those who are
interested. The first line of the script has the connection string; I
used a database called testdb. Run it from the command line with:
php -f test3.php

Note my comment in the PHP file:
 "UNCOMMENT THIS LINE AND MEMORY ISSUE IS FIXED"

Thanks for the help everyone.
Chris
attachment: test3.php
