[PERFORM] profiling postgresql queries?
hi

i'm not totally sure i should ask on this mailing list - so if you
think i should better ask someplace else, please let me know.

the problem i have is that specific queries (inserts and updates) take
a long time to run. of course i vacuum analyze frequently, and i also
use explain analyze on queries. the problem is that both the inserts
and updates operate on heavily-triggered tables. and it made me wonder
- is there a way to tell how much of the backend's time was spent on
triggers, index updates and so on? like:

total query time: 1 second
trigger a:        0.50 second
trigger b:        0.25 second
index update:     0.1 second

something like this. is it possible? will it ever be possible?

hubert

--
hubert lubaczewski
Network Operations Center
eo Networks Sp. z o.o.
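For concreteness, a minimal sketch of the kind of heavily-triggered
table being described - all names here are hypothetical, invented for
illustration, and it assumes plpgsql is already installed in the
database (e.g. via createlang plpgsql):

CREATE TABLE orders (id integer, total numeric);
CREATE TABLE order_log (order_id integer, logged_at timestamptz);

-- a row-level trigger fired on every insert into orders; with
-- several of these on a table, most of a statement's time can be
-- spent outside the plan that a released EXPLAIN ANALYZE reports on
CREATE FUNCTION log_order() RETURNS trigger AS '
BEGIN
    INSERT INTO order_log VALUES (NEW.id, now());
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER order_log_trig AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE log_order();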
Re: [PERFORM] profiling postgresql queries?
hubert lubaczewski <[EMAIL PROTECTED]> writes:
> and it made me wonder - is there a way to tell how much of the
> backend's time was spent on triggers, index updates and so on?

In CVS tip, EXPLAIN ANALYZE will break out the time spent in each
trigger.  This is not in any released version, but if you're desperate
you could load up a play server with your data and test.

			regards, tom lane
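A sketch of such a test on a play server, reusing the hypothetical
orders table from the sketch above. Since EXPLAIN ANALYZE actually
executes the statement, wrapping it in a transaction and rolling back
keeps the test repeatable:

BEGIN;
-- the insert really runs (and fires the triggers), so its timings
-- are realistic; the rollback leaves the play data unchanged
EXPLAIN ANALYZE INSERT INTO orders (id, total) VALUES (1, 99.50);
ROLLBACK;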
Re: [PERFORM] profiling postgresql queries?
On Tue, Apr 12, 2005 at 10:18:31AM -0400, Alex Turner wrote:
> Speaking of triggers... Is there any plan to speed up plpgsql
> triggers? Fairly simple crosstable insert triggers seem to slow my
> inserts to a crawl.

plpgsql is quite fast actually. if some triggers slow inserts too
much, i guess you should be able to speed them up with some
performance review of the trigger code.

depesz

--
hubert lubaczewski
Network Operations Center
eo Networks Sp. z o.o.
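One common finding in such a review, sketched here with hypothetical
names: a crosstable trigger's body runs once per inserted row, so any
lookup it performs against the other table needs an index, or every
insert pays for a sequential scan:

CREATE TABLE line_items (order_id integer, amount numeric);
CREATE TABLE order_totals (order_id integer, total numeric);

-- without this index, the trigger's UPDATE below would scan
-- order_totals sequentially for every row inserted into line_items
CREATE INDEX order_totals_order_idx ON order_totals (order_id);

CREATE FUNCTION bump_total() RETURNS trigger AS '
BEGIN
    UPDATE order_totals SET total = total + NEW.amount
        WHERE order_id = NEW.order_id;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER bump_total_trig AFTER INSERT ON line_items
    FOR EACH ROW EXECUTE PROCEDURE bump_total();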
Re: [PERFORM] profiling postgresql queries?
On Tue, Apr 12, 2005 at 12:46:43PM +0200, hubert lubaczewski wrote:
> the problem is that both the inserts and updates operate on
> heavily-triggered tables. and it made me wonder - is there a way to
> tell how much of the backend's time was spent on triggers, index
> updates and so on? like:
> total query time: 1 second
> trigger a: 0.50 second
> trigger b: 0.25 second
> index update: 0.1 second

EXPLAIN ANALYZE in 8.1devel (CVS HEAD) prints a few statistics for
triggers:

EXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;
                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Index Scan using foo_x_idx on foo  (cost=0.00..14.44 rows=10 width=22) (actual time=0.184..0.551 rows=7 loops=1)
   Index Cond: (x = 20)
 Trigger row_trig1: time=1.625 calls=7
 Trigger row_trig2: time=1.346 calls=7
 Trigger stmt_trig1: time=1.436 calls=1
 Total runtime: 9.659 ms
(6 rows)

8.1devel changes frequently (sometimes requiring initdb) and isn't
suitable for production, but if the trigger statistics would be
helpful then you could set up a test server and load a copy of your
database into it.  Just beware that because it's bleeding edge, it
might destroy your data and it might behave differently than released
versions.

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
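On released versions that lack these per-trigger numbers, a rougher
way to confirm a suspect trigger's cost (using the same hypothetical
names as the example above) is to drop it inside a transaction, time
the statement, and roll back - DDL is transactional in PostgreSQL, so
the trigger comes back:

BEGIN;
-- temporarily remove the suspect trigger; the ROLLBACK below
-- undoes the DROP, so nothing is lost
DROP TRIGGER row_trig1 ON foo;
-- compare this Total runtime against a run with the trigger in place
EXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;
ROLLBACK;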
Re: [PERFORM] profiling postgresql queries?
On Tue, Apr 12, 2005 at 08:43:59AM -0600, Michael Fuhr wrote:
> 8.1devel changes frequently (sometimes requiring initdb) and isn't
> suitable for production, but if the trigger statistics would be
> helpful then you could set up a test server and load a copy of your
> database into it.  Just beware that because it's bleeding edge, it
> might destroy your data and it might behave differently than
> released versions.

great. this is exactly what i need. thanks for the hint.

depesz

--
hubert lubaczewski
Network Operations Center
eo Networks Sp. z o.o.