Re: [PERFORM] PostgreSQL 9.2.3 performance problem caused by Exclusive locks
On Thu, 14 Mar 2013 06:53:55 +0200, Jeff Janes jeff.ja...@gmail.com wrote:

> On Friday, March 8, 2013, Emre Hasegeli wrote:
>
>> PostgreSQL writes several of the following log messages during the problem, which I never saw before 9.2.3:
>>
>>     LOG: process 4793 acquired ExclusiveLock on extension of relation 305605 of database 16396 after 2348.675 ms
>
> The key here is not that it is an ExclusiveLock, but rather that it is the relation extension lock. I don't think the extension lock is ever held across user code, or transaction boundaries, or anything like that. It is held over some small IOs. So if it blocked on that for over 2 seconds, you almost surely have some serious IO congestion, and this particular message is probably more a symptom of that congestion than anything else.
>
> You said you rolled back to 9.2.2 and the stalling is still there. Are you still seeing the log message, or are you now seeing silent stalls? Did you roll back all other changes that were made at the same time as the upgrade to 9.2.3 (kernel versions, filesystem changes/versions, etc.)?

I did not try a different kernel or file system. It was not because of 9.2.3; the same problem occurred on both 9.2.2 and 9.2.3. Increasing max_connections made it worse. It lasted almost 15 minutes the last time.

There was not much disk utilization while it was happening; top showed most of the CPU usage in the %sy column, and there was no IO wait. I saw allocstalls increasing in atop. Apart from the ExclusiveLock waits, there were a lot of slow insert statements in the logs.

We were using 64 GiB of shared buffers. RhodiumToad suggested reducing it on the IRC channel. It has not happened since then.

It was a real problem for us. I could not find anything related to it. I cannot let it happen again in the production environment, but I would be happy to share more details if it would help you fix it.
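For anyone hitting the same symptom, the relation extension lock waits described above can be observed directly in pg_locks while the stall is happening. A minimal diagnostic sketch (not part of the original exchange; `locktype = 'extend'` identifies the lock taken while a relation's file is being grown):

```sql
-- List sessions involved in relation extension locks.
-- Rows with granted = false are the ones currently stalled waiting,
-- like the process in the log message above.
SELECT pid,
       relation::regclass AS relation,
       database,
       mode,
       granted
FROM pg_locks
WHERE locktype = 'extend'
ORDER BY granted;
```

If many backends show up waiting here at once, that is consistent with Jeff's diagnosis of IO congestion on relation extension rather than an application-level locking problem.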
-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
[PERFORM] Speed of EXECUTE in PL/pgSQL
Hi,

I have PostgreSQL 9.0.12 on Windows. I have some simple function:

    CREATE OR REPLACE FUNCTION sfunction() RETURNS BOOL AS
    $BODY$
    DECLARE
        q TEXT;
        r RECORD;
    BEGIN
        q := 'SELECT 1 FROM tb_klient LIMIT 0';
        FOR r IN EXECUTE q LOOP
        END LOOP;
        RETURN NULL;
    END;
    $BODY$ LANGUAGE plpgsql;

And some simple query:

    explain analyze
    SELECT sfunction() AS value
    FROM (SELECT 5604913 AS id, 5666 AS idtowmag) AS c
    LEFT OUTER JOIN tg_tm AS tm ON (tm.ttm_idtowmag = c.idtowmag);

When I run this query, EXPLAIN ANALYZE shows:

    Subquery Scan on a  (cost=0.00..0.27 rows=1 width=8) (actual time=24.041..24.042 rows=1 loops=1)
      ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002 rows=1 loops=1)
    Total runtime: 24.068 ms

But when I:

1. change table tb_klient to some other table (but not any other - queries with some tables are still slow), or
2. change "FOR r IN EXECUTE q" to "FOR r IN SELECT 1 FROM tb_klient LIMIT 0", or
3. add "LEFT OUTER JOIN tb_klient AS kl ON (kl.k_idklienta = c.idtowmag)" to the query,

then EXPLAIN ANALYZE of the query becomes:

    Subquery Scan on a  (cost=0.00..0.27 rows=1 width=8) (actual time=1.868..1.869 rows=1 loops=1)
      ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002 rows=1 loops=1)
    Total runtime: 1.894 ms

EXPLAIN ANALYZE of "SELECT 1 FROM tb_klient LIMIT 0" is:

    Limit  (cost=0.00..0.13 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)
      ->  Seq Scan on tb_klient  (cost=0.00..854.23 rows=6823 width=0) (never executed)
    Total runtime: 0.025 ms

tb_klient has 8200 rows and 77 columns.

Why does the speed of executing (or planning) such a very simple query from a string in PL/pgSQL depend on the whole outer query, and why is "FOR r IN EXECUTE q" significantly slower than "FOR r IN query"?

---
Artur Zajac
Re: [PERFORM] Speed of EXECUTE in PL/pgSQL
On Thu, Mar 14, 2013 at 2:22 PM, Artur Zając aza...@ang.com.pl wrote:

> Why does the speed of executing (or planning) such a very simple query
> from a string in PL/pgSQL depend on the whole outer query, and why is
> "FOR r IN EXECUTE q" significantly slower than "FOR r IN query"?

Kinda hard to follow you here, but it looks like you are adding LIMIT 0, which makes the performance comparison unfair.
merlin
Re: [PERFORM] Speed of EXECUTE in PL/pgSQL
On 03/14/2013 03:22 PM, Artur Zając wrote:

> Why does the speed of executing (or planning) such a very simple query
> from a string in PL/pgSQL depend on the whole outer query, and why is
> "FOR r IN EXECUTE q" significantly slower than "FOR r IN query"?

The whole point of EXECUTE is that the statement is re-parsed and re-planned each time it runs. You should expect it to be quite a bit slower, and you should avoid using EXECUTE wherever possible.

cheers

andrew
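To make the point above concrete, here is a minimal side-by-side sketch (hypothetical function names, reusing the poster's tb_klient table): the static form is planned once and the plan is cached for the session, while the EXECUTE form is re-parsed and re-planned on every call.

```sql
-- Static form: PL/pgSQL prepares the plan on first use and caches it.
CREATE OR REPLACE FUNCTION f_static() RETURNS void AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT 1 FROM tb_klient LIMIT 0 LOOP
        NULL;  -- empty body, as in the original function
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Dynamic form: the query string is parsed and planned again on every call.
CREATE OR REPLACE FUNCTION f_dynamic() RETURNS void AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN EXECUTE 'SELECT 1 FROM tb_klient LIMIT 0' LOOP
        NULL;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Timing repeated calls to each should expose the per-call parse/plan
-- overhead of the EXECUTE version:
--   SELECT f_static()  FROM generate_series(1, 1000);
--   SELECT f_dynamic() FROM generate_series(1, 1000);
```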
Re: [PERFORM] New server setup
On 15/03/13 07:54, Bruce Momjian wrote:

> Only use SSDs with a BBU cache, and don't set SSD caches to write-through
> because an SSD needs to cache the write to avoid wearing out the chips
> early, see:
>
> http://momjian.us/main/blogs/pgblog/2012.html#August_3_2012

I'm not convinced about the need for a BBU with SSDs - you *can* use them without one; you just need to make sure of suitable longevity and the presence of (proven) power-off protection (as discussed previously). It is worth noting that using an unproven SSD, or one known to lack power-off protection, with a BBU will *not* save you from massive corruption (or device failure) upon unexpected power loss.

Also, in terms of performance, the faster PCIe SSDs do about as well by themselves as when connected to a RAID card with a BBU. In fact, they will do better in some cases: the faster SSDs can get close to the maximum IOPS many RAID cards can handle, so more than a couple of them plugged into one card will be throttled by its limitations.

Cheers

Mark
Re: [PERFORM] New server setup
On 15/03/13 10:37, Mark Kirkwood wrote:

> Also, in terms of performance, the faster PCIe SSDs do about as well by
> themselves as when connected to a RAID card with a BBU.

Sorry - I meant to say the faster **SAS** SSDs do, since you can't currently plug PCIe SSDs into RAID cards (confusingly, some of the PCIe vendors actually put RAID-card firmware on their boards - the Intel 910, I think).

Cheers

Mark
Re: [PERFORM] New server setup
On Fri, Mar 15, 2013 at 10:37:55AM +1300, Mark Kirkwood wrote:

> I'm not convinced about the need for a BBU with SSDs - you *can* use them
> without one; you just need to make sure of suitable longevity and the
> presence of (proven) power-off protection (as discussed previously). It is
> worth noting that using an unproven SSD, or one known to lack power-off
> protection, with a BBU will *not* save you from massive corruption (or
> device failure) upon unexpected power loss.

I don't think any drive that corrupts on power-off is suitable for a database, but for non-db uses, sure, I guess they are OK, though you have to be pretty money-constrained to like that tradeoff.

--
Bruce Momjian  br...@momjian.us  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ It's impossible for everything to be true. +
Re: [PERFORM] New server setup
On 15/03/13 11:34, Bruce Momjian wrote:

> I don't think any drive that corrupts on power-off is suitable for a
> database, but for non-db uses, sure, I guess they are OK, though you have
> to be pretty money-constrained to like that tradeoff.

Agreed - really *all* SSDs should have capacitor (or equivalent) power-off protection... the fact that it is a feature present on only a handful of drives is... disappointing.