[PERFORM] limit + order by is slow if no rows in result set

2007-02-12 Thread Brian Cox
There are 1.9M rows in ts_defects and indexes on b.ts_id (primary key), d.ts_biz_event_id, and d.ts_occur_date. Both queries below return 0 rows. The 1st runs fast and the 2nd runs 400x slower; the 2nd query differs from the 1st only by the addition of limit 1. Why the big difference in
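
The full queries and their predicates are elided in this digest; a minimal sketch of the pattern being discussed (the WHERE clause below is a hypothetical stand-in) looks like this:

  -- fast: the join/filter runs first and the empty result is sorted
  select * from ts_defects d
    join ts_biz_events b on b.ts_id = d.ts_biz_event_id
   where b.ts_status = 1                         -- hypothetical predicate
   order by d.ts_occur_date;

  -- ~400x slower: with LIMIT 1 the planner may walk an index on the ORDER BY
  -- column expecting to stop at the first match; when no rows qualify it ends
  -- up reading the whole index
  select * from ts_defects d
    join ts_biz_events b on b.ts_id = d.ts_biz_event_id
   where b.ts_status = 1                         -- hypothetical predicate
   order by d.ts_occur_date limit 1;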

Re: [PERFORM] limit + order by is slow if no rows in result set

2007-02-12 Thread Brian Cox
Hi Heikki, Thanks for your response. Please run EXPLAIN ANALYZE on both queries, and send back the results. [EMAIL PROTECTED] jsp]$ PGPASSWORD=quality psql -U admin -d cemdb -h 192.168.1.30 -c 'explain analyze select * from ts_defects d join ts_biz_events b on b.ts_id = d.ts_biz_event_id

[PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from ts_defects; Result: out of memory/Can't allocate size: 32. I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into or queries on this table performed significantly slower. I tried a vacuum analyze, but
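
A sketch of the workaround actually used here (slicing the delete), plus the TRUNCATE alternative; the date cutoff is illustrative, and TRUNCATE only applies if every row is expendable:

  -- delete in slices so the pending-trigger queue stays bounded
  delete from ts_defects where ts_occur_date < '2008-07-01';
  vacuum analyze ts_defects;

  -- if the whole table is going away, TRUNCATE avoids per-row trigger
  -- queuing entirely; CASCADE also empties the referencing tables
  truncate table ts_defects cascade;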

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
Robert Haas [robertmh...@gmail.com] wrote: Thanks for your response. Does the table have triggers on it? Does it have indexes? What is the result of pg_relation_size() on that table? No triggers; 3 indexes cemdb= select pg_relation_size('ts_defects'); pg_relation_size --

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
David Wilson [david.t.wil...@gmail.com] wrote: Is this table the target of any foreign keys? There are 2 on delete cascade FKs that reference this table. Brian -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription:
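
One way to list the foreign keys that reference the table, straight from the system catalogs:

  select conname,
         conrelid::regclass as referencing_table,
         confdeltype                      -- 'c' = ON DELETE CASCADE
    from pg_constraint
   where confrelid = 'ts_defects'::regclass
     and contype = 'f';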

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: It's the pending trigger list. He's got two trigger events per row, which at 40 bytes apiece would approach 4GB of memory. Apparently it's a 32-bit build of Postgres, so he's running out of process address space. Yes, this is a 32 bit Postgres running on a
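
The arithmetic behind that estimate: 48,000,000 rows × 2 trigger events per row × 40 bytes per event ≈ 3.8 GB of pending-trigger bookkeeping, which is more than a 32-bit process can address.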

[PERFORM] slow queries

2009-02-28 Thread Brian Cox
Actually, they're all deadlocked. The question is why? Here's a brief background. The ts_defects table is partitioned by occurrence date; each partition contains the rows for 1 day. When the data gets old enough, the partition is dropped. Since the correct partition can be determined from the

Re: [PERFORM] slow queries

2009-03-02 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: [ shrug... ] You tell us. To me it sounds a whole lot like some client program sitting on an open transaction that has a nonexclusive lock on the table to be dropped. That transaction wasn't necessarily doing any useful work; it might have just been waiting

Re: [PERFORM] slow queries

2009-03-02 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: If you hadn't left out the granted column we could be more sure, but what it looks like to me is the DROP (pid 13842) is stuck behind the IDLE transaction (pid 13833). In particular these two rows of pg_locks look like a possible conflict: relation
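
A query along these lines (8.3 catalog columns; the partition name comes from the follow-up below) shows who holds or is waiting for locks on the partition being dropped:

  select l.pid, l.mode, l.granted, a.xact_start, a.current_query
    from pg_locks l
    join pg_stat_activity a on a.procpid = l.pid
   where l.relation = 'ts_defects_20090227'::regclass;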

Re: [PERFORM] slow queries

2009-03-02 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: Well, that's certainly a sufficient reason, if perhaps not the only reason. Dropping ts_defects_20090227 will require removal of FK triggers on ts_transets, and we can't do that concurrently with transactions that might be trying to fire those triggers. Now

[PERFORM] multiple threads inserting into the same table

2009-03-23 Thread Brian Cox
The application log shows that 99652 rows are being inserted into relation ts_stats_transet_user_daily. 5 threads are doing the inserts. The schema is lengthy, but it has a synthetic primary key (ts_id int8 not null) and the following constraints: alter table ts_stats_transet_user_daily add

Re: [PERFORM] multiple threads inserting into the same table

2009-03-23 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: pg_stat_activity says those five threads are doing nothing except sitting around with open transactions. You sure you don't have a bug on the application side? regards, tom lane This is a Java app. A thread dump reveals that these 5

Re: [PERFORM] multiple threads inserting into the same table

2009-03-23 Thread Brian Cox
Scott Marlowe [scott.marl...@gmail.com] wrote: Might you have a firewall that's killing the connections? What does netstat -an on the client side say about these connections? I don't think so: 1) app and postgres are on the same machine and 2) this has been the setup for months and I don't

Re: [PERFORM] multiple threads inserting into the same table

2009-03-23 Thread Brian Cox
David Wilson [david.t.wil...@gmail.com] wrote: How many indexes are there on ts_stats_transet_user_daily? 10: create index ts_stats_transet_user_daily_DayIndex on ts_stats_transet_user_daily (ts_day); create index ts_stats_transet_user_daily_HourIndex on ts_stats_transet_user_daily

[PERFORM] determining the locks that will be held by a query

2009-04-07 Thread Brian Cox
I know that EXPLAIN will show the query plan. I know that pg_locks will show the locks currently held by active transactions. Is there a way to determine what locks a query will hold when it is executed? Thanks, Brian -- Sent via pgsql-performance mailing list
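
EXPLAIN won't show this, but a common workaround is to run the statement inside a transaction you roll back and inspect your own backend's rows in pg_locks; note this reports the locks actually taken, not a prediction:

  begin;
  select * from ts_defects limit 10;            -- the query of interest (illustrative)
  select locktype, relation::regclass, mode, granted
    from pg_locks
   where pid = pg_backend_pid();
  rollback;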

[PERFORM] determining the locks that will be held by a query

2009-04-10 Thread Brian Cox
I sent this out on 4/7 and either missed a response or didn't get one. If this is the wrong forum, I'd appreciate a redirect. I know that EXPLAIN will show the query plan. I know that pg_locks will show the locks currently held by active transactions. Is there a way to determine what locks a

[PERFORM] autovacuum hung?

2009-05-29 Thread Brian Cox
autovacuum has been running on 2 tables for 5 hours. These tables are not huge (see below). For the past ~1 hour, I've shut off all other activity on this database. The other table being vacuumed has more rows (1897810). Anyone have any ideas about why this is taking so long? Thanks, Brian
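
On 8.3 the autovacuum workers show up in pg_stat_activity, which at least confirms which tables they are working on and when they started:

  select procpid, query_start, current_query
    from pg_stat_activity
   where current_query like 'autovacuum:%';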

Re: [PERFORM] autovacuum hung?

2009-05-29 Thread Brian Cox
Alvaro Herrera [alvhe...@commandprompt.com] wrote: What's vacuum_cost_delay? #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-1 credits #vacuum_cost_page_miss = 10 # 0-1 credits #vacuum_cost_page_dirty = 20
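
Note that vacuum_cost_delay = 0 by itself does not turn off throttling for autovacuum, which has its own setting; the values in the comments below are the stock 8.3 defaults, listed here for reference:

  show autovacuum_vacuum_cost_delay;  -- 20ms by default; -1 means fall back to vacuum_cost_delay
  show vacuum_cost_limit;             -- 200 by default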

Re: [PERFORM] autovacuum hung?

2009-05-31 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: Are those processes actually doing anything, or just waiting? strace or local equivalent would be the most conclusive check. These must not have been hung, because they finally completed (after 10-15 hrs - some time between 11pm and 8am). Question is why

Re: [PERFORM] autovacuum hung?

2009-05-31 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: They might have been blocked behind some other process that was sitting in an open transaction for some reason. The other likely cause is badly chosen autovacuum delay, but I think that was already covered. Well, after I noticed this running for a while, I

Re: [PERFORM] autovacuum hung?

2009-05-31 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: No, no, and no. What would be best is to find out what actually happened. The evidence is gone now, but if you see it again please take a closer look. OK. You mentioned strace. It's got a lot of options; any in particular that would be useful if this

[PERFORM] very slow selects on a small table

2009-06-17 Thread Brian Cox
There are 4 threads (4 postgres processes) loading all rows from a table with 50,018 rows. The table has a int8 PK that is incremented by 1 for each new row and the PK is used by the threads to partition the rows so that each loads distinct rows. As you can see below, these 4 SELECTs have been
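
A sketch of the partitioning scheme described, with illustrative bounds; in practice each of the 4 threads gets a disjoint slice of the 50,018 PK values:

  prepare load_slice (int8, int8) as
    select * from ts_stats_transetgroup_user_daily
     where ts_id >= $1 and ts_id < $2;
  execute load_slice(1, 12505);   -- thread 1's slice (bounds are illustrative)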

Re: [PERFORM] very slow selects on a small table

2009-06-17 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: That is a pretty odd trace for a Postgres backend; apparently it's repeatedly acquiring and releasing a meg or two worth of memory, which is not very normal within a single query. Can you tell us more about the query it's running? An EXPLAIN plan would be

Re: [PERFORM] very slow selects on a small table

2009-06-17 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: Um, are you sure that is the query that PID 7397 is running? It doesn't match your previous pg_stat_activity printout, nor do I see anything about partitioning by PKs. Umm, indeed. I had to construct the query by hand and left out the partition part.

Re: [PERFORM] very slow selects on a small table

2009-06-17 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: This seems like kind of a stupid plan anyway (which PG version was this exactly?) but certainly the big issue is the catastrophically bad rowcount estimate for the indexscan. Do you have ANALYZE stats for ts_stats_transetgroup_user_daily at all (look in

Re: [PERFORM] very slow selects on a small table

2009-06-17 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: OK, so what's the entry for column ts_id? Is this what you requested? Brian cemdb=# select * from pg_stats where tablename='ts_stats_transetgroup_user_daily' and attname = 'ts_id'; schemaname |tablename | attname | null_frac |

Re: [PERFORM] very slow selects on a small table

2009-06-18 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: Yup. So according to those stats, all ts_id values fall in the range 61 .. 6000250068. It's no wonder it's not expecting to find anything between 0 and 10. I think maybe you forgot to re-analyze after loading data ... although

Re: [PERFORM] very slow selects on a small table

2009-06-18 Thread Brian Cox
Grzegorz Jakiewicz [gryz...@gmail.com] wrote: this might be a quite bogus question, just a hint - but what is your work_mem set to? Guys, isn't postgresql giving a huge cost when it can't sort in memory? work_mem = 64MB -- Sent via pgsql-performance mailing list

[PERFORM] select max() much slower than select min()

2009-06-18 Thread Brian Cox
ts_stats_transet_user_interval has ~48M rows. ts_id is the PK and there is an index on ts_interval_start_time. I reindexed it and ran vacuum analyze. Only SELECTs have been done since these operations. cemdb=# explain select min(ts_id) from ts_stats_transet_user_interval a where 0=0 and
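
What appears to be happening (the plans are elided here, so this is a hedged reading): min() and max() on the PK are both planned as an index scan on ts_id with LIMIT 1, forward for min and backward for max, applying the ts_interval_start_time condition as a filter; whichever direction has to skip more non-matching rows before reaching the requested time range pays the price. Roughly equivalent forms:

  -- approximately what the planner runs for min() / max() on the PK
  select ts_id from ts_stats_transet_user_interval
   where ts_interval_start_time >= '2009-06-01'   -- illustrative predicate
   order by ts_id asc limit 1;                    -- min()

  select ts_id from ts_stats_transet_user_interval
   where ts_interval_start_time >= '2009-06-01'   -- illustrative predicate
   order by ts_id desc limit 1;                   -- max()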

Re: [PERFORM] select max() much slower than select min()

2009-06-18 Thread Brian Cox
Kevin Grittner [kevin.gritt...@wicourts.gov] wrote: Is there any correlation between ts_id and ts_interval_start_time? Only vaguely: increasing ts_interval_start_time implies increasing ts_id, but there may be many rows (100,000's) with the same ts_interval_start_time. Perhaps if you tried min

Re: [PERFORM] select max() much slower than select min()

2009-06-19 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: Some experimentation suggests that it might help to provide a 2-column index on (ts_id, ts_interval_start_time). This is still going to be scanned in order by ts_id, but it will be possible to check the ts_interval_start_time condition in the index,
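
In SQL, the suggested index (the index name here is made up):

  create index ts_sutui_id_start_time
      on ts_stats_transet_user_interval (ts_id, ts_interval_start_time);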

Re: [PERFORM] select max() much slower than select min()

2009-06-19 Thread Brian Cox
David Rees [dree...@gmail.com] wrote: Along those lines, couldn't you just have the DB do the work? select max(ts_id), min(ts_id) from ... where ts_interval_start_time = ... and ... Then you don't have to transfer 500k ids across the network... I guess you didn't read the entire thread: I

Re: [PERFORM] autovacuum hung?

2009-07-10 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote: Oh, and don't forget the more-complete pg_locks state. We'll want all the columns of pg_locks, not just the ones you showed before. Autovacuum of ts_user_sessions_map has been running for 17 hours. This table has 2,204,488 rows. I hope that I've captured

Re: [PERFORM] Planner question - bit data types

2009-09-07 Thread Brian Cox
pgsql-performance-ow...@postgresql.org wrote: On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger k...@denninger.net wrote: There was a previous thread and I referenced it. I don't have the other one in my email system any more to follow up to it. I give up; the attack-dog crowd has

[PERFORM] query looping?

2010-01-04 Thread Brian Cox
The query shown below [select count(distinct...] seems to be looping (99-101% CPU as shown by top for 11+ hours). This is using postgres 8.3.5 on a dual quad core machine (Intel(R) Xeon(R) CPU X5460 @ 3.16GHz) with 32G RAM. Can I provide any other info to help investigate this issue? Or any

Re: [PERFORM] query looping?

2010-01-04 Thread Brian Cox
On 01/04/2010 04:53 PM, Robert Haas [robertmh...@gmail.com] wrote: PREPARE foo AS the query, with the $x entries still in there; EXPLAIN EXECUTE foo(the values); Thanks for the response. Results below. Brian cemdb= prepare foo as select count(distinct b.ts_id) from
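
The general recipe, with placeholder parameter types and values; the real query is longer and is elided in this digest:

  prepare foo (timestamp, timestamp) as
    select count(distinct b.ts_id)
      from ts_stats_transetgroup_user_weekly b
     where b.ts_interval_start_time >= $1
       and b.ts_interval_start_time <  $2;
  explain execute foo('2009-12-28', '2010-01-04');
  deallocate foo;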

Re: [PERFORM] query looping?

2010-01-05 Thread Brian Cox
SELECT SUM(1) FROM ts_stats_transetgroup_user_weekly b WHERE ts_interval_start_time >= [value] AND ts_interval_start_time < [value]; ...and similarly for the bitmap index scan. cemdb= SELECT SUM(1) FROM ts_stats_transetgroup_user_weekly b WHERE ts_interval_start_time = '2009-12-28' AND

Re: [PERFORM] query looping?

2010-01-07 Thread Brian Cox
On 01/05/2010 08:34 PM, Robert Haas [robertmh...@gmail.com] wrote: - If you have other queries where this index helps (even though it is hurting this one), then you're going to have to find a way to execute the query without using bound parameters - i.e. with the actual values in there instead

[PERFORM] query has huge variance in execution times

2010-03-30 Thread Brian Cox
postgres 8.3.5 on RHEL4 update 6. This query starts executing at 18:41: cemdb= select query_start,current_query from pg_stat_activity where procpid=10022; query_start | current_query

Re: [PERFORM] query has huge variance in execution times

2010-03-31 Thread Brian Cox
On 03/31/2010 12:37 AM, David Wilson [david.t.wil...@gmail.com] wrote: These won't necessarily get the same plan. If you want to see what plan the prepared query is getting, you'll need to prepare it (prepare foo as query) and then explain *that* via explain execute foo. The prepared version

[PERFORM] query slow; strace output worrisome

2010-04-05 Thread Brian Cox
The SELECT shown below has been running for 30+ mins and the strace output is alarming: [r...@dione ~]# strace -p 10083 Process 10083 attached - interrupt to quit creat(umovestr: Input/output error 0x2, 0377)= 1025220608 creat(umovestr: Input/output error 0x2,

Re: [PERFORM] query slow; strace output worrisome

2010-04-05 Thread Brian Cox
On 04/05/2010 09:53 PM, Craig Ringer [cr...@postnewspapers.com.au] wrote: Anything in `dmesg' (command) or /var/log/syslog ? nothing out of the ordinary. Brian -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription:

Re: [PERFORM] query slow; strace output worrisome

2010-04-06 Thread Brian Cox
On 04/06/2010 01:18 AM, Craig Ringer [cr...@postnewspapers.com.au] wrote: I'm wondering if the issue is with strace rather than Pg. That is to say, that strace is trying to print: Thanks, Craig: I do think that this is a strace issue. As for what Pg is doing: creat() returns -1 on error and a

[PERFORM] could not open relation...

2010-04-09 Thread Brian Cox
I saw this in the postgres log. Anyone know what would cause this? Thanks, Brian postgres 8.3.5 on RHEL4 update 6 [3358-cemdb-admin-2010-04-09 04:00:19.029 PDT]ERROR: could not open relation with OID 170592 [3358-cemdb-admin-2010-04-09 04:00:19.029 PDT]STATEMENT: select
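
A quick way to check whether the relation behind that OID still exists (it returns no rows once the table has been dropped):

  select oid, relname from pg_class where oid = 170592;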

Re: [PERFORM] could not open relation...

2010-04-10 Thread Brian Cox
On 04/10/2010 12:29 AM, Tom Lane [...@sss.pgh.pa.us] wrote: Seems a bit off-topic for pgsql-performance, What would be the appropriate forum? but anyway: the main known cause for that is if one of the tables used in the query got dropped (by another session) just after the query started.