Re: [PERFORM] Bad performance of SELECT ... where id IN (...)

2009-09-30 Thread Ivan Voras

Xia Qingran wrote:

On Sun, Sep 27, 2009 at 1:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:

Xia Qingran qingran@gmail.com writes:

I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498, ... ,1,0);
The above SELECT always takes 1200ms.

Your EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.
So either the planning time is about 1000ms, or transmitting and
displaying the 134K rows produced by the query takes that long, or some
combination of the two.  I wouldn't be too surprised if it's the data
display that's slow; but if it's the planning time that you're unhappy
about, updating to a more recent PG release might possibly help.  What
version is this anyway?

   regards, tom lane
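
One quick way to tell which it is: planning cost can be factored out by
preparing the statement once and then timing only the execution. A rough
psql sketch (the IN-list is shortened here for illustration):

\timing
-- Planning happens once, at PREPARE time:
PREPARE evq (int[]) AS
    SELECT * FROM event WHERE user_id = ANY ($1);
-- The EXECUTE then measures execution plus data transfer only:
EXECUTE evq('{500,499,498,1,0}');

If the EXECUTE comes back fast, the missing second is planning overhead;
if it is still slow, it is most likely the transfer and display of the
134K result rows.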


Oh, it is a problem.


I don't see the 'Total runtime' information anywhere in your first message.

Also, did you run VACUUM FULL ANALYZE lately?


I forgot to mention my platform: I am running PostgreSQL 8.4.0 on a
FreeBSD 7.2-amd64 box, which has dual Xeon 5410 CPUs, 8GB memory and 2
SATA disks.

And my postgresql.conf is listed as follow:
---

listen_addresses = '*'          # what IP address(es) to listen on;
port = 5432                     # (change requires restart)
max_connections = 88            # (change requires restart)
superuser_reserved_connections = 3
ssl = off                       # (change requires restart)
tcp_keepalives_idle = 0         # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 0     # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count = 0        # TCP_KEEPCNT;
shared_buffers = 2048MB         # min 128kB or max_connections*16kB


For a start, I think you will need to make shared_buffers larger than your 
index to get decent performance - try setting it to 4096 MB and see if 
it helps.
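
To compare the two, something like this works (the index name here is a
guess - substitute the real one):

SELECT pg_size_pretty(pg_relation_size('event_user_id_idx'));  -- index size
SHOW shared_buffers;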



temp_buffers = 32MB             # min 800kB
max_prepared_transactions = 150 # can be 0 or more; 0 disables prepared transactions
work_mem = 8MB                  # min 64kB


Depending on the type of your workload (how many clients are connected 
and how complex the queries are), you might also want to increase 
work_mem. Try 16 MB - 32 MB or more and see if it helps.
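
Note that work_mem can be changed per session, so it is easy to
experiment without editing postgresql.conf - a sketch, again with the
IN-list shortened:

SET work_mem = '32MB';
EXPLAIN ANALYZE SELECT * FROM event WHERE user_id IN (500, 499, 498);
RESET work_mem;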



fsync = off                     # turns forced synchronization on or off
synchronous_commit = off        # immediate fsync at commit


Off-topic: you probably know what you are doing by disabling these, right?
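
If the point of these two settings is just faster commits, a safer
pattern is to keep fsync on (fsync = off can corrupt the whole database
on a crash) and disable synchronous_commit only where losing the very
latest transactions is acceptable - it can even be done per transaction:

BEGIN;
-- Only this transaction can be lost on a crash; the database itself
-- stays consistent:
SET LOCAL synchronous_commit = off;
-- ... bulk inserts etc. ...
COMMIT;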




[PERFORM] CPU cost of operators

2009-09-30 Thread Matthew Wakeling


Episode umpteen of the ongoing saga with my GiST indexes.

For some reason, GiST uses loads of CPU. I have a query that runs entirely 
out of cache, and it takes ages. This much I have tried to fix and failed 
so far.


What I would now like to do is to tell postgres about it, so that the 
EXPLAINs are correct. Is there a way to tell Postgres that an operator has 
a large CPU cost? I can tell it what the join selectivity is, but I can't 
find anything about CPU cost.
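
For reference, selectivity is attached when the operator is created; a
sketch with made-up names, showing that CREATE OPERATOR has RESTRICT and
JOIN slots but nothing for CPU cost:

CREATE OPERATOR && (
    LEFTARG   = my_type,
    RIGHTARG  = my_type,
    PROCEDURE = my_overlap_fn,  -- function implementing the operator
    RESTRICT  = contsel,        -- restriction selectivity estimator
    JOIN      = contjoinsel     -- join selectivity estimator
);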


Matthew

--
Unfortunately, university regulations probably prohibit me from eating
small children in front of the lecture class.
   -- Computer Science Lecturer



Re: [PERFORM] CPU cost of operators

2009-09-30 Thread Robert Haas
On Wed, Sep 30, 2009 at 1:12 PM, Matthew Wakeling matt...@flymine.org wrote:

 Episode umpteen of the ongoing saga with my GiST indexes.

 For some reason, GiST uses loads of CPU. I have a query that runs entirely
 out of cache, and it takes ages. This much I have tried to fix and failed so
 far.

 What I would now like to do is to tell postgres about it, so that the
 EXPLAINs are correct. Is there a way to tell Postgres that an operator has a
 large CPU cost? I can tell it what the join selectivity is, but I can't find
 anything about CPU cost.

Not that I know of, but seems like it would be a reasonable extension.

...Robert



Re: [PERFORM] CPU cost of operators

2009-09-30 Thread Robert Haas
On Wed, Sep 30, 2009 at 4:13 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Sep 30, 2009 at 1:12 PM, Matthew Wakeling matt...@flymine.org wrote:

 Episode umpteen of the ongoing saga with my GiST indexes.

 For some reason, GiST uses loads of CPU. I have a query that runs entirely
 out of cache, and it takes ages. This much I have tried to fix and failed so
 far.

 What I would now like to do is to tell postgres about it, so that the
 EXPLAINs are correct. Is there a way to tell Postgres that an operator has a
 large CPU cost? I can tell it what the join selectivity is, but I can't find
 anything about CPU cost.

 Not that I know of, but seems like it would be a reasonable extension.

Er, wait... if you set the 'COST' parameter for the backing function,
does that work?
...Robert



Re: [PERFORM] CPU cost of operators

2009-09-30 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Er, wait... if you set the 'COST' parameter for the backing function,
 does that work?

It's supposed to...

regards, tom lane
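
For the archives, that would look something like this (the function name
is made up, reusing the sketch above; COST is expressed in multiples of
cpu_operator_cost, and defaults to 1 for C-language and internal
functions and 100 for all others):

-- Tell the planner that the operator's backing function is expensive:
ALTER FUNCTION my_overlap_fn(my_type, my_type) COST 100;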
