Re: [PERFORM] Setting vacuum_freeze_min_age really low

2013-05-13 Thread Robert Haas
On Sun, May 12, 2013 at 8:50 AM, Andres Freund and...@2ndquadrant.com wrote:
 [ a response that I entirely agree with ]

+1 to all that.

It's maybe worth noting that it's probably fairly uncommon for vacuum
to read a page and not dirty it, because if the page is all-visible,
we won't read it.  And if it's not all-visible, and there's nothing
else interesting to do with it, we'll probably make it all-visible,
which will dirty it.  It can happen, if for example we vacuum a page
with no dead tuples while the inserting transaction is still running,
or committed but not yet all-visible.  Of course, in those cases we
won't be able to freeze, either.
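
(For anyone who wants to see how much of a table the visibility map already covers, pg_class.relallvisible, present in 9.2 and later, is the quickest check; it is only an estimate maintained by VACUUM/ANALYZE, and the table name below is a placeholder:)

  SELECT relname, relpages, relallvisible,
         round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
  FROM pg_class
  WHERE relname = 'my_table';   -- placeholder table name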

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [PERFORM] Setting vacuum_freeze_min_age really low

2013-05-13 Thread Andres Freund
On 2013-05-13 13:21:54 -0400, Robert Haas wrote:
 On Sun, May 12, 2013 at 8:50 AM, Andres Freund and...@2ndquadrant.com wrote:
  [ a response that I entirely agree with ]
 
 +1 to all that.

 It's maybe worth noting that it's probably fairly uncommon for vacuum
 to read a page and not dirty it, because if the page is all-visible,
 we won't read it.

But only if 50(?)+ consecutive pages are marked all-visible; otherwise,
AFAIR, we won't skip them. And we don't skip them at all during full-table
vacuums.

  And if it's not all-visible, and there's nothing
 else interesting to do with it, we'll probably make it all-visible,
 which will dirty it.  It can happen, if for example we vacuum a page
 with no dead tuples while the inserting transaction is still running,
 or committed but not yet all-visible.  Of course, in those cases we
 won't be able to freeze, either.

IIRC the actual cutoffs below which we freeze are always computed
relative to GetOldestXmin() (and have to be, otherwise rows would suddenly
appear visible). In many, many environments that's lagging behind quite a
bit: long-running user transactions, pg_dump, hot_standby_feedback,
vacuum_defer_cleanup_age...
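
As a rough way to see how far that horizon is being held back in a given
environment, something like this (a sketch using 9.2-era pg_stat_activity
columns) is often enough:

  SELECT pid, now() - xact_start AS xact_age, state, left(query, 60) AS query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
  ORDER BY xact_start
  LIMIT 5;

  SHOW vacuum_defer_cleanup_age;
  SHOW hot_standby_feedback;   -- relevant on standbys feeding back their xmin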

Also, even if the *whole* page isn't all-visible (because, e.g., another
row was just inserted), we still freeze individual rows.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [PERFORM] RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1

2013-05-13 Thread Robert Haas
On Tue, Apr 30, 2013 at 7:20 AM, Christoph Berg
christoph.b...@credativ.de wrote:
 ->  Nested Loop  (cost=24.57..844.83 rows=62335 width=4) (actual time=0.109..0.633 rows=23 loops=1)
       ->  Bitmap Heap Scan on acl acl_2  (cost=8.90..61.36 rows=33 width=10) (actual time=0.070..0.112 rows=22 loops=1)
             Recheck Cond: ((((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text)) OR (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10)))
             ->  BitmapOr  (cost=8.90..8.90 rows=35 width=0) (actual time=0.064..0.064 rows=0 loops=1)
                   ->  Bitmap Index Scan on acl1  (cost=0.00..4.47 rows=22 width=0) (actual time=0.036..0.036 rows=8 loops=1)
                         Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text))
                   ->  Bitmap Index Scan on acl1  (cost=0.00..4.41 rows=13 width=0) (actual time=0.026..0.026 rows=14 loops=1)
                         Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10))
       ->  Bitmap Heap Scan on groups groups_3  (cost=15.67..23.73 rows=1 width=30) (actual time=0.022..0.023 rows=1 loops=22)
             Recheck Cond: ((acl_2.principalid = id) OR ((((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text)) OR (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))))
             Filter: ((((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text) OR (((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND (((acl_2.principalid = id) AND ((acl_2.principaltype)::text = 'Group'::text) AND (((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text))) OR (((((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND ((type)::text = (acl_2.principaltype)::text))))
             ->  BitmapOr  (cost=15.67..15.67 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=22)
                   ->  Bitmap Index Scan on groups_pkey  (cost=0.00..4.76 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=22)
                         Index Cond: (acl_2.principalid = id)
                   ->  BitmapOr  (cost=10.66..10.66 rows=1 width=0) (actual time=0.013..0.013 rows=0 loops=22)
                         ->  Bitmap Index Scan on groups2  (cost=0.00..5.33 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=22)
                               Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text))
                         ->  Bitmap Index Scan on groups2  (cost=0.00..5.33 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=22)
                               Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))

The planner is estimating that the outer side of this nested loop will
produce 33 rows and that the inner side will produce 1.  One would
assume that the row estimate for the join product couldn't be more
than 33 * 1 = 33 rows, but the planner is estimating 62335 rows, which
seems like nonsense.  The actual result cardinality is 23.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [PERFORM] RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1

2013-05-13 Thread Mark Felder
On Tue, 30 Apr 2013 06:20:55 -0500, Christoph Berg  
christoph.b...@credativ.de wrote:



Hi,
this is more of a report than a question, because we thought this
would be interesting to share.
We recently (finally) migrated a Request Tracker 3.4 database running
on 8.1.19 to 9.2.4. The queries used by RT 3.4 are sometimes weird, but
8.1 coped without too much tuning. The schema looks like this:


What version of DBIx-SearchBuilder do you have on that server? The RT folks
usually recommend running the latest version possible so that RT generates the
sanest, most optimized queries it can for your database. I honestly don't
know if it will make a difference for you, but it's worth a shot.





Re: [PERFORM] RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1

2013-05-13 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 The planner is estimating that the outer side of this nested loop will
 produce 33 rows and that the inner side will produce 1.  One would
 assume that the row estimate for the join product couldn't be more
 than 33 * 1 = 33 rows, but the planner is estimating 62335 rows, which
 seems like nonsense.

You know, of course, that the join size estimate isn't arrived at that
way.  Still, this point does make it seem more like a planner bug and
less like bad input stats.  It would be nice to see a self-contained
example ...
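
(Something along these lines, i.e. a script anyone can run against an empty
database, is the sort of thing that would help; the tables and data below are
purely hypothetical placeholders and are not claimed to reproduce the
misestimate, they just show the shape such a script needs:)

  CREATE TABLE acl_like (rightname text, objecttype text, objectid int,
                         principaltype text, principalid int);
  CREATE TABLE groups_like (id int PRIMARY KEY, type text, domain text, instance int);
  INSERT INTO acl_like
      SELECT 'OwnTicket', 'RT::Queue', g % 50, 'Group', g FROM generate_series(1, 10000) g;
  INSERT INTO groups_like
      SELECT g, 'AdminCc', 'RT::Queue-Role', g % 50 FROM generate_series(1, 10000) g;
  ANALYZE acl_like;
  ANALYZE groups_like;
  EXPLAIN ANALYZE
  SELECT 1
  FROM acl_like a
  JOIN groups_like g
    ON (a.principalid = g.id OR (g.type = a.principaltype AND g.instance = 10));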

regards, tom lane




Re: [PERFORM] RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1

2013-05-13 Thread Robert Haas
On Mon, May 13, 2013 at 4:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 The planner is estimating that the outer side of this nested loop will
 produce 33 rows and that the inner side will produce 1.  One would
 assume that the row estimate for the join product couldn't be more
 than 33 * 1 = 33 rows, but the planner is estimating 62335 rows, which
 seems like nonsense.

 You know, of course, that the join size estimate isn't arrived at that
 way.  Still, this point does make it seem more like a planner bug and
 less like bad input stats.  It would be nice to see a self-contained
 example ...

Yeah, I remember there have been examples like this that have come up
before.  Unfortunately, I haven't fully grokked what's actually going
on here that allows this kind of thing to happen.  Refresh my memory
on where the relevant code is?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [PERFORM] RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1

2013-05-13 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, May 13, 2013 at 4:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 You know, of course, that the join size estimate isn't arrived at that
 way.  Still, this point does make it seem more like a planner bug and
 less like bad input stats.  It would be nice to see a self-contained
 example ...

 Yeah, I remember there have been examples like this that have come up
 before.  Unfortunately, I haven't fully grokked what's actually going
 on here that allows this kind of thing to happen.  Refresh my memory
 on where the relevant code is?

The point is that we estimate the size of a joinrel independently of
any particular input paths for it, and indeed before we've built any
such paths.  So this seems like a bug somewhere in selectivity
estimation, but I'm not prepared to speculate as to just where.

regards, tom lane




Re: [PERFORM] statistics target for columns in unique constraint?

2013-05-13 Thread Marti Raudsepp
On Mon, May 13, 2013 at 6:01 PM, ach alanchi...@gmail.com wrote:
 what I'm wondering is, since
 the unique constraint already covers the whole table and all rows in
 entirety, is it really necessary for statistics to be set that high on
 those?

AFAIK if there are exact-matching unique constraints/indexes for a
query's WHERE clause, the planner will deduce that the query only
returns 1 row and won't consult statistics at all.
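
A toy illustration (hypothetical table and column names):

  CREATE TABLE t (a int, b int, payload text, UNIQUE (a, b));
  -- Exact match on the whole unique key: planned as at most one row,
  -- regardless of the statistics target.
  EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
  -- Match on only part of the key: the column statistics still matter here.
  EXPLAIN SELECT * FROM t WHERE a = 1;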

 Or does that only serve to slow down inserts to that table?

It doesn't slow down inserts directly. Tables are analyzed in the
background by autovacuum. However, I/O traffic from autovacuum analyze
may slow down inserts running concurrently.

Regards,
Marti




Re: [PERFORM] Hardware suggestions for maximum read performance

2013-05-13 Thread Mike McCann
On May 7, 2013, at 4:21 PM, Jeff Janes wrote:

 On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Thu, May 2, 2013 at 5:11 PM, Mike McCann mcc...@mbari.org wrote:
  Hello,
 
  We are in the fortunate situation of having more money than time to help
  solve our PostgreSQL 9.1 performance problem.
 
  Our server hosts databases that are about 1 GB in size with the largest
  tables having order 10 million 20-byte indexed records. The data are loaded
  once and then read from a web app and other client programs.  Some of the
  queries execute ORDER BY on the results. There are typically less than a
  dozen read-only concurrent connections to any one database.
 
 I wouldn't count on this being a problem that can be fixed merely by throwing 
 money at it.
 
 How many rows does any one of these queries need to access and then ORDER BY?
 
 ...
 
 
  HP ProLiant DL360p Gen 8
  Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs
  64GB RAM
  2x146GB 15K SAS hard drives
  3x200GB SATA SLC SSDs
  + the usual accessories (optical drive, rail kit, dual power supplies)
 
 If your DB is 1G, and will grow to 10G then the IO shouldn't be any
 problem, as the whole db should be cached in memory.
 
 
 But it can take a surprisingly long time to get it cached in the first place, 
 from a cold start.
 
 If that is the problem, pg_prewarm could help.  
 
 
 Cheers,
 
 Jeff
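
(For reference, pg_prewarm is available as a contrib extension, in core from
9.4; once it is installed, warming a table into shared_buffers is a one-liner:)

  CREATE EXTENSION pg_prewarm;
  SELECT pg_prewarm('stoqs_measuredparameter');   -- default mode loads the main fork into shared_buffers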

Thank you everyone for your suggestions.

It's clear that our current read performance was not limited by hardware.  An 
'explain analyze' for a sample query is:

stoqs_march2013_s=# show work_mem;
 work_mem 
----------
 1MB
(1 row)

stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order 
by datavalue;
                                                               QUERY PLAN                                                               
-----------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=541002.15..549456.68 rows=3381814 width=20) (actual time=6254.780..7244.074 rows=3381814 loops=1)
   Sort Key: datavalue
   Sort Method: external merge  Disk: 112424kB
   ->  Seq Scan on stoqs_measuredparameter  (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.011..354.385 rows=3381814 loops=1)
 Total runtime: 7425.854 ms
(5 rows)


Increasing work_mem to 355 MB improves the performance by a factor of 2:

stoqs_march2013_s=# set work_mem='355MB';
SET
stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order 
by datavalue;
                                                               QUERY PLAN                                                               
-----------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1)
   Sort Key: datavalue
   Sort Method: quicksort  Memory: 362509kB
   ->  Seq Scan on stoqs_measuredparameter  (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)
 Total runtime: 3094.601 ms
(5 rows)

I tried changing random_page_cost to from 4 to 1 and saw no change.

I'm wondering now what changes might get this query to run in less than one 
second.  If all the data is in memory, then will faster CPU and memory be the 
things that help?
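
(One aside on where that work_mem setting can live: it does not have to be
raised server-wide, which is risky because it is a per-sort, per-session limit.
A sketch, with a hypothetical role name:)

  ALTER ROLE stoqs_web SET work_mem = '360MB';   -- hypothetical web/reporting role
  -- or just for the current session:
  SET work_mem = '360MB';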

We have an alternate (a bit more conventional) server configuration that we are 
considering:

HP ProLiant DL360p Gen 8
Dual Intel Xeon 3.3GHz 4-core E5-2643 CPUs
128GB PC3-12800 RAM
16x146GB 15K SAS hard drives
HP Smart Array P822/2GB FBWC controller + P420i w/ 2GB FBWC
+ the usual accessories (optical drive, rail kit, dual power supplies)


All suggestions welcomed!

-Mike

--
Mike McCann
Software Engineer
Monterey Bay Aquarium Research Institute
7700 Sandholdt Road
Moss Landing, CA 95039-9644
Voice: 831.775.1769  Fax: 831.775.1736 http://www.mbari.org



Re: [PERFORM] statistics target for columns in unique constraint?

2013-05-13 Thread Mark Kirkwood

On 14/05/13 10:10, Marti Raudsepp wrote:

On Mon, May 13, 2013 at 6:01 PM, ach alanchi...@gmail.com wrote:

what I'm wondering is, since
the unique constraint already covers the whole table and all rows in
entirety, is it really necessary for statistics to be set that high on
those?


AFAIK if there are exact-matching unique constraints/indexes for a
query's WHERE clause, the planner will deduce that the query only
returns 1 row and won't consult statistics at all.


Or does that only serve to slow down inserts to that table?


It doesn't slow down inserts directly. Tables are analyzed in the
background by autovacuum. However, I/O traffic from autovacuum analyze
may slow down inserts running concurrently.




A higher stats target means larger stats structures - which in turn means 
that the planning stage of *all* queries may be impacted - e.g. it takes up 
more memory and is slightly slower as these larger structures are read, 
iterated over, freed etc.
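
To see what per-column targets are currently in effect, a quick sketch (the 
table name is a placeholder; -1 means "use default_statistics_target"):

  SELECT attname, attstattarget
  FROM pg_attribute
  WHERE attrelid = 'my_table'::regclass
    AND attnum > 0 AND NOT attisdropped;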


So if your only access is via a defined unique key, then (as Marti 
suggests) - a large setting for stats target would seem to be unnecessary.


If you have access to a test environment I'd recommend you model the 
effect of reducing stats target down (back to the default of 100 or even 
to the old version default of 10).
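
Per column that is just (placeholder names again):

  ALTER TABLE my_table ALTER COLUMN my_key SET STATISTICS 100;
  ANALYZE my_table;   -- rebuild the stats at the new target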


A little paranoia: maybe switch on statement logging and ensure that 
there are no *other* ways this table is accessed... the fact that the 
number was cranked up from the default is a little suspicious!


Regards

Mark






Re: [PERFORM] Hardware suggestions for maximum read performance

2013-05-13 Thread Jeff Janes
On Mon, May 13, 2013 at 3:36 PM, Mike McCann mcc...@mbari.org wrote:


 Increasing work_mem to 355 MB improves the performance by a factor of 2:

 stoqs_march2013_s=# set work_mem='355MB';
 SET
 stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter
 order by datavalue;
                                                                QUERY PLAN
 -----------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1)
    Sort Key: datavalue
    Sort Method: quicksort  Memory: 362509kB
    ->  Seq Scan on stoqs_measuredparameter  (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)
  Total runtime: 3094.601 ms
 (5 rows)


 I tried changing random_page_cost to from 4 to 1 and saw no change.

 I'm wondering now what changes might get this query to run in less than
 one second.



I think you are worrying about the wrong thing here.  What is a web app
going to do with 3,381,814 rows, once it obtains them?  Your current
testing is not testing the time it takes to stream that data to the client,
or for the client to do something meaningful with that data.

If you only plan to actually fetch a few dozen of those rows, then you
probably need to incorporate that into your test, either by using a LIMIT,
or by using a mock-up of the actual application to do some timings.
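
For instance, a test closer to a real page-view might look like this (the
LIMIT value is a placeholder for however many rows the app actually shows):

  EXPLAIN ANALYZE
  SELECT * FROM stoqs_measuredparameter
  ORDER BY datavalue
  LIMIT 50;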

Also, what is the type and collation of the column you are sorting on?
 non-'C' collations of text columns sort about 3 times slower than 'C'
collation does.
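
A quick way to check both (a sketch; collation only matters if the sort key
is a text-like type):

  SELECT atttypid::regtype
  FROM pg_attribute
  WHERE attrelid = 'stoqs_measuredparameter'::regclass
    AND attname = 'datavalue';

  SHOW lc_collate;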



 If all the data is in memory, then will faster CPU and memory be the
 things that help?


Yes, those would help (it is not clear to me which of the two would help
more), but I think you need to rethink your design of sending the entire
database table to the application server for each page-view.


Cheers,

Jeff


Re: [PERFORM] Hardware suggestions for maximum read performance

2013-05-13 Thread Mike McCann
On May 13, 2013, at 4:24 PM, Jeff Janes wrote:

 On Mon, May 13, 2013 at 3:36 PM, Mike McCann mcc...@mbari.org wrote:
 
 Increasing work_mem to 355 MB improves the performance by a factor of 2:
 
 stoqs_march2013_s=# set work_mem='355MB';
 SET
 stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter 
 order by datavalue;
                                                                QUERY PLAN
 -----------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1)
    Sort Key: datavalue
    Sort Method: quicksort  Memory: 362509kB
    ->  Seq Scan on stoqs_measuredparameter  (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)
  Total runtime: 3094.601 ms
 (5 rows)
 
 I tried changing random_page_cost to from 4 to 1 and saw no change.
 
 I'm wondering now what changes might get this query to run in less than one 
 second.  
 
 
 I think you are worrying about the wrong thing here.  What is a web app going 
 to do with 3,381,814 rows, once it obtains them?  Your current testing is not 
 testing the time it takes to stream that data to the client, or for the 
 client to do something meaningful with that data.
 
 If you only plan to actually fetch a few dozen of those rows, then you 
 probably need to incorporate that into your test, either by using a LIMIT, or 
 by using a mock-up of the actual application to do some timings.
 
 Also, what is the type and collation of the column you are sorting on?  
 non-'C' collations of text columns sort about 3 times slower than 'C' 
 collation does.
 
  
 If all the data is in memory, then will faster CPU and memory be the things 
 that help?
 
 Yes, those would help (it is not clear to me which of the two would help 
 more), but I think you need to rethink your design of sending the entire 
 database table to the application server for each page-view.
 
 
 Cheers,
 
 Jeff

Hi Jeff,

The datavalue column is double precision:

stoqs_march2013_s=# \d+ stoqs_measuredparameter
                                       Table "public.stoqs_measuredparameter"
     Column     |       Type       |                              Modifiers                               | Storage | Description
----------------+------------------+-----------------------------------------------------------------------+---------+-------------
 id             | integer          | not null default nextval('stoqs_measuredparameter_id_seq'::regclass) | plain   | 
 measurement_id | integer          | not null                                                              | plain   | 
 parameter_id   | integer          | not null                                                              | plain   | 
 datavalue      | double precision | not null                                                              | plain   | 
Indexes:
    "stoqs_measuredparameter_pkey" PRIMARY KEY, btree (id)
    "stoqs_measuredparameter_measurement_id_parameter_id_key" UNIQUE CONSTRAINT, btree (measurement_id, parameter_id)
    "stoqs_measuredparameter_datavalue" btree (datavalue)
    "stoqs_measuredparameter_measurement_id" btree (measurement_id)
    "stoqs_measuredparameter_parameter_id" btree (parameter_id)
Foreign-key constraints:
    "stoqs_measuredparameter_measurement_id_fkey" FOREIGN KEY (measurement_id) REFERENCES stoqs_measurement(id) DEFERRABLE INITIALLY DEFERRED
    "stoqs_measuredparameter_parameter_id_fkey" FOREIGN KEY (parameter_id) REFERENCES stoqs_parameter(id) DEFERRABLE INITIALLY DEFERRED
Has OIDs: no


Thanks for the suggestion and advice to examine the web app performance.  We've 
actually taken quite a few steps to optimize how the web app works. The example 
query I provided is a simple worst-case one that we can use to help us decide 
on the proper hardware.  An actual query performed by the web app is:

stoqs_march2013_s=# explain analyze SELECT stoqs_measuredparameter.id,
stoqs_march2013_s-#stoqs_parameter.name AS parameter__name,
stoqs_march2013_s-#stoqs_parameter.standard_name AS 
parameter__standard_name,
stoqs_march2013_s-#stoqs_measurement.depth AS measurement__depth,
stoqs_march2013_s-#stoqs_measurement.geom AS measurement__geom,
stoqs_march2013_s-#stoqs_instantpoint.timevalue AS 
measurement__instantpoint__timevalue,
stoqs_march2013_s-#stoqs_platform.name AS 
measurement__instantpoint__activity__platform__name,
stoqs_march2013_s-#stoqs_measuredparameter.datavalue AS datavalue,
stoqs_march2013_s-#stoqs_parameter.units AS parameter__units
stoqs_march2013_s-# FROM stoqs_parameter p1,
stoqs_march2013_s-#  stoqs_measuredparameter
stoqs_march2013_s-#  INNER JOIN stoqs_measurement ON 
(stoqs_measuredparameter.measurement_id = 

Re: [PERFORM] Hardware suggestions for maximum read performance

2013-05-13 Thread Scott Marlowe
On Mon, May 13, 2013 at 5:58 PM, Mike McCann mcc...@mbari.org wrote:

 We assume that steps taken to improve the worst-case query scenario will
 also improve these kind of queries.  If anything above pops out as needing
 better planning please let us know that too!

Bad assumption. If your real workload will be queries like the one
here that takes 700 ms, but you'll be running 10,000 of them a second,
your tuning / hardware choices are going to be much different than
if your query is going to be the previous 7-second one. Use realistic
queries, not ones that are nothing like what your real ones will be.
Then use pgbench and its ability to run custom SQL scripts to get a
REAL idea of how your hardware performs. Note that if you will run the
slow query you posted only about once a minute and roll it up or cache
it, then don't get too worried about it. Pay attention to the queries
that will add up, in aggregate, to your greatest load.
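
Roughly: put one or two representative queries into a file and drive them
with something like "pgbench -n -c 8 -T 60 -f custom.sql yourdb" (client
count, duration, and the query below are placeholders, not your real
workload). A sketch of such a file:

  -- custom.sql: a stand-in for one representative application query
  SELECT id, datavalue
  FROM stoqs_measuredparameter
  ORDER BY datavalue
  LIMIT 100;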

