Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Merlin Moncure
Michael wrote:
 I have two instances of a production application that uses Postgres
 7.2, deployed in two different data centers for about the last 6
 months.  The sizes, schemas, configurations, hardware, and access
 patterns of the two databases are nearly identical, but one
 consistently takes at least 5x longer than the other for some common
 operations.  During this time, CPU usage and IO on the slow database
 are both high (sustained); I'm not sure about the fast database.
 These common operations are chatty - at least tens of thousands of
 queries over a 5 to 60 minute stretch - but the queries themselves are
 fairly simple.  The query plans are identical across both databases,
 and the data distribution is comparable.  The tables involved in these
 common operations change frequently, and are indexed according to
 these queries.  The queries use the indexes as expected.  The tables
 involved have 50k-500k rows.

Have you isolated any hardware issues?  For example, if you are using
ATA cables, and one is kinked or too long, you could be having ATA
errors, which cause bizarre and intermittent slowdowns and pauses,
especially in RAID systems.  Do a filesystem diagnostic to check this.

Merlin



Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Tom Lane
Michael Nonemacher <[EMAIL PROTECTED]> writes:
 I have two instances of a production application that uses Postgres 7.2,
 deployed in two different data centers for about the last 6 months.  The
 sizes, schemas, configurations, hardware, and access patterns of the two
 databases are nearly identical, but one consistently takes at least 5x
 longer than the other for some common operations.

Does VACUUM VERBOSE show comparable physical sizes (in pages) for the
key tables in both databases?  Maybe the slow one has lots of dead space
in the tables (not indexes).  It would be useful to look at EXPLAIN
ANALYZE output of both databases for some of those common ops, too.
It could be that you're getting different plans in the two cases for
some reason.
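
For instance, a quick check to run against both databases (a sketch;
'members' here stands in for whichever tables matter to you):

    -- compare physical size in pages against the live-row estimate
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'members';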

 We 'vacuum analyze' nightly, and we recently rebuilt the indexes on the
 slow database (using reindex table).  This cut the number of index pages
 dramatically: from ~1800 to ~50, but didn't noticeably change the time
 or CPU utilization for the common operations described above.

That's pretty suspicious.

If it's not dead space or plan choice, the only other thing I can think
of is physical tuple ordering.  You might try CLUSTERing on the
most-heavily-used index of each table.
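
In 7.x syntax that would be something like this (the index name here is
made up; use your own):

    -- rewrite the table in the physical order of its busiest index
    CLUSTER members_group_id_member_id_idx ON members;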

regards, tom lane



Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Michael Nonemacher
Slight update:

Thanks for the replies; this is starting to make a little more sense...

I've managed to track down the root of the problem to a single query on
a single table.  I have a query that looks like this:
   select count(*) from members where group_id = ? and member_id > 0;

The members table contains about 500k rows.  It has an index on
(group_id, member_id) and on (member_id, group_id).

It seems like the statistics are wildly different depending on whether
the last operation on the table was a 'vacuum analyze' or an 'analyze'.
Vacuum or vacuum-analyze puts the correct number (~500k) in
pg_class.reltuples, but analyze puts 7000 in pg_class.reltuples.  The
reltuples numbers associated with this table's indexes are unchanged.
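
The discrepancy is easy to reproduce (a sketch; the counts in the
comments are the ones reported above):

    ANALYZE members;
    SELECT reltuples FROM pg_class WHERE relname = 'members';  -- ~7000
    VACUUM ANALYZE members;
    SELECT reltuples FROM pg_class WHERE relname = 'members';  -- ~500k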

After a vacuum-analyze, the query correctly uses the index on (group_id,
member_id), and runs very fast (sub-millisecond reported by explain
analyze).  After an analyze, the query uses the (member_id, group_id)
index, and the query takes much longer (150ms reported by explain
analyze).  (Yes, I said the 2 databases were using the same query plan;
it turns out they're only sometimes using the same query plan. :( )

A similar problem happens to some of my other tables (according to
pg_class.reltuples), although I haven't seen query performance change as
much.

Any idea what could cause this bad analyze behavior?  Any guesses why
this has happened in one of my data centers but not both?  (Coincidence
isn't a big stretch here.)  What can I do to stop or change this
behavior?  Apologies if this is a known problem...

mike

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael
Nonemacher
Sent: Friday, June 04, 2004 10:43 AM
To: [EMAIL PROTECTED]
Subject: [PERFORM] postgres performance: comparing 2 data centers


I have two instances of a production application that uses Postgres 7.2,
deployed in two different data centers for about the last 6 months.  The
sizes, schemas, configurations, hardware, and access patterns of the two
databases are nearly identical, but one consistently takes at least 5x
longer than the other for some common operations.  During this time, CPU
usage and IO on the slow database are both high (sustained); I'm not
sure about the fast database.  These common operations are chatty - at
least tens of thousands of queries over a 5 to 60 minute stretch - but
the queries themselves are fairly simple.  The query plans are identical
across both databases, and the data distribution is comparable.  The
tables involved in these common operations change frequently, and are
indexed according to these queries.  The queries use the indexes as
expected.  The tables involved have 50k-500k rows.

We 'vacuum analyze' nightly, and we recently rebuilt the indexes on the
slow database (using reindex table).  This cut the number of index pages
dramatically: from ~1800 to ~50, but didn't noticeably change the time
or CPU utilization for the common operations described above.

When running pgbench, both databases have very similar results (200-260
tps over multiple runs with 5 concurrent clients).

I know of a few things I can do to make this operation marginally
simpler, but I'm most interested in the difference between the two
databases.

I haven't come up with a theory that explains each of these things. What
are some things I can look into to track this down further?

mike



Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Rod Taylor
On Fri, 2004-06-04 at 18:07, Michael Nonemacher wrote:
 Slight update:
 
 Thanks for the replies; this is starting to make a little more sense...
 
 I've managed to track down the root of the problem to a single query on
 a single table.  I have a query that looks like this:
    select count(*) from members where group_id = ? and member_id > 0;
 
 The members table contains about 500k rows.  It has an index on
 (group_id, member_id) and on (member_id, group_id).
 
 It seems like the statistics are wildly different depending on whether
 the last operation on the table was a 'vacuum analyze' or an 'analyze'.


Yes, bad stats are causing it to pick a poor plan (this might be better
in 7.5), but you're giving it too many options (which doesn't help) and
using up disk space unnecessarily.

Keep (group_id, member_id)
Remove (member_id, group_id)
Add (member_id)

An index on just member_id is actually going to perform better than
member_id, group_id since it has a smaller footprint on the disk.

Any time both group_id and member_id are in the query, the
(group_id, member_id) index will likely be used.
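
As a sketch, with made-up index names (use whatever yours are actually
called):

    -- keep the (group_id, member_id) index as-is
    DROP INDEX members_member_id_group_id_idx;   -- the redundant one
    CREATE INDEX members_member_id_idx ON members (member_id);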





Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Michael Nonemacher
Agreed.

We originally created the indexes this way because we sometimes do
searches where one of the columns is constrained using =, and the other
using a range search, but it's not clear to me how much Postgres
understands multi-column indexes.  Will I get the gain I'd expect from a
(member_id, group_id) index on a query like 'where member_id = ? and
group_id > ?'?
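
One direct way to check is to compare plans with real values filled in
for the placeholders (the literals here are made up):

    EXPLAIN ANALYZE
    SELECT count(*) FROM members WHERE member_id = 42 AND group_id > 100;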

I've since found a few other often-used tables where the reltuples
counts generated by 'analyze' are off by a factor of 5 or more.  In the
short term, I'm just trying to eliminate the automatic analyzes where
possible, and where that's not possible, make sure they're followed up
quickly with a 'vacuum'.

Is analyze generating bad stats a known issue?  Is there anything I
could be doing to aggravate or work around the problem?

mike

-Original Message-
From: Rod Taylor [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 04, 2004 5:27 PM
To: Michael Nonemacher
Cc: Postgresql Performance
Subject: Re: [PERFORM] postgres performance: comparing 2 data centers


 The members table contains about 500k rows.  It has an index on 
 (group_id, member_id) and on (member_id, group_id).


Yes, bad stats are causing it to pick a poor plan, but you're giving it
too many options (which doesn't help) and using up space unnecessarily.

Keep (group_id, member_id)
Remove (member_id, group_id)
Add (member_id)

An index on just member_id is actually going to perform better than
member_id, group_id since it has a smaller footprint on the disk.

Any time both group_id and member_id are in the query, the
(group_id, member_id) index will likely be used.

-- 
Rod Taylor rbt [at] rbt [dot] ca

Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
PGP Key: http://www.rbt.ca/signature.asc




Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Greg Stark
Michael Nonemacher <[EMAIL PROTECTED]> writes:

 Agreed.
 
 We originally created the indexes this way because we sometimes do
 searches where one of the columns is constrained using =, and the other
 using a range search, but it's not clear to me how much Postgres
 understands multi-column indexes.  Will I get the gain I'd expect from a
 (member_id, group_id) index on a query like 'where member_id = ? and
 group_id > ?'?

It will use them; whether you see a gain depends on the distribution of
your data.  Does the group_id > ? clause exclude enough records to be
worth all the extra I/O the bigger index requires?

Personally I think the other poster was a bit hasty to assert
unconditionally that it's never worth it.  If you have a lot of records
for every member_id, of which very few have group_id greater than '?',
then it might be worth it.  If however you'll only ever have on the order
of a hundred or fewer records per member_id, and a significant chunk of
them will have group_id > '?', then it will probably be a wash or worse.

There's another side to the story though.  In a web site or other OLTP
application you may find you're better off with the multi-column index,
even if it performs less well on average than the smaller single-column
index when users have reasonable numbers of groups.  That's because you're
guaranteed (assuming postgres is using it) that even if a user someday has
an obscene number of groups, he won't suddenly break your web site by
driving your database into the ground.

There is a difference between latency and bandwidth, and between average and
worst-case. Sometimes it's necessary to keep an eye on worst-case scenarios
and not just average bandwidth. 

That said, if you are reasonably certain that you'll never or rarely have
thousands of groups per user, you're probably better off with the indexes
the other poster described.

 I've since found a few other often-used tables where the reltuples
 counts generated by 'analyze' are off by a factor of 5 or more.  In the
 short term, I'm just trying to eliminate the automatic analyzes where
 possible, and where that's not possible, make sure they're followed up
 quickly with a 'vacuum'.

 Is analyze generating bad stats a known issue?  Is there anything I
 could be doing to aggravate or work around the problem?

I would suggest trying a VACUUM FULL and then retrying the ANALYZE. I suspect
you might have a lot of dead tuples at the beginning of your table which is
confusing the sampling. If that's it, then yes it's known and in fact already
improved in what will be 7.5. You may be able to avoid the situation by
vacuuming more frequently.
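
That is (a sketch; the reltuples check just verifies the effect):

    VACUUM FULL members;   -- compact the table; takes an exclusive lock
    ANALYZE members;       -- re-sample now that live rows are dense again
    SELECT reltuples FROM pg_class WHERE relname = 'members';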

If that doesn't solve it then I would suggest trying to raise the
statistics targets for the columns in question with

   ALTER TABLE name ALTER COLUMN column SET STATISTICS integer

The default is 10; you could try 100 or even more.
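
For example (the column choice is a guess based on the query shown
earlier in the thread):

    ALTER TABLE members ALTER COLUMN group_id SET STATISTICS 100;
    ANALYZE members;   -- the new target takes effect at the next analyze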

-- 
greg




Re: [PERFORM] postgres performance: comparing 2 data centers

2004-06-04 Thread Tom Lane
Michael Nonemacher <[EMAIL PROTECTED]> writes:
 It seems like the statistics are wildly different depending on whether
 the last operation on the table was a 'vacuum analyze' or an 'analyze'.
 Vacuum or vacuum-analyze puts the correct number (~500k) in
 pg_class.reltuples, but analyze puts 7000 in pg_class.reltuples.

Okay, this is a known issue: in 7.4 and earlier, ANALYZE is easily
fooled as to the total number of rows in the table.  It samples the
initial portion of the table and assumes that the density of live rows
per page in that section is representative of the rest of the table.
Evidently that assumption is way off for your table.  There's an
improved sampling algorithm in CVS tip that we hope will avoid this
error in 7.5 and beyond, but the immediate problem for you is what
to do in 7.4.  I'd suggest either VACUUM FULL or CLUSTER to clean out
the existing dead space; then you should look into whether you need
to increase your vacuum frequency and/or FSM settings to keep it from
getting into this state again.  Ideally the average dead space per
page *should* be consistent over the whole table, and the fact that
it isn't suggests strongly that you've got space-management issues
to deal with.
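
A sketch of that sequence (the FSM numbers are illustrative, not
recommendations):

    VACUUM FULL members;      -- or CLUSTER an index, as discussed above
    SHOW max_fsm_pages;       -- current free-space-map ceiling
    -- if the table keeps bloating between vacuums, raise these in
    -- postgresql.conf and restart:
    --   max_fsm_pages = 50000
    --   max_fsm_relations = 1000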

regards, tom lane
