Re: [PERFORM] [GENERAL] How to know which queries are to be optimised?

2004-08-12 Thread Richard Huxton
Ulrich Wisser wrote:
You can log queries that run for at least a specified amount of time.
This will be useful in finding what the long running queries are.
You can then use explain analyse to see why they are long running.
But is there a tool that could compile a summary out of the log? The log 
grows awfully big after a short time.
You might want to look at the Practical Query Analyser - haven't used 
it myself yet, but it seems a sensible idea.

http://pqa.projects.postgresql.org/
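
For the logging threshold mentioned in the quoted text, a minimal sketch
(assuming PostgreSQL 7.4 or later, where log_min_duration_statement
exists; the per-session form needs superuser rights):

-- in postgresql.conf:
--   log_min_duration_statement = 1000   # log statements running >= 1000 ms
--   (-1 disables the feature, 0 logs every statement with its duration)
-- or per session:
SET log_min_duration_statement = 1000;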
--
  Richard Huxton
  Archonet Ltd


Re: [PERFORM] Hardware upgrade for a high-traffic database

2004-08-12 Thread Merlin Moncure
 This example looks fine, but since userid 51 evidently only has 35
 posts, there's not much time needed to read 'em all and sort 'em.  The
 place where the double-column index will win big is on userids with
 hundreds of posts.
 
 You have to keep in mind that each index costs time to maintain during
 inserts/updates.  So adding an index just because it makes a few queries
 a little faster probably isn't a win.  You need to make tradeoffs.

IMNSHO, in Jason's case he needs to do everything possible to get his
frequently run queries running as quickly as possible.  ISTM he can give
up a little on the update side, especially since he is running fsync=false.
A .3-.5 sec query multiplied over 50-100 concurrent users adds up quickly.
Ideally, you are looking up records based on a key that takes you directly
to the first record you want and then walks the rest of the records you
need in ascending order.  I can't stress enough how important this is, so
long as you can deal with the index/update overhead.

I don't have a huge amount of experience with this in pg, but one of the
tricks we do in the ISAM world is a 'reverse date' system, so that you
can scan forwards on the key to pick up datetimes in descending order.
This is often a win because the o/s may assume forward read-ahead,
giving you more cache hits.  There are a few different ways to do this,
but imagine:

create table t
(
    id  int,
    ts  timestamp default now(),
    iv  interval  default ('01/01/2050'::timestamp - now())
);

create index t_idx on t(id, iv);
select * from t where id = k order by id, iv limit 5;

The above query should do a much better job pulling up data and should
be easier on your cache.  A further win might be to cluster the table on
this key if the table is really big.
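
For the clustering idea, a minimal sketch using the 7.x/8.0-era syntax
(CLUSTER rewrites the table in index order, so it needs to be re-run
periodically as new rows arrive):

cluster t_idx on t;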

note: interval is a poor type to do this with, because it's a 12 byte type
(just used it here for demonstration purposes because it's easy).  With
a little trickery you can stuff it into a time type or an int4 type
(even better!).  If you want to be really clever you can do it without
adding any data to your table at all through functional indexes.
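
As a rough illustration of the functional-index variant (a sketch only; it
assumes 7.4-or-later expression indexes, an illustrative table t2, and that
the planner matches the ORDER BY expression to the index expression):

create table t2
(
    id  int,
    ts  timestamp default now()
);

-- seconds remaining until an arbitrary far-future date; smaller = newer,
-- so a forward index scan returns rows newest-first
create index t2_rev_idx on t2
    (id, ((extract(epoch from timestamp '2050-01-01' - ts))::int4));

select *
from t2
where id = 1
order by id, (extract(epoch from timestamp '2050-01-01' - ts))::int4
limit 5;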

Since the planner can use the same index in the extraction and ordering,
you get some savings...not much, but worthwhile when applied over a lot
of users.  Knowing when and how to apply multiple key/functional indexes
will make you feel like you have 10 times the database you are using
right now.

Merlin



Re: [PERFORM] [GENERAL] How to know which queries are to be optimised?

2004-08-12 Thread Christopher Kings-Lynne
 I do a vacuum full analyze every night.
 How can I see if my FSM setting is appropriate?

On a busy website, run vacuum analyze once an hour, or, even better, use
contrib/pg_autovacuum.
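
A minimal sketch of both points above (the cron command and the FSM check
are illustrative of the 7.x/8.x era, not from the original post):

-- run hourly, e.g. from cron via:  vacuumdb --analyze --quiet mydb
VACUUM ANALYZE;

-- the end of a database-wide VACUUM VERBOSE reports how many pages the
-- free space map needs; compare that against max_fsm_pages and
-- max_fsm_relations in postgresql.conf
VACUUM VERBOSE;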

Chris





Re: [PERFORM] [GENERAL] How to know which queries are to be optimised?

2004-08-12 Thread Christopher Kings-Lynne
 But is there a tool that could compile a summary out of the log? The log
 grows awfully big after a short time.

Actually, yes there is.  Check out www.pgfoundry.org.  I think it's called
pqa or postgres query analyzer or something.

Chris




Re: [PERFORM] Hardware upgrade for a high-traffic database

2004-08-12 Thread Tom Lane
Merlin Moncure [EMAIL PROTECTED] writes:
 The following suggestion works on two principles: one is that integers
 are quicker than timestamps for ordering,

The difference would be pretty marginal --- especially if you choose to
use bigints instead of ints.  (A timestamp is just a float8 or bigint
under the hood, and is no more expensive to compare than those datatypes.
Timestamps *are* expensive to convert for I/O, but comparison does not
have to do that.)  I wouldn't recommend kluging up your data schema just
for that.

regards, tom lane



Re: [PERFORM] Hardware upgrade for a high-traffic database

2004-08-12 Thread Merlin Moncure
 I don't have a huge amount of experience with this in pg, but one of the
 tricks we do in the ISAM world is a 'reverse date' system, so that you
 can scan forwards on the key to pick up datetimes in descending order.
 This is often a win because the o/s may assume forward read-ahead,
 giving you more cache hits.  There are a few different ways to do this,
 but imagine:

I've been thinking more about this, and there is an even better way
of doing it if you are willing to beat on your data a bit.  It
involves the use of sequences.  Let's revisit your id/timestamp query
combination for a message board.  The assumption is that you are using
integer keys for all tables.  You probably have something like:

create table messages
(
    user_id       int4 references users,
    topic_id      int4 references topics,
    message_id    serial,
    message_time  timestamp default now(),
[...]
);

The following suggestion works on two principles: one is that integers
are quicker than timestamps for ordering, and the other is that sequences
have a built-in ability to run in reverse order.

Let's define:
create sequence message_seq increment -1 start 2147483647 minvalue 0
maxvalue 2147483647;

now we define our table:
create table messages
(
    user_id       int4 references users,
    topic_id      int4 references topics,
    message_id    int4 default nextval('message_seq') primary key,
    message_time  timestamp default now(),
[...]
);

create index user_message_idx on messages(user_id, message_id);
-- optional
cluster user_message_idx on messages;

Since the sequence is in descending order, we don't have to do any
tricks to logically reverse order the table.

-- return last k posts made by user u in descending order;

select * from messages where user_id = u order by user_id, message_id
limit k;

-- return last k posts on a topic
create index topic_message_idx on messages(topic_id, message_id);
select * from messages where topic_id = t order by topic_id, message_id
limit k;

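As a quick sanity check (a sketch; values are illustrative), EXPLAIN ANALYZE
should show an index scan on topic_message_idx with no separate sort step,
confirming the index serves both the WHERE clause and the ORDER BY:

explain analyze
select * from messages where topic_id = 1
order by topic_id, message_id limit 10;
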
a side benefit of clustering is that there is little penalty for
increasing k, because of read-ahead optimization, whereas in normal
scenarios your read time scales with k (forcing small values for k).  If
we tended to pull up messages by topic more frequently than by user, we
would cluster on topic_message_idx instead (if we couldn't decide, we
might cluster on message_id, or not at all).

The crucial point is that we are making this one index run really fast
at the expense of other operations.  The other major point is that we can
use a sequence in place of a timestamp for ordering.  Using int4 vs.
timestamp is a minor efficiency win; if you are worried about > 4B rows,
then stick with timestamp.

This all boils down to a central unifying principle: organize your
indices around your expected access patterns to the data.  Sorry if I'm
bleating on and on about this...I just think there is plenty of
optimization room left in there :)

Merlin

