On Wed, 2007-05-16 at 18:47 -0500, Don Barthel wrote:
> My mass update was comprised of all individual, one record, updates
> per transaction.

If each transaction only modifies one record, mass updates aren't the
problem.

> SELECT relname, reltuples, relpages FROM pg_class ORDER BY relpages DESC ;
>              relname             |  reltuples  | relpages
> ---------------------------------+-------------+----------
>  indx_tsearch2a                  |       72559 |    66324
>  indx_tsearch2c                  |       40761 |    44640
>  used_ad                         |      335542 |    38646
> 

First, reltuples and relpages are only estimates; they aren't refreshed
unless you run ANALYZE (or VACUUM). Second, those tsearch2 indexes
aren't the indexes Slony uses. The most important index is the one
Slony applies updates through, which is the primary key by default
(and certainly not those tsearch2 indexes).
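To make sure the estimates are current before comparing sizes, refresh
them first:

```sql
-- Refresh the planner statistics for this table, then re-check:
ANALYZE used_ad;

SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'used_ad';
```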

See if the primary key index of the table fits into shared_buffers. If
the machine is also doing constant heavy reporting, that index may get
pushed out of the cache anyway, which could still be your problem even
though the primary key is smaller than the other indexes.
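One way to eyeball this: relpages and shared_buffers are both counted
in 8 kB pages by default, so you can put them side by side. The index
name used_ad_pkey below is a guess -- substitute the real primary key
index name from \d used_ad:

```sql
SHOW shared_buffers;

-- Hypothetical index name; check \d used_ad for the real one:
SELECT relname, relpages,
       pg_size_pretty(relpages * 8192::bigint) AS approx_size
FROM pg_class
WHERE relname = 'used_ad_pkey';
```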

However, if used_ad is only ~335K tuples, that doesn't quite make
sense. I just checked one of my tables with 30M tuples, and its primary
key index is only 86K pages. Scaling that down, I would guess that your
primary key index is less than 1000 pages (under 8 MB at 8 kB per
page). Is that about right?

If that's true and Slony is working constantly, that index would
probably stay mostly in memory.

Are there other things happening on the subscriber that might be taxing
the I/O system too much? If those tsearch indexes are being used
heavily, that might be responsible.
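You can check whether those tsearch2 indexes are actually getting
scanned from the statistics views (assuming the stats collector is
enabled on that server):

```sql
-- Scan counts per index on the table; heavily-used indexes
-- will show large idx_scan values.
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'used_ad'
ORDER BY idx_scan DESC;
```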

Also, just look at some system stats. Is the bottleneck disk? If so,
what is the disk doing? Is the most intensive process the postgres
backend that Slony is connected to, or is it something else?
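To see what each backend (including Slony's) is running at any given
moment, sketched here for an 8.x server -- current_query is only
populated if stats_command_string is enabled:

```sql
-- Column names as in PostgreSQL 8.x (procpid, current_query).
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start;
```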

> *** Is shared_buffers comparable to 'relpages'? If so, is it practical

Yes. Both are counted in 8 kB pages, assuming the default block size
(relpages is pages on disk; shared_buffers is the number of buffer
pages).

> to bump shared_buffers to 325,000? That's just over 2.5GB by my
> calculation - I have 3GB on the machine.
> 

I don't recommend making a jump like that. If you give shared_buffers
more than 50% of physical memory, you should really benchmark to
confirm that it's actually helping you.

The extra memory is not going to waste -- the OS is caching disk blocks
too, so there's a good chance many reads never have to hit the disk at
all.

> > Try to hunt down which events are taking a long time to SYNC.
> 
> Please, what would allow me to measure that?
> 

If they are all one-record updates or similar, it's probably not any
one SYNC, just the total quantity. I was on the lookout for any "DELETE
FROM mytable;" or "UPDATE mytable SET foo=foo+1;" style queries that
would really take a long time to SYNC. It doesn't look like that's your
problem, though.
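That said, if you do want to measure per-event lag, Slony's own tables
can give a rough picture. The cluster name "mycluster" below is a
placeholder -- Slony keeps its tables in a schema named _<clustername>,
so substitute yours:

```sql
-- Hypothetical cluster name; schema is _<your cluster name>.
-- Time between an event being generated and being confirmed:
SELECT e.ev_seqno, e.ev_timestamp,
       c.con_timestamp - e.ev_timestamp AS lag
FROM _mycluster.sl_event e
JOIN _mycluster.sl_confirm c
  ON c.con_origin = e.ev_origin AND c.con_seqno = e.ev_seqno
ORDER BY e.ev_seqno DESC
LIMIT 20;
```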

Regards,
        Jeff Davis

_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
