Hi :)
[pg_dump from a Slony replica]
That's not the problem. The problem is that when you restore the
dump of the slave, you'll have garbage. Slony fools with the
catalogs on the replicas.
(And you might as well throw away the
dumpfiles from the replicas that you have. They won't
On Thu, 26 Oct 2006 18:09:37 -0400
Andrew Sullivan [EMAIL PROTECTED] wrote:
On Thu, Oct 26, 2006 at 09:35:56PM +0100, Gavin Hamill wrote:
I'm absolutely certain. The backups run from only one slave, given
that it is a full copy of node 1. Our overnight traffic has not
increased any
On Fri, 27 Oct 2006 14:07:43 -0400
Tom Lane [EMAIL PROTECTED] wrote:
So the time is all in index vacuuming, eh? I think what's happening
is that the physical order of the index is degrading over time, and
so the vacuum scan takes longer due to more seeking. Can you afford
to do a REINDEX?
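Tom's suggestion amounts to rebuilding the physically fragmented indexes. A minimal sketch, assuming a hypothetical index name on the Allocation table shown later in the thread (note that on 8.1, REINDEX takes an exclusive lock on the table, so it belongs in a quiet maintenance window):

```sql
-- Rebuild one bloated index (index name is hypothetical):
REINDEX INDEX allocation_date_idx;

-- Or rebuild every index on the table in one pass:
REINDEX TABLE Allocation;
```

After the rebuild, the index pages are laid out in order again, so the VACUUM index scan does far less seeking.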
Hullo, here's one of those dreadful touchy-feely hand-waving problems.
Our 5-node 8.1.3 Slony system has just started taking /much/ longer to
VACUUM ANALYZE..
The data set has not increased more than usual (nightly backups stand
at 1.3GB, growing by 10MB per day), and no configuration has changed
On Thu, 26 Oct 2006 10:47:21 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
Nodes 2 and 3 take only the tables necessary to run our search (10
out of the full 130) and are much lighter (only 7GB on disk cf.
30GB for the full master), yet the nightly VACUUM
On Thu, 26 Oct 2006 14:17:29 -0500
Jim C. Nasby [EMAIL PROTECTED] wrote:
Are you sure that there's nothing else happening on the machine that
could affect the vacuum times? Like, say a backup? Or perhaps updates
coming in from Slony that didn't used to be there?
I'm absolutely certain. The
On Tue, 13 Jun 2006 14:28:49 -0500
Scott Marlowe [EMAIL PROTECTED] wrote:
Search the performance archives for the last 4 or 5 months for PPC /
pseries machines.
You'll find a very long thread about the disappointing performance the
tester got with a rather expensive P Series machine. And
On Tue, 25 Apr 2006 14:14:35 -0400
Bill Moran [EMAIL PROTECTED] wrote:
Does anyone in the PostgreSQL community have any experience with
large caches or dual-core pentiums that could make any
recommendations?
Heh :) You're in the position I was in about a year ago - we naturally
replaced our
Hi again :)
This is a follow-up to the mega thread which made a Friday night more
interesting [1] - the summary is various people thought there was some
issue with shared memory access on AIX.
I then installed Debian (kernel 2.6.11) on the 8-CPU p650 (native - no
LPAR) and saw just as
Tom Lane wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
If I replace the
(allocation0_.Date between '2006-06-09 00:00:00.00' and
'2006-06-09 00:00:00.00')
with
allocation0_.Date ='2006-04-09 00:00:00.00'
then the query comes back in a few milliseconds (as I'd expect
On Tue, 18 Apr 2006 13:31:48 -0400
Tom Lane [EMAIL PROTECTED] wrote:
There should be a fix for this by the time PG 8.2 comes out, but in
the meantime you might find that it helps to write the range check in
a way that doesn't have identical bounds, eg
date = '2006-06-10'::date AND date
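Tom's workaround, spelled out against the query from this thread: replace the identical-bounds BETWEEN (which trips up the 8.1 planner's range-selectivity estimate) with a half-open range whose bounds differ. A sketch, using the column names from the EXPLAIN output below:

```sql
-- Planner-hostile on 8.1: both bounds identical
--   WHERE allocation0_.Date BETWEEN '2006-06-09' AND '2006-06-09'
-- Equivalent half-open range with distinct bounds:
SELECT allocation0_.ID, allocation0_.RoomID, allocation0_.StatusID,
       allocation0_.Price, allocation0_.Number, allocation0_.Date
FROM Allocation allocation0_
WHERE allocation0_.Date >= '2006-06-09'::date
  AND allocation0_.Date <  '2006-06-10'::date;
```

The half-open form also generalizes cleanly to multi-day ranges, which plain equality does not.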
On Tue, 18 Apr 2006 15:51:44 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Fair enough, no reason to replace one workaround with another. But
would you try it on your test case, just to verify the diagnosis?
Yup I can confirm it from testing earlier today - as soon as
the two dates are non-equal,
laterooms=# explain analyze select allocation0_.ID as y1_,
allocation0_.RoomID as y2_, allocation0_.StatusID as y4_,
allocation0_.Price as y3_, allocation0_.Number as y5_,
allocation0_.Date as y6_ from Allocation allocation0_ where
(allocation0_.Date between '2006-06-09 00:00:00.00' and
chris smith wrote:
1.6secs isn't too bad on 4.3mill rows...
How many entries are there for that date range?
1.7 secs /is/ good - it typically takes 5 or 6 seconds, which isn't so
good. My question is 'why does the planner choose such a bizarre range
request when both elements of the
Tom Lane wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
would a simple #define LWLOCK_PADDED_SIZE 128 be sufficient?
Yeah, that's fine.
OK I tried that but noticed no real improvement... in the interim I've
installed Debian on the pSeries (using
http://debian.gonicus.de
Simon Riggs wrote:
pSeries cache lines are 128 bytes wide, so I'd go straight to 128.
Hello :)
OK, that line of code is:
#define LWLOCK_PADDED_SIZE (sizeof(LWLock) <= 16 ? 16 : 32)
What should I change this to? I don't understand the syntax of the <= 16
? : stuff...
would a simple
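For reference, the change Simon and Tom converged on is a one-line edit to the padding macro in the 8.1 sources (src/include/storage/lwlock.c area): the ternary picks 16 or 32 bytes depending on sizeof(LWLock), and replacing the whole expression with a constant pads each lock to a full POWER4 cache line to avoid false sharing between adjacent locks. A sketch of the edited macro:

```c
/* Original: pad each LWLock to 16 or 32 bytes, whichever fits.
 * #define LWLOCK_PADDED_SIZE (sizeof(LWLock) <= 16 ? 16 : 32)
 *
 * pSeries (POWER4) cache lines are 128 bytes wide, so pad each
 * LWLock to a whole cache line instead: */
#define LWLOCK_PADDED_SIZE 128
```

As Gavin reports just below, on this workload the wider padding made no measurable difference.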
Bing-bong, passenger announcement.. the panic train is now pulling into
platform 8.1.3. Bing-bong. =)
OK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650
(8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot
up and our website is next to unusable. The IBM is not
On Fri, 07 Apr 2006 14:41:39 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
OK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650
(8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot
up and our website is next to unusable. The IBM
On Fri, 07 Apr 2006 13:54:21 -0500
Scott Marlowe [EMAIL PROTECTED] wrote:
Are the same queries getting the same basic execution plan on both
boxes? Turn on logging for slow queries, and explain analyze them on
both machines to see if they are.
See reply to Tom Lane :)
I'd put the old 4 way
On Fri, 7 Apr 2006 16:16:02 -0400
D'Arcy J.M. Cain darcy@druid.net wrote:
We also had problems with a high end AIX system and we got no help
from IBM. They expected you to put Oracle on and if you used
anything else you were on your own.
Urk, I thought IBM were supposedly Linux sycophants
On Fri, 07 Apr 2006 15:24:18 -0500
Scott Marlowe [EMAIL PROTECTED] wrote:
See reply to Tom Lane :)
I didn't see one go by yet... Could be sitting in the queue.
If it's not arrived by now - EXPLAIN ANALYZE doesn't tell me
anything :)
Let us know if changing the fsync setting helps.
On Fri, 07 Apr 2006 16:06:02 -0400
Tom Lane [EMAIL PROTECTED] wrote:
The pSeries isn't much older than our Xeon machine, and I expected
the performance level to be exemplary out of the box..
I'm fairly surprised too. One thing I note from your comparison of
settings is that the default
On Fri, 07 Apr 2006 17:56:49 -0400
Tom Lane [EMAIL PROTECTED] wrote:
This is not good. Did the semop storms coincide with visible
slowdown? (I'd assume so, but you didn't actually say...)
If I'd been able to tell, then I'd tell you =) I'll have another go...
Yes, there's a definite
On Fri, 07 Apr 2006 18:52:20 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Are you in a position to try your workload using PG CVS tip? There's
a nontrivial possibility that we've already fixed this --- a couple
months ago I did some work to reduce contention in the lock manager:
Well, there's a
On Fri, 07 Apr 2006 15:56:52 -0700
Luke Lonergan [EMAIL PROTECTED] wrote:
Seems like you're hitting a very small target in RAM with these semop
calls. I wonder what part of the code is doing this - Tom would know
better how to trace it, but the equivalent of oprofile output would
be nice.
Hullo, I have pg 8.1.3 on an 8-CPU AIX 5.3 box with 16GB of RAM, and I'm
finding that it's taking an age to CREATE INDEX on a large table:
Column | Type | Modifiers
Tom Lane wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
The table has just under six million rows - should it really be taking
nearly six minutes to add an index?
Try running it with trace_sort enabled to get more info about where the
time is going.
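A minimal sketch of Tom's trace_sort suggestion (trace_sort is a developer option, available when the server was built with TRACE_SORT enabled, which is the default; the index name here is hypothetical):

```sql
-- Emit sort progress (method, memory vs. disk, elapsed time) as LOG lines:
SET trace_sort = on;
SET client_min_messages = log;  -- so the traces show up in the psql session

CREATE INDEX allocation_date_idx ON Allocation (Date);
```

The trace output shows whether the sort ran in memory or spilled to disk, which in turn tells you whether raising maintenance_work_mem would help the CREATE INDEX time.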
We've been doing some considerable