Yes, sorry, two totally different machines. The 7.4.8
run was on a dual P4 3.2GHz, and the 7.4.2 run was on
a dual hyperthreaded Xeon 2.4GHz.
--Ian
On Tue, 2005-08-09 at 10:33, Tom Lane wrote:
Ian Westmacott [EMAIL PROTECTED] writes:
On Mon, 2005-08-08 at 20:58, Tom Lane wrote:
Duplicated your setup in a separate DB.
At least it's reproducible for me.
I tested this on a Xeon 2 GHz with 1 GB RAM. It's running on a shared
storage array that I don't know the details of.
My production example also shows up on our production machine, which is
almost the same hardware but
Rhett Garber [EMAIL PROTECTED] writes:
Duplicated your setup in a separate DB.
At least it's reproducible for me.
Hmm. Well, we now have several data points, but they seem to be on
wildly varying hardware. To try to normalize the results a little,
I computed the total actual time for the
I'm now thinking you've got either a platform- or compiler-specific
problem. Exactly what is the hardware (the CPU, not the disks)? How did
you build or come by the Postgres executables (compiler, configure
options, etc.)?
I've tried it on two of our machines, both HP Proliant DL580:
Could these locking issues be related to the other changes I made? I'm
really scared that this is related to choosing XFS, but I sure hope
not. How should I go about troubleshooting the problem queries?
They don't seem to be specific to a single table or single database.
My
Rhett Garber [EMAIL PROTECTED] writes:
They are both running SuSE 8, 2.4.21-128-smp kernel
Compile instructions (I didn't do it myself) indicate we built from
source with nothing fancy:
You could double-check the configure options by running pg_config.
But probably the more interesting
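As a command-line sketch (this assumes pg_config from the same installation is on the PATH), checking the build configuration might look like:

```
# Show the configure flags the server binaries were built with
pg_config --configure
# And the exact version string
pg_config --version
```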
Well, that could be an issue. Is this abnormally large:
#shared_buffers = 1536 # min 16, at least max_connections*2, 8KB each
shared_buffers = 206440
#sort_mem = 131072 # min 64, size in KB
sort_mem = 524288 # min 64, size in KB
vacuum_mem = 131072 #
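For context on the units: shared_buffers is counted in 8 KB pages, so 206440 works out to roughly 1.6 GB of shared buffers, which would exceed physical RAM if this is the 1 GB machine; sort_mem is KB *per sort operation*, so 524288 means up to 512 MB per concurrent sort. A hedged sketch of more conservative 7.4-era values (the exact numbers are an assumption, not from the thread):

```
# Hypothetical, more conservative settings for a ~1 GB machine.
# Workload-dependent; treat these as a starting point, not a recommendation.
shared_buffers = 16384      # 8 KB pages -> ~128 MB
sort_mem = 8192             # KB per sort operation, not per server
vacuum_mem = 32768          # KB used by VACUUM
```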
On Tue, Aug 09, 2005 at 12:04:11PM -0600, Dan Harris wrote:
These changes have definitely improved performance, but I am now
finding some trouble with UPDATE or DELETE queries hanging and
never releasing their locks. As this happens, other statements queue
up behind them.
Have you
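One way to see who is holding the locks that the hanging statements are waiting on is to query the lock view directly. A sketch against the 7.4-era catalogs (column names such as procpid and current_query changed in later releases, and current_query requires stats_command_string to be enabled):

```sql
-- Lock requests (granted and ungranted) with the query each backend is running.
SELECT l.pid,
       l.relation::regclass AS locked_table,
       l.mode,
       l.granted,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;
```

Rows with granted = false are the queued statements; other rows on the same relation show who is blocking them.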
Dan,
Do you mean you did RAID 1+0 (RAID 10) or RAID 0+1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID 1 is on top of RAID 0.
How many users are connected when your UPDATE/DELETE queries are
hanging? Have you done an ANALYZE VERBOSE on those queries?
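The distinction matters for fault tolerance. As an illustration (plain Python, not from the thread), counting which two-disk failures are fatal on a four-disk array under each layout:

```python
from itertools import combinations

DISKS = [0, 1, 2, 3]

def raid10_survives(failed):
    # RAID 1+0: stripe across mirror pairs (0,1) and (2,3).
    # The array dies only if BOTH disks of some mirror pair fail.
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    # RAID 0+1: mirror of two stripes {0,1} and {2,3}.
    # Any failed disk kills its whole stripe; one intact stripe must remain.
    return {0, 1}.isdisjoint(failed) or {2, 3}.isdisjoint(failed)

fatal10 = sum(not raid10_survives(set(p)) for p in combinations(DISKS, 2))
fatal01 = sum(not raid01_survives(set(p)) for p in combinations(DISKS, 2))
print(fatal10, fatal01)  # prints: 2 4
```

Of the six possible two-disk failures, only 2 are fatal for RAID 10 versus 4 for RAID 0+1, which is why the layering order is worth clarifying.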
Bingo: the smaller the sort_mem, the faster that query is.
Thanks a lot to everybody who helped. I'll tweak these values
more when I get a chance, now that I have some guidelines that make
sense.
Rhett
On 8/9/05, Tom Lane [EMAIL PROTECTED] wrote:
Rhett Garber [EMAIL PROTECTED] writes:
Joshua D. Drake [EMAIL PROTECTED] writes:
My experience is that when this type of thing happens it is typically
specific queries that cause the problem. If you turn on statement
logging you can get the exact queries and debug from there.
Here are some things to look for:
Is it a large
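For 7.4, the relevant postgresql.conf knobs would be something like the following (log_statement was a plain boolean in that era, and log_min_duration_statement, in milliseconds, was new in 7.4; the 1000 ms threshold is only an example):

```
# Log every statement (7.4-era boolean form; later releases use none/ddl/mod/all).
log_statement = true
# Log only statements that run longer than 1000 ms.
log_min_duration_statement = 1000
```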
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single-point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated it incorrectly. There are two RAID
enclosures