Dan Harris wrote:
On Aug 9, 2005, at 3:51 PM, John A Meinel wrote:
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0. [...]

On Aug 9, 2005, at 3:51 PM, John A Meinel wrote:
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated incorrectly. [...]

Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated incorrectly. There are two RAID enclosures with [...]

On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated incorrectly. There are two RAID enclosures with [...]

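To make Steve's distinction concrete, here is the usual back-of-the-envelope sketch, assuming 14 disks (the two 7-drive enclosures Dan describes later in the thread) arranged either as seven mirrored pairs striped together (RAID 10) or as two seven-disk stripes mirrored against each other (RAID 0+1); everything else is illustrative. Given one failed disk, the chance that a second, random disk failure takes the whole array down is:

  P_{\mathrm{RAID\,10}} = \frac{1}{2n-1} = \frac{1}{13} \approx 7.7\%,
  \qquad
  P_{\mathrm{RAID\,0+1}} = \frac{n}{2n-1} = \frac{7}{13} \approx 54\%

With RAID 10 only the failed disk's mirror partner is fatal; with RAID 0+1 the first failure already takes one whole stripe offline, so any further loss in the surviving stripe kills the array. That surviving stripe is the single point of failure Steve is pointing at.
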
On Aug 9, 2005, at 1:08 PM, Tom Lane wrote:
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
My experience is that when this type of thing happens it is typically
specific queries that cause the problem. If you turn on statement
logging you can get the exact queries and debug from there.
Here are some things to look for: [...]

"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> My experience is that when this type of thing happens it is typically
> specific queries that cause the problem. If you turn on statement
> logging you can get the exact queries and debug from there.
> Here are some things to look for:
> Is it a l
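For reference, a minimal postgresql.conf sketch of the statement logging Joshua is suggesting; the parameter names are the 7.4-era ones and the threshold value is purely illustrative:

  log_statement = true                 # 7.4: boolean; 8.0 and later use 'none' | 'ddl' | 'mod' | 'all'
  log_min_duration_statement = 1000    # also log any statement running longer than 1000 ms; -1 disables
  log_duration = false                 # set true to log the elapsed time of every completed statement

These settings take effect on a configuration reload rather than a full restart, so they are cheap to toggle while hunting for the problem queries.
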
Bingo, the smaller the sort_mem, the faster that query is.
Thanks a lot to everybody that helped. I'll tweak these values some
more when I get a chance, now that I have some guidelines that make
sense.
Rhett
On 8/9/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Rhett Garber <[EMAIL PROTECTED]> writes: [...]

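Rhett's result fits how sort_mem works: it is allocated per sort or hash step, per backend, so an oversized global value is rarely what you want. A small sketch of the usual alternative, raising it only for the session that needs it (the value shown is arbitrary):

  SET sort_mem = 8192;    -- 8 MB for this session only (units are KB in 7.4; renamed work_mem in 8.0)
  -- run the problem query here and compare timings
  RESET sort_mem;         -- fall back to the postgresql.conf default
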
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0.
How many users are connected when your UPDATE / DELETE queries are
hanging? Have you done an analyze verbose on those queries?
Hav [...]

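Whether Steve means ANALYZE VERBOSE or EXPLAIN ANALYZE here, one practical note on profiling the hanging statements: EXPLAIN ANALYZE actually executes the statement, so for an UPDATE or DELETE it is worth wrapping the test in a transaction and rolling back. A sketch with an invented table and predicate:

  BEGIN;
  EXPLAIN ANALYZE
      UPDATE events SET processed = true WHERE event_date < '2005-08-01';  -- hypothetical table/columns
  ROLLBACK;  -- the plan and timings are still reported, but the data is left untouched
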
Rhett Garber <[EMAIL PROTECTED]> writes:
> Well, that could be an issue. Is this abnormally large:
> #shared_buffers = 1536 # min 16, at least max_connections*2, 8KB each
> shared_buffers = 206440
> #sort_mem = 131072 # min 64, size in KB
> sort_mem = 524288 # min 64, size in KB [...]

On Tue, Aug 09, 2005 at 12:04:11PM -0600, Dan Harris wrote:
> These changes have definitely improved performance, but I am now
> finding some trouble with UPDATE or DELETE queries "hanging" and
> never releasing their locks. As this happens, other statements queue
> up behind it.
Have you e [...]

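A direct way to see what is blocking when those statements hang is to look for ungranted locks. A hedged sketch against the 7.4-era catalogs (pg_stat_activity used procpid and current_query then; newer releases call them pid and query):

  SELECT l.pid, l.mode, l.granted, a.usename, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
   WHERE NOT l.granted;

A waiting backend shows up here with granted = f, and the lock's relation and transaction columns point at what it is waiting for. Note that current_query is only populated when stats_command_string is enabled in postgresql.conf.
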
Well, that could be an issue. Is this abnormally large:
#shared_buffers = 1536 # min 16, at least max_connections*2, 8KB each
shared_buffers = 206440
#sort_mem = 131072 # min 64, size in KB
sort_mem = 524288 # min 64, size in KB
vacuum_mem = 131072 # [...]

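For context, the arithmetic behind the "abnormally large" question, as a sketch using only the numbers quoted above:

  # shared_buffers = 206440 buffers x 8 KB  =~ 1.6 GB of shared memory, reserved up front
  # sort_mem       = 524288 KB              =  512 MB per sort/hash step, per backend
  # A handful of concurrent queries that each perform one big sort can therefore ask for
  # several GB on top of shared_buffers, which is an easy way to push the box into swap.
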
Rhett Garber <[EMAIL PROTECTED]> writes:
> They are both running SuSE 8, 2.4.21-128-smp kernel
> Compile instructions (I didn't do it myself) indicate we built from
> source with nothing fancy:
You could double-check the configure options by running pg_config.
But probably the more interesting qu [...]

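Besides pg_config, the server itself will report how it was built; version() includes the platform and compiler string, which covers the CPU/compiler part of Tom's question. A small sketch (the example output wording varies by build):

  SELECT version();
  -- e.g. "PostgreSQL 7.4.8 on i686-pc-linux-gnu, compiled by GCC 3.3.3"
  -- From the shell, `pg_config --configure` prints the original configure options.
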
Could these locking issues be related to the other changes I made? I'm
really scared that this is related to choosing XFS, but I sure hope
not. How should I go about troubleshooting the "problem" queries?
They don't seem to be specific to a single table or single database.
My experience is that when this type of thing happens it is typically
specific queries that cause the problem. [...]

> I'm now thinking you've got either a platform- or compiler-specific
> problem. Exactly what is the hardware (the CPU, not the disks)? How did
> you build or come by the Postgres executables (compiler, configure
> options, etc)?
I've tried it on two of our machines, both HP Proliant DL580:
Produ [...]

I thought I would send this to pg-performance since so many people
helped me with my speed issues recently. I was definitely IO-bottlenecked.
Since then, I have installed 2 RAID arrays with 7 15k drives in them
in RAID 0+1, as well as added a new controller card with 512MB of cache
on it.

Rhett Garber <[EMAIL PROTECTED]> writes:
> Duplicated your setup in a separate DB.
> At least it's reproducible for me.
Hmm. Well, we now have several data points, but they seem to be on
wildly varying hardware. To try to normalize the results a little,
I computed the total actual time for the [...]

Duplicated your setup in a separate DB.
At least it's reproducible for me.
I tested this on a Xeon 2 GHz with 1 GB of RAM. It's running on some
shared storage array whose details I'm not sure of.
My production example also shows up on our production machine that is
almost the same hardware but ha [...]

Yes, sorry, two totally different machines. The 7.4.8 run was on a dual
P4 3.2GHz, and the 7.4.2 run was on a dual hyperthreaded Xeon 2.4GHz.
--Ian
On Tue, 2005-08-09 at 10:33, Tom Lane wrote:
> Ian Westmacott <[EMAIL PROTECTED]> writes:
> > On Mon, 2005-08-08 at 20:58, Tom Lane wrote: [...]

Ian Westmacott <[EMAIL PROTECTED]> writes:
> On Mon, 2005-08-08 at 20:58, Tom Lane wrote:
>> I'd be interested to see results from other people using 7.4.* too.
> 7.4.8:
> Total runtime: 0.198 ms
> 7.4.2:
> Total runtime: 0.697 ms
Just to be clear: those are two different machines of different [...]

On Mon, 2005-08-08 at 20:58, Tom Lane wrote:
> I'd be interested to see results from other people using 7.4.* too.
7.4.8:
QUERY PLAN [...]