Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.

2006-05-04 Thread Andrew Sullivan
On Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:
 I've come to the conclusion I need to simply start tracking all transactions
 and determining a cost/performance for the larger and frequently updated
 tables without the benefit and penalty of pg_statio.

I'll bet it won't help you.  If you can't get off 7.4 on a busy
machine, you're going to get hosed by I/O sometimes no matter what. 
My suggestion is to write a bunch of rule-of-thumb rules for your
cron jobs, and start planning your upgrade.
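A rule-of-thumb selector of that sort might look like the following sketch (Python; the stats input, the key names, and the 20% threshold are all made-up placeholders — on 7.4 you would feed it whatever per-table numbers your own tracking collects, e.g. from pg_stat_all_tables):

```python
def pick_tables_to_vacuum(stats, dead_fraction=0.2):
    """Return table names whose estimated dead-row fraction exceeds
    the threshold, worst offenders first.  `stats` is a list of dicts
    with hypothetical keys: name, live_rows, dead_rows_est."""
    candidates = []
    for t in stats:
        total = t["live_rows"] + t["dead_rows_est"]
        if total == 0:
            continue
        frac = t["dead_rows_est"] / total
        if frac >= dead_fraction:
            candidates.append((frac, t["name"]))
    return [name for _, name in sorted(candidates, reverse=True)]

# Example: only 'orders' crosses the 20% threshold.
stats = [
    {"name": "orders",    "live_rows": 80000, "dead_rows_est": 40000},
    {"name": "customers", "live_rows": 99000, "dead_rows_est": 1000},
]
print(pick_tables_to_vacuum(stats))  # ['orders']
```

A cron job would then run vacuumdb only against the returned tables instead of the whole cluster.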

Jan back-patched the vacuum stuff to 7.4 for us (Afilias), and we
tried playing with it; but it didn't really make the difference we'd
hoped.

The reason for this is that 7.4 also doesn't have the bgwriter, so
you're still faced with I/O storms no matter what you do.  If I were
in your shoes, I wouldn't waste a lot of time trying to emulate
the newer features on 7.4.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
In the future this spectacle of the middle classes shocking the avant-
garde will probably become the textbook definition of Postmodernism. 
--Brad Holland

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.

2006-05-04 Thread Chris Mckenzie
Hey, thanks for the advice.


Sticking with 7.4 isn't my call. There's a lot wrapped up in common usage of Postgres 7.4 and I could never rally everyone into moving forward (at least not this year).

I've yet to prove (due to my current lack of statistical evidence) that our usage of 7.4 results in frequent vacuums impacting access (it gets more difficult to speculate when considering a large Slony cluster). I'm hoping to gather some times and numbers on an internal dogfood of our product shortly.

Any advice on tracking vacuum performance and impact? I was thinking of just timing the vacuumdb calls with time(1) and turning on verbose for per-table/index stats. Do you think that's enough info?

Once I vacuum I won't be able to re-test any fragmentation that the vacuum cleaned up, so it's all or nothing for this test.
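Timing the runs and scraping the VERBOSE output can be scripted; a rough sketch (Python — the regex is an assumption about the "removed N ... in N pages" INFO lines, so check it against the wording your server version actually emits before trusting the totals):

```python
import re
import subprocess
import time

# Hypothetical pattern -- adjust to your server's actual VACUUM
# VERBOSE wording; it feeds on lines like
#   INFO:  removed 120 row versions in 7 pages
REMOVED = re.compile(r"[Rr]emoved (\d+) .*? in (\d+) pages")

def summarize_verbose(log_text):
    """Sum (tuples_removed, pages_touched) across all matching lines."""
    tuples = pages = 0
    for m in REMOVED.finditer(log_text):
        tuples += int(m.group(1))
        pages += int(m.group(2))
    return tuples, pages

def timed_vacuum(dbname):
    """Run vacuumdb --verbose and report wall-clock time plus totals.
    VERBOSE output arrives on stderr."""
    start = time.time()
    out = subprocess.run(
        ["vacuumdb", "--verbose", "--dbname", dbname],
        capture_output=True, text=True,
    )
    return time.time() - start, summarize_verbose(out.stderr)
```

Logging the tuple/page totals per run, per table, gives the before/after trend even though any single vacuum destroys the fragmentation it measured.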

Thanks again.


- Chris







Re: [PERFORM] Performance Issues on Opteron Dual Core

2006-05-04 Thread Jim C. Nasby
All the machines I've been able to replicate this on have been SMP w2k3
machines running SP1. I've been unable to replicate it on anything not
running w2k3, but the only 'SMP' machine I've tested in that manner was
an Intel with HT enabled. I now have an intel with HT and running w2k3
sitting in my office, but I haven't had a chance to fire it up and try
it yet. Once I test that machine it should help narrow down if this
problem exists with HT machines (which someone on -hackers mentioned
they had access to and could do testing with). If it does affect HT
machines then I suspect that this is not an issue for XP...

On Tue, May 02, 2006 at 11:27:02PM -0500, Gregory Stewart wrote:
 Jim,
 
 Have you seen this happening only on W2k3? I am wondering if I should try
 out 2000 Pro or XP Pro.
 Not my first choice, but if it works...
 
 
 
 -Original Message-
 From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, May 02, 2006 3:29 PM
 To: Mark Kirkwood
 Cc: Gregory Stewart; pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
 
 
 On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
  Pgadmin can give misleading times for queries that return large result
  sets over a network, due to:
 
  1/ It takes time to format the (large) result set for display.
  2/ It has to count the time spent waiting for the (large) result set to
  travel across the network.
 
  You aren't running Pgadmin off the dev server are you? If not check your
  network link to dev and prod  - is one faster than the other? (etc).
 
  To eliminate Pgadmin and the network as factors try wrapping your query
  in a 'SELECT count(*) FROM (your query here) AS a', and see if it
  changes anything!
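That wrapping can be applied mechanically; a tiny helper (Python, illustrative only — `execute` is a stand-in for whatever client call you use, not a real API):

```python
import time

def strip_network_cost(query):
    """Wrap a query so only one row crosses the wire: the server
    still does all the work, but result-set transfer and client-side
    rendering drop out of the measurement."""
    return f"SELECT count(*) FROM ({query.rstrip(';')}) AS a"

def time_query(execute, query):
    """`execute` is a placeholder for your client's query call."""
    start = time.perf_counter()
    execute(strip_network_cost(query))
    return time.perf_counter() - start

print(strip_network_cost("SELECT * FROM orders WHERE total > 100;"))
# SELECT count(*) FROM (SELECT * FROM orders WHERE total > 100) AS a
```

Comparing `time_query` on dev versus prod then isolates server-side execution from Pgadmin and the network link.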
 
 FWIW, I've found problems running PostgreSQL on Windows in a multi-CPU
 environment on w2k3. It runs fine for some period, and then CPU and
 throughput drop to zero. So far I've been unable to track down any more
 information than that, other than the fact that I haven't been able to
 reproduce this on any single-CPU machines.
 --
 Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
 Pervasive Software      http://pervasive.com    work: 512-231-6117
 vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
 
 
 
 
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Performance Issues on Opteron Dual Core

2006-05-04 Thread Jim C. Nasby
On Wed, May 03, 2006 at 09:29:15AM +0200, Magnus Hagander wrote:
    FWIW, I've found problems running PostgreSQL on Windows in a
    multi-CPU environment on w2k3. It runs fine for some period, and
    then CPU and throughput drop to zero. So far I've been unable to
    track down any more information than that, other than the fact that
    I haven't been able to reproduce this on any single-CPU machines.
   
   I have had previous correspondence about this with Magnus (search
   -general and -hackers). If you uninstall SP1 the problem goes away.
   We played a bit with potential fixes but didn't find any.
  
  Interesting; does SP2 fix the problem? Anything we can do over here
  to help?
 
 There is no SP2 for Windows 2003.
 
 Have you tried this with latest-and-greatest CVS HEAD? Meaning with the
 new semaphore code that was committed a couple of days ago?

I'd be happy to test this if someone could provide a build, or if
there's instructions somewhere for doing such a build...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Why so slow?

2006-05-04 Thread Jim C. Nasby
On Wed, May 03, 2006 at 07:22:21AM -0400, Michael Stone wrote:
 On Tue, May 02, 2006 at 07:28:34PM -0400, Bill Moran wrote:
 Reindexing is in a different class than vacuuming.
 
 Kinda, but it is in the same class as vacuum full. If vacuum neglect (or 
 dramatic change in usage) has gotten you to the point of 10G of overhead 
 on a 2G table you can get a dramatic speedup if you vacuum full, by 
 dumping a lot of unused space. But in that case you might have a similar 
s/might/will/
 amount of overhead in indices, which isn't going to go away unless you 
 reindex. In either case the unused rows will be reused as needed, but if 
 you know you aren't going to need the space again anytime soon you might 
 need to vacuum full/reindex.

You can also do a CLUSTER on the table, which rewrites both the table
and all the indexes from scratch. But there was some kind of issue with
doing that which was fixed in HEAD; I don't think the fix has been
back-ported, and I don't remember exactly what the issue was... :/
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend