Re: [pgsql-www] [PERFORM] Help speeding up delete

2005-11-17 Thread Magnus Hagander
Perhaps we should put a link on the home page underneath LATEST RELEASEs saying "7.2: de-supported", with a link to a scary note along the lines of the above. ISTM that there are still too many people on older releases. We probably need an explanation of why we support so

Re: [PERFORM] Help speeding up delete

2005-11-17 Thread Steinar H. Gunderson
On Thu, Nov 17, 2005 at 09:40:42AM +0800, Christopher Kings-Lynne wrote: In my experience not many pgsql admins have test servers or the skills to build up test machines with the latest pg_dump, etc. (Seriously.) In fact, few realise at all that they should use the 8.1 dumper. Isn't your

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread William Yu
Welty, Richard wrote: David Boreham wrote: I guess I've never bought into the vendor story that there are two reliability grades. Why would they bother making two different kinds of bearings, motors, etc.? Seems like it's more likely an excuse to justify higher prices. Then how to account for

Re: [PERFORM] Hardware/OS recommendations for large databases (5TB)

2005-11-17 Thread Vivek Khera
On Nov 16, 2005, at 4:50 PM, Claus Guttesen wrote: I'm (also) FreeBSD-biased but I'm not sure whether the 5 TB fs will work so well if tools like fsck are needed. Gvinum could be one option but I don't have any experience in that area. Then look into an external filer and mount via NFS.

Re: [PERFORM] Help speeding up delete

2005-11-17 Thread Christopher Kings-Lynne
Isn't your distribution supposed to do this for you? Mine does these days... A distribution that tries to automatically do a major postgresql update is doomed to fail - spectacularly... Chris

Re: [pgsql-www] [PERFORM] Help speeding up delete

2005-11-17 Thread Christopher Kings-Lynne
That way if someone wanted to upgrade from 7.2 to 8.1, they can just grab the latest dumper from the website, dump their old database, then upgrade easily. But if they're upgrading to 8.1, don't they already have the new pg_dump? How else are they going to dump their *new* database? Erm.

Re: [pgsql-www] [PERFORM] Help speeding up delete

2005-11-17 Thread Steve Wampler
Christopher Kings-Lynne wrote: That most people don't know they should use the new one I understand though. But I don't see how this will help against that :-) It'll make it easy... As the miscreant that caused this thread to get started, let me *wholeheartedly* agree with Chris. An easy

Re: [pgsql-www] [PERFORM] Help speeding up delete

2005-11-17 Thread Magnus Hagander
That way if someone wanted to upgrade from 7.2 to 8.1, they can just grab the latest dumper from the website, dump their old database, then upgrade easily. But if they're upgrading to 8.1, don't they already have the new pg_dump? How else are they going to dump their *new*

Re: [PERFORM] Help speeding up delete

2005-11-17 Thread Scott Marlowe
On Wed, 2005-11-16 at 19:40, Christopher Kings-Lynne wrote: Perhaps we should put a link on the home page underneath LATEST RELEASEs saying "7.2: de-supported", with a link to a scary note along the lines of the above. ISTM that there are still too many people on older releases.

Re: [PERFORM] Help speeding up delete

2005-11-17 Thread Steinar H. Gunderson
On Thu, Nov 17, 2005 at 11:07:42PM +0800, Christopher Kings-Lynne wrote: Isn't your distribution supposed to do this for you? Mine does these days... A distribution that tries to automatically do a major postgresql update is doomed to fail - spectacularly... Automatically? Well, you can

Re: [PERFORM] Strange query plan invloving a view

2005-11-17 Thread Tom Lane
Rich Doughty [EMAIL PROTECTED] writes: However, the following query (which i believe should be equivalent) SELECT * FROM tokens.ta_tokenhist h INNER JOIN tokens.ta_tokens t ON h.token_id = t.token_id LEFT JOIN tokens.ta_tokenhist i ON t.token_id = i.token_id AND

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Alex Turner
On 11/16/05, William Yu [EMAIL PROTECTED] wrote: Alex Turner wrote: Spend a fortune on dual core CPUs and then buy crappy disks... I bet for most applications this system will be IO bound, and you will see a nice lot of drive failures in the first year of operation with consumer grade

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Alex Turner
Just pick up a SCSI drive and a consumer ATA drive. Feel their weight. You don't have to look inside to tell the difference. Alex On 11/16/05, David Boreham [EMAIL PROTECTED] wrote: I suggest you read this on the difference between enterprise/SCSI and desktop/IDE drives:

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Alex Turner
On 11/16/05, Joshua D. Drake [EMAIL PROTECTED] wrote: The only questions would be: (1) Do you need a SMP server at all? I'd claim yes -- you always need 2+ cores whether it's DC or 2P to avoid IO interrupts blocking other processes from running. I would back this up. Even for smaller

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Merlin Moncure
Remember - large DB is going to be IO bound. Memory will get thrashed for file block buffers, even if you have large amounts, it's all gonna be cycled in and out again. 'fraid I have to disagree here. I manage ERP systems for manufacturing companies of various sizes. My systems are all

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread William Yu
Alex Turner wrote: Opteron 242 - $178.00 Opteron 242 - $178.00 Tyan S2882 - $377.50 Total: $733.50 Opteron 265 - $719.00 Tyan K8E - $169.00 Total: $888.00 You're comparing the wrong CPUs. The 265 is the 2x of the 244 so you'll have to bump up the price more although not enough to make a
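The totals being compared in this entry can be checked directly; the figures below are the prices as quoted in the message, and the point of the reply is that the 265 (a dual-core part) is being weighed against the wrong single-core model:

```python
# Hardware totals as quoted in the thread (Nov 2005 list prices).
dual_242 = 178.00 + 178.00 + 377.50   # two Opteron 242s + Tyan S2882 board
dual_core_265 = 719.00 + 169.00       # one dual-core Opteron 265 + Tyan K8E board

print(dual_242)                       # 733.5
print(dual_core_265)                  # 888.0
print(dual_core_265 - dual_242)       # premium for the dual-core configuration
```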

[PERFORM] weird performances problem

2005-11-17 Thread Guillaume Smet
Hi all, We have been operating a 1.5GB postgresql database for a year, and for nearly a month we have had problems. Usually everything is OK with the database: queries are executed fast even if they are complicated, but sometimes, and for half an hour, we have a general slow down. The server is a

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Joshua Marsh
On 11/17/05, William Yu [EMAIL PROTECTED] wrote: No argument there. But it's pointless if you are IO bound. Why would you just accept "we're IO bound, nothing we can do"? I'd do everything in my power to make my app go from IO bound to CPU bound -- whether by optimizing my code or buying more

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread David Boreham
Alan Stange wrote: Not sure I get your point. We would want the lighter one, all things being equal, right ? (lower shipping costs, less likely to break when dropped on the floor) Why would the lighter one be less likely to break when dropped on the floor? They'd have less kinetic
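The kinetic-energy point being raised works out as a one-line formula: a drive dropped from height h hits the floor with energy E = m·g·h, which scales linearly with its mass. The masses and drop height below are illustrative assumptions, not figures from the thread:

```python
# Impact energy of a dropped drive: E = m * g * h (joules),
# so it scales linearly with the drive's mass.
g = 9.81          # gravitational acceleration, m/s^2
h = 1.0           # assumed drop height in metres (illustrative)

def impact_energy(mass_kg, height_m=h):
    """Energy (J) the drive carries when it reaches the floor."""
    return mass_kg * g * height_m

heavier = impact_energy(0.75)   # assumed mass of a heavier enterprise drive
lighter = impact_energy(0.55)   # assumed mass of a lighter consumer drive
print(heavier, lighter)         # the heavier drive arrives with more energy
```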

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread William Yu
Joshua Marsh wrote: On 11/17/05, William Yu [EMAIL PROTECTED] wrote: No argument there. But it's pointless if you are IO bound. Why would you just accept we're IO bound, nothing we can do? I'd do everything in my power to make my app go from IO bound

Re: [PERFORM] weird performances problem

2005-11-17 Thread Andrew Sullivan
On Thu, Nov 17, 2005 at 06:47:09PM +0100, Guillaume Smet wrote: queries are executed fast even if they are complicated but sometimes and for half an hour, we have a general slow down. Is it exactly half an hour? What changes at the time that happens (i.e. what else happens on the machine?).

Re: [PERFORM] Perl DBD and an alarming problem

2005-11-17 Thread Michael Fuhr
On Thu, Nov 17, 2005 at 01:04:21PM -0800, Craig A. James wrote: When I set statement_timeout in the config file, it just didn't do anything - it never timed out (PG 8.0.3). I finally found in the documentation that I can do set statement_timeout = xxx from Perl DBI on a per-client basis,
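The client-side timeout technique this thread contrasts with the server-side statement_timeout can be sketched as follows. This is Python rather than the thread's Perl, the helper names are hypothetical, and the signal-based approach is Unix-only:

```python
# Client-side query timeout in the spirit of Perl's alarm().
# The thread's actual fix was server-side: issuing
# SET statement_timeout = <ms> on the session via the driver.
import signal
import time

class QueryTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise QueryTimeout("query exceeded client-side time limit")

def run_with_timeout(func, seconds):
    """Run func(), aborting with QueryTimeout after `seconds` (Unix only)."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return func()
    finally:
        signal.alarm(0)                          # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

def slow_query():
    time.sleep(5)                                # stands in for a long-running query
    return "rows"

try:
    run_with_timeout(slow_query, 1)
except QueryTimeout as e:
    print("aborted:", e)
```

Note the limitation the thread runs into: a client-side alarm only interrupts the client, not the query already running on the server, which is why the per-session `SET statement_timeout` the message describes is the more reliable approach.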

Re: [PERFORM] weird performances problem

2005-11-17 Thread Guillaume Smet
Andrew, Andrew Sullivan wrote: Is it exactly half an hour? What changes at the time that happens (i.e. what else happens on the machine?). Is this a time, for example, when logrotate is killing your I/O with file moves? No, it's not exactly half an hour. It's just that it slows down for

Re: [PERFORM] weird performances problem

2005-11-17 Thread Claus Guttesen
I forgot to give our non default postgresql.conf parameters: shared_buffers = 28800 sort_mem = 32768 vacuum_mem = 32768 max_fsm_pages = 35 max_fsm_relations = 2000 checkpoint_segments = 16 effective_cache_size = 27 random_page_cost = 2 Isn't sort_mem quite high? Remember that
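The reason sort_mem draws a question here is that on these releases it is a per-sort-operation limit (in KB), not a per-server one, so worst-case usage multiplies across concurrent backends. The connection counts below are assumed for illustration, not figures from the thread:

```python
# sort_mem is allocated per sort operation, so peak usage scales
# with how many sorts run concurrently, not with the server itself.
sort_mem_kb = 32768                      # value from the posted config

def worst_case_mb(connections, sorts_per_query=2):
    """Rough upper bound on sort memory if every backend sorts at once."""
    return connections * sorts_per_query * sort_mem_kb / 1024

# Assumed workloads (illustrative):
print(worst_case_mb(10))    # 640.0 MB
print(worst_case_mb(100))   # 6400.0 MB -- likely far past physical RAM
```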

Re: [PERFORM] Help speeding up delete

2005-11-17 Thread Ron Mayer
Christopher Kings-Lynne wrote: Quite seriously, if you're still using 7.2.4 for production purposes you could justifiably be accused of negligence. Perhaps we should put a link on the home page underneath LATEST RELEASEs saying "7.2: de-supported", with a link to a scary note along the

Re: [PERFORM] Help speeding up delete

2005-11-17 Thread Christopher Kings-Lynne
I do not really see why all the distributions could do something like this, instead of mucking around with special statically compiled pg_dumps and the like... Contrib modules and tablespaces. Plus, no version of pg_dump before 8.0 is able to actually perform such reliable dumps and reloads

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Greg Stark
Joshua Marsh [EMAIL PROTECTED] writes: We all want our systems to be CPU bound, but it's not always possible. Sure it is, let me introduce you to my router, a 486DX100... Ok, I guess that wasn't very helpful, I admit. -- greg

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Greg Stark
Joshua Marsh [EMAIL PROTECTED] writes: We all want our systems to be CPU bound, but it's not always possible. Remember, he is managing a 5 TB Database. That's quite a bit different than a 100 GB or even 500 GB database. Ok, a more productive point: it's not really the size of the database

Re: [PERFORM] Perl DBD and an alarming problem

2005-11-17 Thread Michael Fuhr
[Please copy the mailing list on replies.] On Thu, Nov 17, 2005 at 05:38:13PM -0800, Craig A. James wrote: You probably shouldn't set statement_timeout on a global basis anyway. The server is a one-trick pony, so setting a global timeout value is actually appropriate. Beware that