Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Luke Lonergan
Mark, On 9/18/06 8:45 PM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote: > Does a tool exist yet to time this for a particular configuration? We're considering building this into ANALYZE on a per-table basis. The basic approach times sequential access in page rate, then random seeks as page rat…

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread mark
On Mon, Sep 18, 2006 at 06:10:13PM -0700, Luke Lonergan wrote: > Also be sure to set the random_page_cost parameter in > postgresql.conf to 100 or even higher when you use indexes, as the actual > seek rate for random access ranges between 50 and 300 for modern disk > hardware. If this parameter i…
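The setting Luke refers to lives in postgresql.conf. A hedged sketch (the value is workload- and hardware-dependent; benchmark before adopting anything this aggressive):

```
# postgresql.conf -- illustrative only; the 8.1 default is 4.0.
# Luke's suggestion: raise random_page_cost when random seeks are much
# slower than sequential reads on your disk hardware.
random_page_cost = 100
```

Note that a higher value makes the planner *less* likely to choose index scans, so it only makes sense when sequential plans really are that much cheaper on your hardware.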

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Luke Lonergan
Yep, Solaris ZFS kicks butt. It does RAID10/5/6, etc and implements most of the high end features available on high end SANs... - Luke On 9/18/06 8:40 PM, "Alex Turner" <[EMAIL PROTECTED]> wrote: > Sweet - thats good - RAID 10 support seems like an odd thing to leave out. > > Alex > > On 9/1…

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Alex Turner
Sweet - that's good - RAID 10 support seems like an odd thing to leave out. Alex. On 9/18/06, Luke Lonergan <[EMAIL PROTECTED]> wrote: Alex, On 9/18/06 4:14 PM, "Alex Turner" <[EMAIL PROTECTED]> wrote: > Be warned, the tech specs page: > http://www.sun.com/servers/x64/x4500/specs.xml#anchor3 > doesn't me…

Re: [PERFORM] LIKE query problem

2006-09-18 Thread Marc McIntyre
Thanks Tom, Is that documented somewhere? I can't seem to see any mention of it in the docs. Tom Lane wrote: Marc McIntyre <[EMAIL PROTECTED]> writes: ... Is there a way to perform this efficiently in one query ? No, because you're hoping for an indexscan optimization of a LIKE que…

Re: [PERFORM] LIKE query problem

2006-09-18 Thread Tom Lane
Marc McIntyre <[EMAIL PROTECTED]> writes: > ... Is there a way to perform this efficiently in one query ? No, because you're hoping for an indexscan optimization of a LIKE query, and that can only happen if the pattern is a plan-time constant. regards, tom lane --
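Tom's constraint can be worked around by making the pattern a plan-time constant: fetch the parent's path first, then issue the LIKE with a literal prefix. A sketch using Marc's nodes table (the index and the '1.14.' prefix are hypothetical):

```sql
-- A btree index with text_pattern_ops lets left-anchored LIKE patterns
-- use an index scan even when the locale is not C.
CREATE INDEX nodes_path_idx ON nodes (path text_pattern_ops);

-- Step 1: look up the parent's materialized path.
SELECT path FROM nodes WHERE id = 14;

-- Step 2: reuse the result as a literal, constant prefix
-- ('1.14.' is a placeholder for whatever step 1 returned).
SELECT id FROM nodes WHERE path LIKE '1.14.%';
```

The cost is a round trip, but each query on its own is plannable with an index scan, which the single-join form is not.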

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Luke Lonergan
Alex, On 9/18/06 4:14 PM, "Alex Turner" <[EMAIL PROTECTED]> wrote: > Be warned, the tech specs page: > http://www.sun.com/servers/x64/x4500/specs.xml#anchor3 > doesn't mention RAID 10 as a possible, and this is probably what most would > recommend for fast data access if you are doing both read a…

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Luke Lonergan
Bucky, On 9/18/06 7:37 AM, "Bucky Jordan" <[EMAIL PROTECTED]> wrote: > My question is at what point do I have to get fancy with those big > tables? From your presentation, it looks like PG can handle 1.2 billion > records or so as long as you write intelligent queries. (And normal PG > should be…

Re: [PERFORM] Vacuums on large busy databases

2006-09-18 Thread Francisco Reyes
Jim C. Nasby writes: BTW, on some good raid controllers (with battery backup and write-caching), putting pg_xlog on a separate partition doesn't really help, so you might want to try combining everything. Planning to put a busy database on the second raid, or perhaps some index files. So far the se…

[PERFORM] LIKE query problem

2006-09-18 Thread Marc McIntyre
I'm having a problem with a simple query, that finds children of a node, using a materialized path to the node. The query: select n1.id from nodes n1, nodes n2 where n1.path like n2.path || '%' and n2.id = 14; QUERY PLAN…

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Michael Stone
On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). If you're in that range it doesn't even count as…

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Alex Turner
Do the basic math: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). If you have to do a full table scan, then it will take roughly 400 seconds with a single 10k RPM…
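Alex's estimate is easy to check; the 50 MB/s sequential read rate for a single 10k RPM disk is the assumption implied by his 400-second figure:

```python
# Back-of-the-envelope table-scan math from the thread.
records = 100_000_000        # 100 million rows
row_bytes = 200              # 200 bytes per row
table_bytes = records * row_bytes

table_gb = table_bytes / 1e9                       # decimal gigabytes
seq_mb_per_s = 50                                  # assumed single-disk rate
scan_seconds = table_bytes / (seq_mb_per_s * 1e6)  # full sequential scan

print(table_gb, scan_seconds)
```

This prints 20.0 and 400.0, matching the "roughly 20 gig" and "roughly 400 seconds" in the post.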

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Bucky Jordan
> good normalization skills are really important for large databases, > along with materialization strategies for 'denormalized sets'. Good points- thanks. I'm especially curious what others have done for the materialization. The matview project on gborg appears dead, and I've only found a smatter…

Re: [PERFORM] Vacuums on large busy databases

2006-09-18 Thread Jim C. Nasby
On Thu, Sep 14, 2006 at 11:23:01AM -0400, Francisco Reyes wrote: > My setup: > Freebsd 6.1 > Postgresql 8.1.4 > Memory: 8GB > SATA Disks > > Raid 1 10 spindles (2 as hot spares) > 500GB disks (16MB buffer), 7200 rpm > Raid 10 > > Raid 2 4 spindles > 150GB 10K rpm disks > Raid 10 > > shared_buff…
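The message is truncated above, so Jim's actual recommendations are lost; purely as context, a hypothetical 8.1-era starting point for an 8 GB machine like Francisco's (values are illustrative assumptions, not from the thread):

```
# postgresql.conf -- illustrative starting points, not Jim's advice.
shared_buffers = 50000           # 8 kB pages, ~400 MB
maintenance_work_mem = 262144    # in kB, = 256 MB; speeds up VACUUM
effective_cache_size = 786432    # 8 kB pages, ~6 GB of OS file cache
```

On 8.1, shared_buffers and effective_cache_size are counted in 8 kB pages, which is why the numbers look odd next to the raw sizes.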

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Alan Hodgson
On Monday 18 September 2006 13:56, "Merlin Moncure" <[EMAIL PROTECTED]> wrote: > just another fyi, if you have a really big database, you can forget > about doing pg_dump for backups (unless you really don't care about > being x day or days behind)...you simply have to do some type of > replicati…
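The replication-style alternative Merlin hints at is PITR (point-in-time recovery, available since 8.0): ship WAL segments continuously instead of taking full pg_dump snapshots. A minimal sketch (the backup path is a placeholder):

```
# postgresql.conf -- continuous WAL archiving for PITR.
# %p is the path of the finished WAL segment, %f its file name.
archive_command = 'cp %p /mnt/backup/wal/%f'
```

A base backup (a tar of the data directory taken between pg_start_backup() and pg_stop_backup()) plus the archived WAL then restores the cluster to any point in time, with no dump-sized window of data loss.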

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Merlin Moncure
On 9/18/06, Bucky Jordan <[EMAIL PROTECTED]> wrote: My question is at what point do I have to get fancy with those big tables? From your presentation, it looks like PG can handle 1.2 billion records or so as long as you write intelligent queries. (And normal PG should be able to handle that, corr…

Re: [PERFORM] High CPU Load

2006-09-18 Thread Guillaume Smet
On 9/18/06, Jérôme BENOIS <[EMAIL PROTECTED]> wrote: Tomorrow morning I plan to add 2 GB of RAM in order to test the difference with my current config. I don't think more RAM will change anything if you don't swap at all. You can try to set shared_buffers lower (try 32768 and 16384) but I don't…

Re: [PERFORM] High CPU Load

2006-09-18 Thread Jérôme BENOIS
Hi Markus, On Friday 15 September 2006 at 11:43 +0200, Markus Schaber wrote: > Hi, Jérôme, > > Jérôme BENOIS wrote: > > > max_connections = 512 > > Do you really have that many concurrent connections? Then you should > think about getting a larger machine, probably. > > You will definitely…

[PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Bucky Jordan
>Yes. What's pretty large? We've had to redefine large recently, now we're >talking about systems with between 100TB and 1,000TB. > >- Luke Well, I said large, not gargantuan :) - Largest would probably be around a few TB, but the problem I'm having to deal with at the moment is large numbers (p…

Re: [PERFORM] High CPU Load

2006-09-18 Thread Jérôme BENOIS
Hi Guillaume, Now I have disabled Hyper-Threading in the BIOS, and the "context switch storms" disappeared (when I look with the command sar -t). I decreased the work_mem parameter to 32768. My CPU load is better, but it is still too high. For example: top - 16:27:05 up 9:13, 3 users, load average…

Re: [PERFORM] Poor performance on seq scan

2006-09-18 Thread Guido Neitzer
Because there is no MVCC information in the index. cug. On 2006/9/12, Piotr Kołaczkowski <[EMAIL PROTECTED]> wrote: On Tuesday 12 September 2006 12:47, Heikki Linnakangas wrote: > Laszlo Nagy wrote: > > I made another test. I created a file with the identifiers and names of > > the products: > > > > psql#…

Re: [PERFORM] Poor performance on seq scan

2006-09-18 Thread Markus Schaber
Hi, Piotr, Piotr Kołaczkowski wrote: > Why match rows from the heap if ALL required data are in the index itself? > Why look at the heap at all? Because the index does not contain any transaction information, so it has to look at the heap to find out which of the rows are current. This is one…

Re: [PERFORM] Optimize SQL

2006-09-18 Thread Mikael Carneholm
That query is generated by hibernate, right? -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Pallav Kalva Sent: den 15 september 2006 17:10 To: pgsql-performance@postgresql.org Subject: [PERFORM] Optimize SQL Hi, Is there anyway we can optimize this