Re: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

2011-09-12 Thread Arjen van der Meijden
On 12-9-2011 0:44 Anthony Presley wrote: A few weeks back, we purchased two refurb'd HP DL360 G5's, and were hoping to set them up with PG 9.0.2, running replicated. These machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP SA P400i with 512MB of BBWC. PG is

Re: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

2011-09-12 Thread Gavin Flower
On 12/09/11 15:10, mark wrote: From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Anthony Presley Sent: Sunday, September 11, 2011 4:45 PM To: pgsql-performance@postgresql.org Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

Re: [PERFORM] Databases optimization

2011-09-12 Thread Hany ABOU-GHOURY
Hi Anthony, I will try that, thank you very much for your help. Cheers, Hany On Tue, Sep 13, 2011 at 2:49 PM, Anthony Presley wrote: > You may want to try pgreplay ... we've tried it for a similar scenario, and > so far, it's pretty promising. > > I do wish it was able to be loaded from a pgfoui

Re: [PERFORM] Databases optimization

2011-09-12 Thread Anthony Presley
You may want to try pgreplay ... we've tried it for a similar scenario, and so far, it's pretty promising. I do wish it was able to be loaded from a pgfouine formatted log file, or from another db ... but that's OK. -- Anthony Presley On Mon, Sep 12, 2011 at 6:28 PM, Hany ABOU-GHOURY wrote:

Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-09-12 Thread Aidan Van Dyk
On Mon, Sep 12, 2011 at 8:47 PM, wrote: >> XFS FAQ  goes over much of it, starting at Q24: >> >>  http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_with_the_write_cache_on_journaled_filesystems.3F >> >> So, for pure performance, on a battery-backed controller, nobarrier is >> the recommend
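
For the XFS case referenced in that FAQ entry, the equivalent would be an fstab line like the sketch below (device and mount point are placeholders; nobarrier trades crash safety for speed and presumes the battery-backed controller discussed above):

    # Illustration only: device and mount point are placeholders.
    /dev/sdc1  /var/lib/pgsql  xfs  noatime,nobarrier  0 0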

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Stephen Frost
* Robert Schnabel (schnab...@missouri.edu) wrote: > And getting back to the to-do list entry and reading the related > posts, it appears that even if you could set work_mem that high it > would only use 2GB anyway. I guess that was the second part of my > question. Is that true? Errr, and to get

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Stephen Frost
* Robert Schnabel (schnab...@missouri.edu) wrote: > And getting back to the to-do list entry and reading the related > posts, it appears that even if you could set work_mem that high it > would only use 2GB anyway. I guess that was the second part of my > question. Is that true? Yes and no. wor

Re: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

2011-09-12 Thread Anthony Presley
So, today, I did the following: - Swapped out the 5410's (2.3GHz) for 5470's (3.33GHz) - Set the ext4 mount options to be noatime,barrier=0,data=writeback - Installed PG 9.1 from the yum repo Item one: With the accelerator cache set to 0/100 (all 512MB for writing), loading the db / creat
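
Those ext4 options usually live in /etc/fstab; an illustrative line follows (device and mount point are placeholders, and barrier=0 assumes a healthy battery-backed write cache like the P400i's, since it is unsafe without one):

    # Illustration only: data device and mount point are placeholders.
    /dev/sdb1  /var/lib/pgsql  ext4  noatime,barrier=0,data=writeback  0 0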

Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-09-12 Thread david
On Mon, 12 Sep 2011, Aidan Van Dyk wrote: On Mon, Sep 12, 2011 at 6:57 PM, wrote: The "barrier" is the linux fs/block way of saying "these writes need to be on persistent media before I can depend on them".  On typical spinning media disks, that means out of the disk cache (which is not pers

Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-09-12 Thread Aidan Van Dyk
On Mon, Sep 12, 2011 at 6:57 PM, wrote: >> The "barrier" is the linux fs/block way of saying "these writes need >> to be on persistent media before I can depend on them".  On typical >> spinning media disks, that means out of the disk cache (which is not >> persistent) and on platters.  The way

Re: [PERFORM] Databases optimization

2011-09-12 Thread Hany ABOU-GHOURY
I have a production Postgres v8.2 database on Linux and a test Postgres v9.0 database on a test Linux server. I am going to do a migration but do not want to do that before making sure the performance of the new test Postgres 9.0 database is as good as the current production Postgres 8.2

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Tom Lane
Robert Schnabel writes: > On 9/12/2011 3:58 PM, Scott Marlowe wrote: >> On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel >> wrote: >>> The recent "data warehouse" thread made me think about how I use work_mem >>> for some of my big queries. So I tried SET work_mem = '4GB' for a session >>> and

Re: [PERFORM] Sudden drop in DBb performance

2011-09-12 Thread Marinos Yannikos
Hi, On 03.09.2011 09:26, Gerhard Wohlgenannt wrote: Activating log_min_duration shows for instance this query --- there are now constantly queries which take absurdely long. 2 things you should check: - if your /var/lib/postgresql is on an ext3 fs, I've seen such things before due to the cha

Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-09-12 Thread david
apologies for such a late response to this thread, but there is something I think is _really_ dangerous here. On Thu, 18 Aug 2011, Aidan Van Dyk wrote: On Thu, Aug 18, 2011 at 1:35 AM, Craig Ringer wrote: On 18/08/2011 11:48 AM, Ogden wrote: Isn't this very dangerous? I have the Dell PERC

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Robert Schnabel
On 9/12/2011 1:57 PM, Andy Colson wrote: On 9/12/2011 1:22 PM, Robert Schnabel wrote: On 9/12/2011 12:57 PM, Shaun Thomas wrote: On 09/12/2011 12:47 PM, Andy Colson wrote: work_mem is not the total a query can use. I believe each step can use that much, and each backend can use it for multip

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Robert Schnabel
On 9/12/2011 3:58 PM, Scott Marlowe wrote: On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel wrote: The recent "data warehouse" thread made me think about how I use work_mem for some of my big queries. So I tried SET work_mem = '4GB' for a session and got ERROR: 4194304 is outside the valid

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Robert Klemme
On 12.09.2011 19:22, Andy Colson wrote: On 9/12/2011 12:15 PM, Robert Klemme wrote: On 11.09.2011 19:02, Marti Raudsepp wrote: On Sun, Sep 11, 2011 at 17:23, Andy Colson wrote: On 09/11/2011 08:59 AM, Igor Chudov wrote: By the way, does that INSERT UPDATE functionality or something like this

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Scott Marlowe
On Mon, Sep 12, 2011 at 2:55 PM, Shaun Thomas wrote: > On 09/12/2011 03:44 PM, Scott Marlowe wrote: > >> The PostgreSQL team works REALLY hard to prevent any kind of >> corruption scenario from rearing its ugly head, so when the word >> corruption pops up I start to wonder about the system (hardwa

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Scott Marlowe
On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel wrote: > The recent "data warehouse" thread made me think about how I use work_mem > for some of my big queries.  So I tried SET work_mem = '4GB' for a session > and got > > ERROR: 4194304 is outside the valid range for parameter "work_mem" (64 ..

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Shaun Thomas
On 09/12/2011 03:44 PM, Scott Marlowe wrote: The PostgreSQL team works REALLY hard to prevent any kind of corruption scenario from rearing its ugly head, so when the word corruption pops up I start to wonder about the system (hardware wise) someone is using, You've apparently never used early

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Scott Marlowe
On Mon, Sep 12, 2011 at 2:04 PM, Shaun Thomas wrote: > On 09/12/2011 02:48 PM, Scott Marlowe wrote: > >> I put it to you that your hardware has problems if you have a pg db >> that's corrupting from having too much vacuum activity. > > What? No. We optimized by basically forcing autovacuum to neve

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Marti Raudsepp
On Mon, Sep 12, 2011 at 23:04, Shaun Thomas wrote: > I was alluding to the fact that if a DBA had his system running for a week > at our transaction level, and PG didn't have forced auto vacuum, and their > maintenance lapsed even slightly, they could end up with a corrupt database. It doesn't ac
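
The maintenance risk being debated here is transaction-ID wraparound; for reference, a sketch of the query commonly used to watch how far each database is from trouble (autovacuum's forced freeze kicks in long before the ~2-billion XID limit):

    -- Age of the oldest unfrozen XID per database; climbing toward 2^31 is the danger sign.
    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;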

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Scott Marlowe
On Mon, Sep 12, 2011 at 10:22 AM, Shaun Thomas wrote: > On 09/11/2011 12:02 PM, Marti Raudsepp wrote: > >> Which brings me to another important point: don't do lots of small >> write transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides >> being inefficient, they introduce a big maintenanc

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Shaun Thomas
On 09/12/2011 02:48 PM, Scott Marlowe wrote: I put it to you that your hardware has problems if you have a pg db that's corrupting from having too much vacuum activity. What? No. We optimized by basically forcing autovacuum to never run during our active periods. We never actually encountered

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread pasman pasmański
I think you may add a ramdisk as a tablespace for temporary tables. This should work similarly to a bigger work_mem. 2011/9/12, Robert Schnabel : > > On 9/12/2011 12:57 PM, Shaun Thomas wrote: >> On 09/12/2011 12:47 PM, Andy Colson wrote: >> >>> work_mem is not the total a query can use. I believe eac
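
As a hedged sketch of that suggestion (the /mnt/ramdisk mount point and tablespace name are assumptions; a tmpfs mount writable by the postgres user must already exist, and anything on it vanishes at reboot, so it is only suitable for temporary objects):

    -- Sketch only: point temporary tables and sort spill files at a RAM-backed tablespace.
    CREATE TABLESPACE ram_temp LOCATION '/mnt/ramdisk';
    SET temp_tablespaces = 'ram_temp';   -- per session; also settable in postgresql.conf
    CREATE TEMP TABLE scratch AS SELECT g AS n FROM generate_series(1, 1000) g;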

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Scott Marlowe
On Mon, Sep 12, 2011 at 11:04 AM, Robert Klemme wrote: > On 11.09.2011 22:10, Scott Marlowe wrote: > >> Another data point.  We had a big Oracle installation at my last job, >> and OLAP queries were killing it midday, so I built a simple >> replication system to grab rows from the big iron Oracle

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Andy Colson
On 9/12/2011 1:22 PM, Robert Schnabel wrote: On 9/12/2011 12:57 PM, Shaun Thomas wrote: On 09/12/2011 12:47 PM, Andy Colson wrote: work_mem is not the total a query can use. I believe each step can use that much, and each backend can use it for multiple bits. So if you had two backends, each

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Robert Schnabel
On 9/12/2011 12:57 PM, Shaun Thomas wrote: On 09/12/2011 12:47 PM, Andy Colson wrote: work_mem is not the total a query can use. I believe each step can use that much, and each backend can use it for multiple bits. So if you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB = 8GB.

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Andy Colson
On 9/12/2011 12:57 PM, Shaun Thomas wrote: On 09/12/2011 12:47 PM, Andy Colson wrote: work_mem is not the total a query can use. I believe each step can use that much, and each backend can use it for multiple bits. So if you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB = 8GB.

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Shaun Thomas
On 09/12/2011 12:47 PM, Andy Colson wrote: work_mem is not the total a query can use. I believe each step can use that much, and each backend can use it for multiple bits. So if you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB = 8GB. Exactly. Find a big query somewhere in your

Re: [PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Andy Colson
On 9/12/2011 12:33 PM, Robert Schnabel wrote: The recent "data warehouse" thread made me think about how I use work_mem for some of my big queries. So I tried SET work_mem = '4GB' for a session and got ERROR: 4194304 is outside the valid range for parameter "work_mem" (64 .. 2097151) A bit of s

[PERFORM] Allow sorts to use more available memory

2011-09-12 Thread Robert Schnabel
The recent "data warehouse" thread made me think about how I use work_mem for some of my big queries. So I tried SET work_mem = '4GB' for a session and got ERROR: 4194304 is outside the valid range for parameter "work_mem" (64 .. 2097151) A bit of searching turned up the "Allow sorts to use

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Andy Colson
On 9/12/2011 12:15 PM, Robert Klemme wrote: On 11.09.2011 19:02, Marti Raudsepp wrote: On Sun, Sep 11, 2011 at 17:23, Andy Colson wrote: On 09/11/2011 08:59 AM, Igor Chudov wrote: By the way, does that INSERT UPDATE functionality or something like this exist in Postgres? You have two options:

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Robert Klemme
On 11.09.2011 19:02, Marti Raudsepp wrote: On Sun, Sep 11, 2011 at 17:23, Andy Colson wrote: On 09/11/2011 08:59 AM, Igor Chudov wrote: By the way, does that INSERT UPDATE functionality or something like this exist in Postgres? You have two options: 1) write a function like: create function
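
The "write a function like" option is cut off above; its usual shape is essentially the merge_db example from the PostgreSQL manual, a retry loop that predates any built-in upsert (table and function names follow that example):

    CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

    CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
    $$
    BEGIN
        LOOP
            -- first try to update the key
            UPDATE db SET b = data WHERE a = key;
            IF found THEN
                RETURN;
            END IF;
            -- not there, so try to insert; a concurrent insert of the
            -- same key raises unique_violation, and we loop to retry
            BEGIN
                INSERT INTO db (a, b) VALUES (key, data);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- do nothing, loop back and try the UPDATE again
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;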

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Robert Klemme
On 11.09.2011 22:10, Scott Marlowe wrote: Another data point. We had a big Oracle installation at my last job, and OLAP queries were killing it midday, so I built a simple replication system to grab rows from the big iron Oracle SUN box and shove into a single core P IV 2.xGHz machine with 4 12

Re: [PERFORM] Re: How to track number of connections and hosts to Postgres cluster

2011-09-12 Thread Merlin Moncure
On Tue, Aug 30, 2011 at 12:55 AM, Venkat Balaji wrote: > Thanks to all for your very helpful replies ! > As Greg Smith rightly said, I faced a problem of missing connections between > the runs. I even ran the cron job at sub-second intervals, but still that > would become too many runs per second an
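
Rather than sampling from cron, a single catalog query captures the per-host picture at any instant; a minimal sketch (pg_stat_activity exposes client_addr on the 8.4/9.0 releases discussed here):

    -- Count current connections per client host; NULL client_addr means a local socket.
    SELECT client_addr, count(*) AS connections
      FROM pg_stat_activity
     GROUP BY client_addr
     ORDER BY connections DESC;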

Re: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

2011-09-12 Thread Anthony Presley
Mark, On Sun, Sep 11, 2011 at 10:10 PM, mark wrote: > > > >From: pgsql-performance-ow...@postgresql.org > [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Anthony > Presley > >Sent: Sunday, September 11, 2011 4:45 PM > >To: pgsql-performance@postgresql.org > >Subject: [PERFORM] RAID

Re: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

2011-09-12 Thread Anthony Presley
On Sun, Sep 11, 2011 at 6:17 PM, Tomas Vondra wrote: > On 12.9.2011 00:44, Anthony Presley wrote: > > We've currently got PG 8.4.4 running on a whitebox hardware set up, > > with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM > > SATA drives, using the onboard IDE controller an

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Shaun Thomas
On 09/11/2011 09:27 AM, Claudio Freire wrote: I have used slony to do database migration. It is a pain to set up, but it saves you hours of downtime. I've had to shoot this option down in two separate upgrade scenarios in two different companies. Theoretically it's possible, but slony is base

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Shaun Thomas
On 09/11/2011 12:02 PM, Marti Raudsepp wrote: Which brings me to another important point: don't do lots of small write transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides being inefficient, they introduce a big maintenance burden. I'd like to second this. Before a notable application

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-12 Thread Shaun Thomas
On 09/11/2011 09:44 AM, Claudio Freire wrote: And Andy is right, you'll have a lot less space. If raid 10 doesn't give you enough room, just leave two spare drives for a raid 0 temporary partition. That will be at least twice as fast as doing temporary tables on the raid 6. Alternatively, thro