[PERFORM] PostgreSQL 9.2.3 performance problem caused Exclusive locks

2013-03-13 Thread Jeff Janes
On Friday, March 8, 2013, Emre Hasegeli wrote: PostgreSQL writes several of the following logs during the problem, which I never > saw before 9.2.3: > > LOG: process 4793 acquired ExclusiveLock on extension of relation 305605 > of database 16396 after 2348.675 ms > The key here is not that it is an Excl…

Re: [PERFORM] New server setup

2013-03-13 Thread David Boreham
On 3/13/2013 9:29 PM, Mark Kirkwood wrote: Just going through this now with a vendor. They initially assured us that the drives had "end to end protection" so we did not need to worry. I had to post stripdown pictures from Intel's S3700, showing obvious capacitors attached to the board, before I…

Re: [PERFORM] New server setup

2013-03-13 Thread Mark Kirkwood
On 14/03/13 09:16, David Boreham wrote: On 3/13/2013 1:23 PM, Steve Crawford wrote: What concerns me more than wear is this: InfoWorld Article: http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715 Referenced research paper: https://ww…

Re: [PERFORM] Risk of data corruption/loss?

2013-03-13 Thread Joshua Berkus
Niels, > - Master server with battery-backed RAID controller with 4 SAS disks in > a RAID 0 - so NO mirroring here, due to max performance > requirements. > - Slave server setup with streaming replication on 4 HDD's in RAID > 10. The setup will be done with synchronous_commit=off and > synchronous_s…

Re: [PERFORM] PostgreSQL 9.2.3 performance problem caused Exclusive locks

2013-03-13 Thread Joshua Berkus
Emre, > > LOG: process 4793 acquired ExclusiveLock on extension of relation > > 305605 of database 16396 after 2348.675 ms The reason you're seeing that message is that you have log_lock_waits turned on. That message says that some process waited for 2.3 seconds to get a lock for expanding the…
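The message Joshua describes comes from the interplay of two settings: log_lock_waits only emits a log line once a lock wait has exceeded deadlock_timeout. A minimal postgresql.conf sketch of that combination (the values are illustrative, not taken from the thread):

```
# postgresql.conf -- illustrative sketch, not the poster's actual config
log_lock_waits = on       # log a message when a lock wait exceeds deadlock_timeout
deadlock_timeout = 1s     # default; doubles as the threshold for the message above
```

Turning log_lock_waits off would silence the message, but would not remove the underlying contention on the relation-extension lock.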

Re: [PERFORM] New server setup

2013-03-13 Thread David Boreham
On 3/13/2013 1:23 PM, Steve Crawford wrote: What concerns me more than wear is this: InfoWorld Article: http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715 Referenced research paper: https://www.usenix.org/conference/fast13/understa…

Re: [PERFORM] New server setup

2013-03-13 Thread John Lister
On 13/03/2013 19:23, Steve Crawford wrote: On 03/13/2013 09:15 AM, John Lister wrote: On 13/03/2013 15:50, Greg Jaskiewicz wrote: SSDs have much shorter life than spinning drives, so what do you do when one inevitably fails in your system? Define much shorter? I accept they have a limited no o…

Re: [PERFORM] New server setup

2013-03-13 Thread CSS
On Mar 13, 2013, at 3:23 PM, Steve Crawford wrote: > On 03/13/2013 09:15 AM, John Lister wrote: >> On 13/03/2013 15:50, Greg Jaskiewicz wrote: >>> SSDs have much shorter life than spinning drives, so what do you do when >>> one inevitably fails in your system? >> Define much shorter? I accept th…

Re: [PERFORM] New server setup

2013-03-13 Thread Karl Denninger
On 3/13/2013 2:23 PM, Steve Crawford wrote: > On 03/13/2013 09:15 AM, John Lister wrote: >> On 13/03/2013 15:50, Greg Jaskiewicz wrote: >>> SSDs have much shorter life than spinning drives, so what do you do >>> when one inevitably fails in your system? >> Define much shorter? I accept they have…

Re: [PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Niels Kristian Schjødt
On 13/03/2013 at 20:01, Joshua D. Drake wrote: > > On 03/13/2013 11:45 AM, Vasilis Ventirozos wrote: >> It's better to split WAL segments and data just because these two have >> different I/O requirements and because it's easier to measure and tune >> things if you have them on different disks. >…

Re: [PERFORM] New server setup

2013-03-13 Thread Steve Crawford
On 03/13/2013 09:15 AM, John Lister wrote: On 13/03/2013 15:50, Greg Jaskiewicz wrote: SSDs have much shorter life than spinning drives, so what do you do when one inevitably fails in your system? Define much shorter? I accept they have a limited number of writes, but that depends on load. You can…

Re: [PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Joshua D. Drake
On 03/13/2013 11:45 AM, Vasilis Ventirozos wrote: It's better to split WAL segments and data just because these two have different I/O requirements and because it's easier to measure and tune things if you have them on different disks. Generally speaking you are correct, but we are talking about R…

Re: [PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Vasilis Ventirozos
It's better to split WAL segments and data just because these two have different I/O requirements and because it's easier to measure and tune things if you have them on different disks. Vasilis Ventirozos On Wed, Mar 13, 2013 at 8:37 PM, Niels Kristian Schjødt < nielskrist...@autouncle.com> wrote:…
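The split Vasilis recommends was typically done on a 9.2-era cluster by relocating the WAL directory (named pg_xlog at the time) onto the dedicated disk and leaving a symlink behind, with the server stopped. A minimal sketch, using throwaway temp directories as stand-ins for the real data directory and WAL mount:

```shell
# Sketch of the WAL/data split being discussed; paths are temp-dir
# stand-ins, not real mounts, and the server must be stopped first.
PGDATA=$(mktemp -d)      # stand-in for the real data directory
WALDISK=$(mktemp -d)     # stand-in for the dedicated WAL disk's mount point
mkdir -p "$PGDATA/pg_xlog"

# Move WAL onto the other disk and leave a symlink in its place.
mv "$PGDATA/pg_xlog" "$WALDISK/pg_xlog"
ln -s "$WALDISK/pg_xlog" "$PGDATA/pg_xlog"
```

For a fresh cluster, initdb's -X (--xlogdir) option achieves the same layout without the manual move; the directory was later renamed pg_wal in newer releases.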

Re: [PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Niels Kristian Schjødt
On 13/03/2013 at 19:15, Vasilis Ventirozos wrote: > RAID 0 tends toward linear scaling, so 3 of them should give something close to > a 300% increase in write speed. So I would say 1, but make sure you test your > configuration as soon as you can with bonnie++ or something similar > > On Wed, Mar 13, 2…

Re: [PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Vasilis Ventirozos
RAID 0 tends toward linear scaling, so 3 of them should give something close to a 300% increase in write speed. So I would say 1, but make sure you test your configuration as soon as you can with bonnie++ or something similar. On Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt < nielskrist...@autouncle.…

[PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Niels Kristian Schjødt
I have a server with 32GB RAM, one Intel E3-1245 and four 15k SAS disks with a BB LSI MegaRAID controller. I want the optimal performance for my server, which will be pretty write-heavy at times, and less optimized for redundancy, as my data is not very crucial and I will be running a streaming…

Re: [PERFORM] Risk of data corruption/loss?

2013-03-13 Thread Niels Kristian Schjødt
On 13/03/2013 at 18:13, Jeff Janes wrote: > On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt > wrote: > I'm considering the following setup: > > - Master server with battery-backed RAID controller with 4 SAS disks in a RAID > 0 - so NO mirroring here, due to max performance requirement…

Re: [PERFORM] Risk of data corruption/loss?

2013-03-13 Thread Jeff Janes
On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt < nielskrist...@autouncle.com> wrote: > I'm considering the following setup: > > - Master server with battery-backed RAID controller with 4 SAS disks in a > RAID 0 - so NO mirroring here, due to max performance requirements. > - Slave server se…

Re: [PERFORM] New server setup

2013-03-13 Thread John Lister
On 13/03/2013 15:50, Greg Jaskiewicz wrote: SSDs have much shorter life than spinning drives, so what do you do when one inevitably fails in your system? Define much shorter? I accept they have a limited number of writes, but that depends on load. You can actively monitor the drives' "health" level…

Re: [PERFORM] New server setup

2013-03-13 Thread Greg Jaskiewicz
On 13 Mar 2013, at 15:33, John Lister wrote: > On 12/03/2013 21:41, Gregg Jaskiewicz wrote: >> >> Whilst on the hardware subject, someone mentioned throwing SSD into the mix. >> I.e. combining spinning HDs with SSD; apparently some RAID cards can use >> small-ish (80GB+) SSDs as external cach…

Re: [PERFORM] New server setup

2013-03-13 Thread John Lister
On 12/03/2013 21:41, Gregg Jaskiewicz wrote: Whilst on the hardware subject, someone mentioned throwing SSD into the mix. I.e. combining spinning HDs with SSD; apparently some RAID cards can use small-ish (80GB+) SSDs as external caches. Any experiences with that? The new LSI/Dell cards do…

[PERFORM] Risk of data corruption/loss?

2013-03-13 Thread Niels Kristian Schjødt
I'm considering the following setup: - Master server with battery-backed RAID controller with 4 SAS disks in a RAID 0 - so NO mirroring here, due to max performance requirements. - Slave server setup with streaming replication on 4 HDD's in RAID 10. The setup will be done with synchronous_commit=o…
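The proposed master configuration can be sketched as a 9.2-era postgresql.conf fragment; the values below are illustrative assumptions, not quoted from the thread. Note that synchronous_commit = off risks losing the last few hundred milliseconds of commits on a crash, but it does not by itself corrupt the database.

```
# postgresql.conf on the master -- illustrative 9.2-era sketch, not from the thread
wal_level = hot_standby        # needed for a streaming-replication standby on 9.2
max_wal_senders = 2            # one per standby, plus headroom
synchronous_commit = off       # faster commits; a crash can lose recent transactions
```

The real durability exposure in this setup is the RAID 0 itself: a single drive failure on the master loses the whole volume, leaving the streaming standby as the only copy.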