On Friday, March 8, 2013, Emre Hasegeli wrote:
> PostgreSQL writes several log messages like the following during the problem,
> which I never saw before 9.2.3:
>
> LOG: process 4793 acquired ExclusiveLock on extension of relation 305605
> of database 16396 after 2348.675 ms
>
The key here is not that it is an Excl
On 3/13/2013 9:29 PM, Mark Kirkwood wrote:
Just going through this now with a vendor. They initially assured us
that the drives had "end-to-end protection" so we did not need to
worry. I had to post teardown pictures of Intel's S3700, showing
obvious capacitors attached to the board, before I
On 14/03/13 09:16, David Boreham wrote:
On 3/13/2013 1:23 PM, Steve Crawford wrote:
What concerns me more than wear is this:
InfoWorld Article:
http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715
Referenced research paper:
https://ww
Niels,
> - Master server with battery-backed RAID controller with 4 SAS disks in
> a RAID 0 - so NO mirroring here, due to max performance
> requirements.
> - Slave server setup with streaming replication on 4 HDDs in RAID
> 10. The setup will be done with synchronous_commit=off and
> synchronous_s
Emre,
> > LOG: process 4793 acquired ExclusiveLock on extension of relation
> > 305605 of database 16396 after 2348.675 ms
The reason you're seeing that message is that you have log_lock_waits turned on.
That message says that some process waited for 2.3 seconds to get a lock for
expanding the
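That behavior is driven by two settings: log_lock_waits only reports waits that
last longer than deadlock_timeout. A minimal postgresql.conf sketch (values here
are illustrative, not taken from Emre's server):

    log_lock_waits = on      # log any lock wait that lasts longer than deadlock_timeout
    deadlock_timeout = 1s    # the default; the 2348.675 ms wait above exceeded it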
On 3/13/2013 1:23 PM, Steve Crawford wrote:
What concerns me more than wear is this:
InfoWorld Article:
http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715
Referenced research paper:
https://www.usenix.org/conference/fast13/understa
On 13/03/2013 19:23, Steve Crawford wrote:
On 03/13/2013 09:15 AM, John Lister wrote:
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do
when one inevitably fails in your system?
Define much shorter? I accept they have a limited no o
On Mar 13, 2013, at 3:23 PM, Steve Crawford wrote:
> On 03/13/2013 09:15 AM, John Lister wrote:
>> On 13/03/2013 15:50, Greg Jaskiewicz wrote:
>>> SSDs have a much shorter life than spinning drives, so what do you do when
>>> one inevitably fails in your system?
>> Define much shorter? I accept th
On 3/13/2013 2:23 PM, Steve Crawford wrote:
> On 03/13/2013 09:15 AM, John Lister wrote:
>> On 13/03/2013 15:50, Greg Jaskiewicz wrote:
>>> SSDs have a much shorter life than spinning drives, so what do you do
>>> when one inevitably fails in your system?
>> Define much shorter? I accept they have
On 13/03/2013 at 20.01, Joshua D. Drake wrote:
>
> On 03/13/2013 11:45 AM, Vasilis Ventirozos wrote:
>> It's better to split WAL segments and data just because these two have
>> different I/O requirements and because it's easier to measure and tune
>> things if you have them on different disks.
>
On 03/13/2013 09:15 AM, John Lister wrote:
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do
when one inevitably fails in your system?
Define much shorter? I accept they have a limited number of writes, but
that depends on load. You can
On 03/13/2013 11:45 AM, Vasilis Ventirozos wrote:
It's better to split WAL segments and data just because these two have
different I/O requirements and because it's easier to measure and tune
things if you have them on different disks.
Generally speaking, you are correct, but we are talking about R
It's better to split WAL segments and data just because these two have
different I/O requirements and because it's easier to measure and tune things
if you have them on different disks.
Vasilis Ventirozos
On Wed, Mar 13, 2013 at 8:37 PM, Niels Kristian Schjødt <
nielskrist...@autouncle.com> wrote:
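On 9.2 the usual way to split them is to move pg_xlog onto its own disks and
symlink it back into the data directory. A minimal sketch, assuming a stock
Debian-style layout (both paths are illustrative):

    # stop PostgreSQL first, then relocate the WAL and leave a symlink behind
    mv /var/lib/postgresql/9.2/main/pg_xlog /wal_disks/pg_xlog
    ln -s /wal_disks/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog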
On 13/03/2013 at 19.15, Vasilis Ventirozos wrote:
> RAID 0 tends to scale linearly, so 3 of them should give something close to
> 3x the single-drive write speed. So I would say 1, but make sure you test your
> configuration as soon as you can with bonnie++ or something similar
>
> On Wed, Mar 13, 2
RAID 0 tends to scale linearly, so 3 of them should give something close to
3x the single-drive write speed. So I would say 1, but make sure you test your
configuration as soon as you can with bonnie++ or something similar
On Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <
nielskrist...@autouncle.
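For reference, a bonnie++ run along those lines might look like the following;
the mount point is hypothetical, and -s should be roughly twice RAM so the OS
page cache cannot absorb the test:

    # 64g is ~2x the 32GB of RAM mentioned below; -n 0 skips the small-file tests
    bonnie++ -d /mnt/raid0 -s 64g -n 0 -u postgres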
I have a server with 32GB RAM, one Intel E3-1245 and four 15k SAS disks with a
BB LSI MegaRAID controller. I want optimal performance for my server,
which will be pretty write-heavy at times, and less optimized for redundancy,
as my data is not very crucial and I will be running a streaming
On 13/03/2013 at 18.13, Jeff Janes wrote:
> On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt
> wrote:
> I'm considering the following setup:
>
> - Master server with battery-backed RAID controller with 4 SAS disks in a RAID
> 0 - so NO mirroring here, due to max performance requirement
On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt <
nielskrist...@autouncle.com> wrote:
> I'm considering the following setup:
>
> - Master server with battery-backed RAID controller with 4 SAS disks in a
> RAID 0 - so NO mirroring here, due to max performance requirements.
> - Slave server se
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do when one
inevitably fails in your system?
Define much shorter? I accept they have a limited number of writes, but that
depends on load. You can actively monitor the drives' "health" level
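That monitoring is typically done through SMART, e.g. with smartmontools; the
device path and attribute names vary by vendor (Intel calls its wear counter
Media_Wearout_Indicator), so this is just a sketch:

    # dump all SMART data for the drive and pick out the wear-related attributes
    smartctl -a /dev/sda | grep -i -e wear -e reallocated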
On 13 Mar 2013, at 15:33, John Lister wrote:
> On 12/03/2013 21:41, Gregg Jaskiewicz wrote:
>>
>> Whilst on the hardware subject, someone mentioned throwing SSDs into the mix.
>> I.e. combining spinning HDs with SSDs; apparently some RAID cards can use
>> small-ish (80GB+) SSDs as external cach
On 12/03/2013 21:41, Gregg Jaskiewicz wrote:
Whilst on the hardware subject, someone mentioned throwing SSDs into
the mix, i.e. combining spinning HDs with SSDs; apparently some RAID
cards can use small-ish (80GB+) SSDs as external caches. Any
experiences with that?
The new LSI/Dell cards do
I'm considering the following setup:
- Master server with battery-backed RAID controller with 4 SAS disks in a RAID 0
- so NO mirroring here, due to max performance requirements.
- Slave server setup with streaming replication on 4 HDDs in RAID 10. The
setup will be done with synchronous_commit=o
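Spelled out, the trade-off being proposed looks roughly like this in
postgresql.conf on the master (the replication lines are assumptions based on
the described setup, not settings quoted in the thread):

    synchronous_commit = off    # a crash may lose the last few commits, but does not corrupt data
    wal_level = hot_standby     # lets the 9.2 streaming standby also serve reads
    max_wal_senders = 1         # one streaming standby in the setup described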