Hi Robert,

On Thu, May 3, 2012 at 1:28 AM, Robert Klemme <shortcut...@googlemail.com> wrote:

> Hi Jan,
>
> On Thu, May 3, 2012 at 4:10 AM, Jan Nielsen <jan.sture.niel...@gmail.com>
> wrote:
> > Below are the hardware, firmware, OS, and PG configuration pieces
> > that I'm settling in on. As was noted, the local storage used for the
> > OS is actually two disks with RAID 10. If anything appears like a
> > mistake or something is missing, I'd appreciate the feedback.
>
> You should quickly patent this solution.  As far as I know you need at
> least four disks for RAID 10. :-)
> http://en.wikipedia.org/wiki/RAID#Nested_.28hybrid.29_RAID
>
> Or did you mean RAID 1?
>

Ugh - yeah - sorry. RAID-1 for the 2-disk OS and WAL volumes.


> > I'm still working on the benchmark scripts and I don't have
> > good/reliable numbers yet since our SAN is still very busy
> > reconfiguring from the 2x4 to 1x8. I'm hoping to get them running
> > tomorrow when the SAN should complete its 60 hours of reconfiguration.
>
> Yeah, does not seem to make a lot of sense to test during this phase.
>
> > Thanks, again, for all the great feedback.
>
> You're welcome!
>
> > 300GB RAID10 2x15k drive for OS on local storage
>

Correction: RAID-1 on the 2x15k local storage device for OS


> > */dev/sda1 RA*                                            4096
> > */dev/sda1 FS*                                            ext4
> > */dev/sda1 MO*
>
> See above.
>
> > 600GB RAID 10 8x15k drive for $PGDATA on SAN
>

Clarification: RAID-10 on the 8x15k SAN device for $PGDATA


> > *IO Scheduler sda*            noop anticipatory deadline [cfq]
> > */dev/sdb1 RA*                                            4096
> > */dev/sdb1 FS*                                             xfs
> > */dev/sdb1 MO*
> > allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime
> >
> > 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN
>

Correction: RAID-1 on the 2x15k SAN device for $PGDATA/pg_xlog
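
For the record, the RA values quoted above were applied with blockdev;
a minimal sketch, assuming the $PGDATA volume really does show up as
sdb1 as in the listing:

    # readahead is specified in 512-byte sectors
    blockdev --setra 4096 /dev/sdb1
    blockdev --getra /dev/sdb1    # verify
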


> > *IO Scheduler sdb*            noop anticipatory deadline [cfq]
> > */dev/sde1 RA*                                            4096
> > */dev/sde1 FS*                                             xfs
> > */dev/sde1 MO*
> > allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime
> > *IO Scheduler sde*            noop anticipatory deadline [cfq]
>
> See above.
>
> With regard to the scheduler, I have frequently read that [deadline]
> and [noop] perform better for PG loads.  Fortunately this can be
> easily changed.
>
> Maybe this also has some additional input:
> http://www.fccps.cz/download/adv/frr/hdd/hdd.html
>

Thanks for the reference, Robert.
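
In case it's useful to others following along, switching the scheduler
on a live system is a one-liner; a sketch, assuming the $PGDATA volume
is sdb as above:

    # list available schedulers; the current one is shown in brackets
    cat /sys/block/sdb/queue/scheduler
    # switch to deadline (not persistent across reboots; use the
    # elevator= boot parameter or an init script for that)
    echo deadline > /sys/block/sdb/queue/scheduler

I'll include deadline and noop runs in the benchmarks.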


> On Thu, May 3, 2012 at 8:54 AM, John Lister <john.lis...@kickstone.co.uk>
> wrote:
> > I was wondering if it would be better to put the xlog on the same
> > disk as the OS? Apart from the occasional log writes I'd have thought
> > most OS data is loaded into cache at the beginning, so you effectively
> > have an unused disk. This gives you another spindle (mirrored) for
> > your data.
> >
> > Or have I missed something fundamental?
>
> Separating avoids interference between OS and WAL logging (e.g. a
> script running berserk and filling the OS filesystem).  Also it's
> easier to manage (e.g. in case of relocation to another volume etc.).
> And you can have different mount options (e.g. you might want atime
> for the OS volume).
>
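
Good point about the mount options. For concreteness, the split lets
each volume carry its own fstab entry; a sketch with made-up mount
points (atime kept on the OS volume, noatime on the PG volumes):

    /dev/sda1  /                ext4  defaults                                              0  1
    /dev/sdb1  /pgdata          xfs   allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime  0  2
    /dev/sde1  /pgdata/pg_xlog  xfs   logbufs=8,logbsize=256k,noatime                       0  2
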
> Kind regards
>
> robert
>
>
> --
> remember.guy do |as, often| as.you_can - without end
> http://blog.rubybestpractices.com/
>
