I'm using a MASTER server plus a read-only SLAVE. The results I'm posting here are from the *master* server.



> We're gonna need better stats. iostat, iotop, vmstat etc will all break
> down your io between reads and writes, random vs sequential etc.
>

I'll try to get more data during a spike.
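In case it helps with the capturing: here's a rough sketch (device name and threshold are my assumptions, adjust to your box) that samples iostat continuously and keeps only the lines where %util crosses a threshold, so spike windows get logged even when nobody is watching:

```shell
#!/bin/sh
# Sketch only: log iostat lines for one device when %util is high.
# DEV and THRESH are assumptions -- change them for your setup.
DEV=dm-2
THRESH=90

# iostat -xd prints one line per device every 10s; the awk filter
# keeps lines for our device whose last field (%util) exceeds THRESH.
iostat -xd 10 "$DEV" \
  | awk -v t="$THRESH" -v dev="$DEV" \
      '$1 == dev && $NF+0 > t { print $0; fflush() }' \
  >> /tmp/io-spikes.log
```

Run it in the background (or under nohup/screen) and you'll have a log of just the spike periods to post later.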

*SPIKE:*

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
dm-2              0.00     0.00  129.00  585.10  5932.00  4680.80    14.86    26.82   37.58   1.40  99.80


>
> If you're at 100% IO Util, and iostat says your writing is taking up 20
> or 30% of the time, then no, adding cache probably won't help.
>

Well... I'm getting spikes, so I'm not at 100% I/O all the time, but it does
happen several times a day.



>
> Start looking into adding SSDs. They are literally 20 to 1000 times faster
> at a lot of io stuff than spinning drives. And they're relatively cheap for
> what they do.
>

I know.. but unfortunately the bosses don't want to spend money :(


>
> Note that a software RAID-5 array of SSDs can stomp a hardware controller
> running RAID-10 with spinning disks easily, and RAID-5 is pretty much as
> slow as RAID gets.
>
> Here's a few minutes of "iostat -xd 10 /dev/sdb" on one of my big servers
> at work. These machines have a RAID-5 of 10x750GB SSDs under LSI MegaRAIDs
> with caching turned off. (much faster that way). The array created thus is
> 6.5TB and it's 83% full. Note that archiving and pg_xlog are on separate
> volumes as well.
>
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdb               0.00   236.30 1769.10 5907.30 20366.80 69360.80    23.38    36.38    4.74    0.34    6.06   0.09  71.00
>

*NORMAL SERVER:* (as it usually works during the day)

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
dm-2              0.00     0.00   42.60  523.60  1644.80  4188.80    10.30     7.85   13.88   1.04  59.15

- Those numbers fluctuate throughout the day.
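To make my sector-based columns comparable with your rkB/s / wkB/s output: rsec/s and wsec/s are in sectors, and assuming the usual 512-byte sectors the spike throughput works out like this (quick awk sketch):

```shell
# Convert iostat rsec/s / wsec/s to MB/s, assuming 512-byte sectors.
sec_to_mb() {
  awk -v s="$1" 'BEGIN { printf "%.2f\n", s * 512 / 1000000 }'
}

sec_to_mb 5932.00   # spike reads:  ~3.04 MB/s
sec_to_mb 4680.80   # spike writes: ~2.40 MB/s
```

So even at ~99.8% util the array is only moving a few MB/s, which suggests the disks are saturated by many small random requests (avgrq-sz is under 15 sectors) rather than by raw bandwidth.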


>
> So we're seeing 1769 reads/s, 5907 writes/s and we're reading ~20MB/s and
> writing ~70MB/s. In the past this kind of performance from spinning disks
> required massive caching and cabinets full of hard drives. When first
> testing these boxes we got literally a fraction of this performance with 20
> spinning disks in RAID-10, and they had 512GB of RAM. Management at first
> wanted to throw more memory at it, these machines go to 1TB RAM, but we
> tested with 1TB RAM and the difference was literally a few % points going
> from 512GB to 1TB RAM.
>
> If your iostat output looks anything like mine, with lots of wkB/s and w/s
> then adding memory isn't going to do much.
>

Thanks a lot for your reply!
Lucas
