On Wed, Jul 1, 2009 at 2:27 PM, Ross Walker <[email protected]> wrote:
> Sorry Jim et al, I had a bunch of replies go directly to Jim and not
> the list. I have to remember to Reply All here.
>
> So to recap, the initiator (ESX Windows 2003 quad-CPU guest) showed
> consistent CPU usage of 20% throughout the benchmark.
>
> The target (2x quad Xeon, 4GB RAM) showed CPU usage from 5-20% during
> the benchmark. But during the larger I/O write tests the kernel usage
> would shoot to 99% every 5 or so seconds and I/O would stall. This
> happened pretty consistently, but more dramatically with the 16K
> random writes.
>

On a hunch I re-ran my tests with the reads separated from the writes,
thinking that the write operations were randomizing the test file's
on-disk layout due to ZFS COW, and I was right.
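
(If anyone wants to reproduce the split, SQLIO can be driven entirely
from command-line switches; a read-only pass followed by a separate
write pass would look roughly like this, with the drive letter and
file name as placeholders:

C:\Temp\sqlio1>sqlio -kR -frandom -b16 -o4 -t1 -s300 -BN -LS -dD sqlio.dat
C:\Temp\sqlio1>sqlio -kW -frandom -b16 -o4 -t1 -s300 -BN -LS -dD sqlio.dat

-kR/-kW pick read vs. write, -f the access pattern, -b the block size
in KB, -o the outstanding I/Os per thread, -t the thread count and -s
the duration in seconds.)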

Here are the numbers I got just doing read operations on the test file.

C:\Temp\sqlio1>sqlio d
                                      SQLIO Test
Type:
Drives: d
Test File: sqlio.dat
Test Size: 8000
Threads: 1
Seconds: 300
Outstanding IO: 4
Block Size(s): 4 8 16 32 64
Buffer Setting (N:none, Y:all, S:system, H:hba (Default N)): N
Latency Measurement (S:system, P:processor (Default S)): S
Processor Affinity Mask: 0x0
Iterations: 1
Patterns:
Press any key to continue . . .
                                                        Latency (ms)
Operation               IOs/sec         MBs/sec         Min/Avg/Max
4K Sequential Read      6792.54         26.53           0/0/1113
4K Random Read          683.06          2.66            0/5/27
8K Sequential Read      5771.96         45.09           0/0/536
8K Random Read          668.75          5.22            0/5/34
16K Sequential Read     4589.05         71.70           0/0/322
16K Random Read         652.35          10.19           0/5/39
32K Sequential Read     3000.23         93.75           0/0/391
32K Random Read         649.43          20.29           0/5/590
64K Sequential Read     1618.69         101.16          0/2/530
64K Random Read         555.60          34.72           0/6/579

So the mystery of the read performance is solved.

Given how much COW operations fragment files, I wonder how read
performance on file servers fares over a long period of time. I would
expect it to degrade steadily, which makes me wonder if ZFS is even
suitable for file servers with high activity.
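
(The scatter is visible directly on the target, by the way, assuming
the iSCSI LU is backed by a plain file in a dataset rather than a
zvol: grab the backing file's object number with ls -i and dump its
block pointers with zdb, roughly (pool/dataset names are placeholders):

# ls -i /tank/iscsi/lun0
# zdb -ddddd tank/iscsi <object-number>

On a freshly written file the DVA offsets come out largely contiguous;
after a round of random overwrites they end up scattered across the
vdevs.)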

When I get my Mtron SSD drives I'm going to look at the write
performance some more. I don't think the LSI MegaRAID controllers
handle write-back caching very well across 15 logical drives (or they
need GBs of cache instead of MBs to do it well); the cache helps, just
not enough under heavy write loads. Once the SSD drives are in I might
enable write-back only on those to see how it fares; it may even
provide some additional wear-leveling beyond what the drives' built-in
cache gives.
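
(For reference, the per-logical-drive cache policy can be flipped with
the MegaCli utility, something like the following from memory, with
the LD and adapter numbers as placeholders for wherever the SSDs land:

MegaCli -LDGetProp -Cache -LALL -aALL    <- show current cache policy per LD
MegaCli -LDSetProp WB -L15 -a0           <- write-back on the SSD logical drive only
MegaCli -LDSetProp WT -L0 -a0            <- keep the spinning-disk LDs write-through

Double-check the LD numbering with -LDInfo -LALL -aALL before changing
anything.)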

-Ross