[zfs-discuss] VMGuest IOMeter numbers

2010-07-25 Thread Mark
Hello, first time posting.  I've been working with ZFS on and off, with limited 
*nix experience, for a year or so now, and I've read a lot of posts by many of 
you.  There's still a ton I don't know or understand.

We've been having awful I/O latencies on our 7210, which runs about 40 VMs spread 
over 3 hosts, with no SSDs / intent logs.  I'm trying to get some, but the 
price...  so I had to work up some sort of proof of concept to show it would 
help.  It so happens I just purchased three X25-Ms for my own use and could spare 
one for a few weeks (though I hate to think how many cells I burned testing), and 
we also happen to have a couple of small home-built ZFS servers around to test with.

Resources were pretty limited: a home-built box with five 250 GB 7200 RPM SATA 
disks, each connected to the Intel server board's built-in SATA ports.  I 
reduced the RAM to 2 GB for the tests.  The OS is on a SATA disk in its own 
single-disk pool.  The X25-M was used as a ZIL log device for the SSD tests.
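
For anyone wanting to reproduce the layout: the pools were built with plain 
zpool commands, something along these lines (the device names here are 
placeholders, not my actual ones):

  # striped test pool across the five SATA disks (the RAID0 case)
  zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

  # for the +SSD runs, attach the X25-M as a separate intent log (slog)
  zpool add tank log c2t0d0

  # sanity-check the layout
  zpool status tank

The RAIDz and mirror cases are the same idea, with the "raidz" or "mirror" 
keyword in the create line.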

I created 5 VMs on a single ESX host (Dell 1950) with a datastore connected 
to the mini-thumper running 2009.06 (snv_111b) via NFS over a single gigabit link.
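
The NFS side is nothing exotic; roughly the following, with the dataset name 
being a placeholder:

  # dataset for the ESX datastore, exported over NFS
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore

ESX then mounts <server>:/tank/vmstore as an NFS datastore.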

Each VM runs Windows 2003 R2 on a 4 GB C:\ VMDK.  Dynamo runs one worker against 
the local C: VMDK on each guest and reports back to my workstation, so the 
numbers below are totals across the dynamo instances on all 5 guests.

Each test used an 8 KB transfer size with a 67% read, 70% random access pattern.  
Each test ran for 5 minutes.
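
While the tests ran I also kept an eye on the pool from the Solaris side with 
something like:

  # per-vdev I/O statistics, sampled every 5 seconds
  zpool iostat -v tank 5

which makes it easy to see when the log device is absorbing the synchronous 
NFS writes.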

Queue Depth, IOPS, Avg Latency (ms)

RAID0 - 5 Disk
 1,  326,  15.3
 2,  453,  22
 4,  516,  38.7
 8,  503,  72.3
16,  526, 152
32,  494, 323

RAID0 - 4 Disk + SSD
 1,  494,  10.1
 2,  579,  17.2
 4,  580,  34.4
 8,  603,  66.3
16,  598, 133.6
32,  600, 266

RAIDz - 5 Disk
 1,  144,  34
 2,  162,  60
 4,  184, 108
 8,  183, 218
16,  175, 455
32,  185, 864

RAIDz - 4 Disk + SSD
 1,  222,  22
 2,  201,  50
 4,  221,  90
 8,  219, 181
16,  228, 348
32,  228, 700

RAID10 - 4 Disk
 1,  159,  31
 2,  206,  48
 4,  236,  84
 8,  194, 205
16,  243, 328
32,  219, 728

RAID10 - 4 Disk + SSD
 1,  270,  18
 2,  332,  30
 4,  363,  54
 8,  320, 124
16,  325, 245
32,  333, 479

(wonders how the formatting will turn out)

It's interesting that going from a 5-disk RAIDz to a 4-disk mirror (both with no 
log device) gives a bigger increase than adding the X25-M log to the 4-disk RAIDz.

The increase in IOPS from adding the X25-M to the mirror setup is nice, if 
smaller than I had expected, but the halving of the latencies is even nicer.

I am curious how this would scale with a lot more disks; the SSD didn't increase 
performance as much as I had hoped, but it's still nice to see.  I'm thinking 
that's mostly down to my limit of 4-5 disks.  I'm also not sure how much 
difference there is between the X25-M and the Sun SSDs for the 7000 series.

From what I've read so far, the X25-E needs its write cache forced off to 
function properly, whereas the X25-M seems to obey the cache flush commands?  I 
was also curious whether I would have seen a bigger increase with an SLC drive 
instead of the MLC...  searching turns up so much old info.
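
For what it's worth, the write cache in question is the drive's volatile cache, 
which on Solaris can be toggled from format's expert mode; if I have the menu 
path right, it's roughly:

  # enter expert mode and select the SSD, then navigate:
  #   cache -> write_cache -> display   (show the current state)
  #   cache -> write_cache -> disable   (force it off)
  format -e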

Comments welcome!


Re: [zfs-discuss] VMGuest IOMeter numbers

2010-07-25 Thread Oliver Seidel
Hello Mark,

I assume you have a read-intensive workload without many synchronous writes, so 
leave out the ZIL.  Please try the following:

* configure the controller to show individual disks, no RAID
* create one large striped pool (zpool create tank c0t0d1 c0t0d2 c0t0d3 c0t0d4 c0t0d5)
* add your SSD (c0t0d6 here) to the existing pool as an L2ARC cache device, not 
a ZIL (zpool add tank cache c0t0d6; a combined sketch follows below)
* use the build 134 development release from here: 
http://www.genunix.org/distributions/indiana/
* give 80% of all the memory that your box has to the OpenSolaris instance that 
is serving the ZFS
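
Put together, and assuming your disks really do enumerate as c0t0d1 through 
c0t0d6 (adjust to your actual device names), that would look like:

  # one large striped pool across the five data disks
  zpool create tank c0t0d1 c0t0d2 c0t0d3 c0t0d4 c0t0d5

  # add the SSD as an L2ARC cache device rather than a log
  zpool add tank cache c0t0d6

  # if "80% of the memory" means letting the ARC have it, cap the ARC
  # via /etc/system and reboot; the value is in bytes, and this example
  # assumes the 2 GB test box (0.8 * 2 GiB):
  #   set zfs:zfs_arc_max = 1717986918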

I'm new here, so others may improve on these suggestions.

Best regards,

Oliver