Very nice mathematics, I'm going to save this email. :)
I didn't really think about the rebuild rate. Everything I do now is RAID0 SANs with live SAN-to-SAN replication. I don't even mess with RAID on my SANs anymore, and I don't drop in ESX or ESXi without it, since the cost is really minimal. If I get any hiccup on the SAN, be it disk, power, or NIC, I just fail over to the second one and repair the issue, and it fails back if there's an error on the other one. Just my $.02.

I offset the cost of the SAN in two ways. One is going with basically no disks in the ESXi host (a thumb drive, for example) or SATA. I build two low-end servers and put all the storage there. Since I'm in RAID0 I get more usable disk, so for 8 disks I put 4 in each SAN device. The cost of a single CPU with 2GB of RAM is only a few hundred dollars, so I have a fully mirrored, redundant SAN for about $1k more.

For snapshots of the ESXi box you could drop in some 1TB SATA drives ($99 at Tiger) and just use the local storage of the server for its backups, or drop out to an NFS share at a tertiary location. I would imagine the backup vendors will have something that works around the ESX/ESXi model in the very near future to get around some of the shell scripts required for snapshotting.

From: Scott Kaufman at HQ [mailto:[email protected]]
Sent: Tuesday, February 03, 2009 2:27 PM
To: NT System Admin Issues
Subject: RE: Vmware Disk Ideas

Here's an abbreviated version of notes I've collected over the years on RAID write penalties, raw spindle IOPS, and how to calculate the minimum number of spindles. This has worked well for internal storage and shared storage (low to middle tier):

RAID0 = write penalty of 1 (each write lands on a single spindle)
RAID1 or 10 = write penalty of 2
RAID5 = write penalty of 4
RAID6 = write penalty of 6 or 8 (recently added & depends on the manufacturer)

A single 15k SAS drive generally has a raw max of 180 IOPS. 10k SATA drives are around 160, and 15k Fibre Channel drives around 190.
To calculate the minimum number of spindles for a given IOPS load:

(read IOPS + (write IOPS * write penalty)) / raw spindle IOPS = minimum # of spindles [1]

The total IOPS load, the read/write ratio, the size of the I/Os, the amount of cache memory on the RAID controller, and the RAID level you're using all play into how many IOPS you can get out of a storage system. Having set up a few ESX systems with internal and shared storage using 146GB 15k SAS drives, RAID 5 works extremely well with these disk configurations [2]. Unless you upgrade the drives to 500GB or larger, I wouldn't use RAID 6 for a VMFS partition. Use RAID 6 once the individual disks get over 500GB in size, because the rebuild time can be so long that the likelihood of another disk failure during the rebuild starts to grow. YMMV, and definitely test the storage system to make sure it will give the performance you need before you start adding VMs.

Scott

[1] An example: you determine that 1000 IOPS is needed to handle the VM load. Using 180 as the maximum IOPS per SAS spindle and assuming a 70/30 read-to-write ratio, the minimum number of spindles for a RAID 5 partition is: (700 + (300 * 4)) / 180 ≈ 10.6 spindles to handle 1000 IOPS raw. You can drop a few spindles if you have lots of controller cache dedicated to writes.

[2] That is, until you get a VM doing 80%+ writes to disk, like SQL or Exchange, or an undersized VM with a poorly written program thrashing the pagefile.

From: Brian Desmond [mailto:[email protected]]
Sent: Tuesday, February 03, 2009 11:41 AM
To: NT System Admin Issues
Subject: RE: Vmware Disk Ideas

6 drives is a lot of IOPS. I'd be inclined to say you'll be just fine given the workload of a typical SBS instance. Just a thought, but why not go with Hyper-V? It's a lot less painful to manage, especially considering the skillset of a typical SBS shop.
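Scott's formula and worked example translate directly into a short script. A minimal sketch in Python, using the write penalties and per-spindle IOPS figures from his notes (treating RAID0 as a penalty of 1, since each write lands on a single spindle); the function name and parameters are illustrative, not from any tool:

```python
# Minimum-spindle estimate per Scott's formula:
# (read IOPS + write IOPS * write penalty) / raw spindle IOPS

WRITE_PENALTY = {      # from the notes above
    "raid0": 1,        # each write hits one spindle
    "raid1": 2,
    "raid10": 2,
    "raid5": 4,
    "raid6": 6,        # 6 or 8, depending on the manufacturer
}

def min_spindles(total_iops, read_pct, raid_level, spindle_iops=180):
    """Minimum spindle count for a given IOPS load.

    spindle_iops defaults to 180 (15k SAS); use ~160 for 10k SATA
    or ~190 for 15k Fibre Channel.
    """
    reads = total_iops * read_pct
    writes = total_iops * (1 - read_pct)
    return (reads + writes * WRITE_PENALTY[raid_level]) / spindle_iops

# Worked example [1]: 1000 IOPS, 70/30 read/write, RAID 5 on 15k SAS
print(round(min_spindles(1000, 0.70, "raid5"), 1))  # -> 10.6, round up to 11
```

As the footnote says, a generous write cache on the controller lets you shave a few spindles off this estimate; the formula is a floor for uncached spinning disk.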
Thanks,
Brian Desmond
[email protected]
c - 312.731.3132

From: [email protected] [mailto:[email protected]]
Sent: Tuesday, February 03, 2009 10:30 AM
To: NT System Admin Issues
Subject: RE: Vmware Disk Ideas

Thanks Ben and Don,

Just wanted to make sure that the performance would be acceptable with 6 drives for RAID 6. I was trying to get 8 drives but they wouldn't go for it.

Thanks again,
Greg

From: Don Ely [mailto:[email protected]]
Sent: Tuesday, February 03, 2009 11:21 AM
To: NT System Admin Issues
Subject: Re: Vmware Disk Ideas

+1

On Tue, Feb 3, 2009 at 5:12 AM, Benjamin Zachary - Lists <[email protected]> wrote:

Hi Greg,

I think running at that level of performance with that few users probably won't make any difference the client would actually be able to see. Maybe if there is heavy SQL or something on there you could look at RAID 10 for the I/O increase. Given your description below, though, I would look at RAID 5/6. ESXi runs about 90% through RAM, so you don't really see a lot of disk I/O from that per se.

From: [email protected] [mailto:[email protected]]
Sent: Monday, February 02, 2009 10:47 PM
To: NT System Admin Issues
Subject: Vmware Disk Ideas

Just wondering what everyone's take would be on a VMware ESXi box that will run 2 VMs, SBS 2003 and SBS 2008, for some time during a migration. 6 x 146GB 15k SAS drives running either RAID 6 or RAID 10. Assuming the storage loss is acceptable for RAID 10, how much performance are we going to see with RAID 10 vs. going with RAID 6 and getting the two-drive failure protection along with the write hit? Small office, about 20 users, Peachtree, SMB-sized email. Nothing insane (larger mailboxes are 1.5GB to 2.5GB), plus the normal SBS Exchange and SQL servers for SharePoint services; about 100+ GB in files now, growing at least another 75 to 100 GB over 2 years. I think either way will work well, but I just don't have that much experience with RAID 6 outside of NetApp, and was curious.
Thanks,
Greg
