Our cluster isn't far off from the one you're planning. We built our own SAN using Open-E (a software iSCSI target package) on Supermicro hardware, with very good performance, though we did pack it very full of spindles (111, including hot spares) at about twice the cost you're looking at. The result is 24 TB of high-availability SAN storage between two datacenters. I would say the NetApp is a good price considering you'll be getting vendor support. We decided to keep two physical domain controllers in addition to two virtual ones, which lets us keep operating with all VM hosts down in a disaster situation. Our vCenter is also on a physical host, though only because it started as a physical box and it's been easier to keep it that way than to convert it (it will be rebuilt as a virtual machine eventually).
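(If you want to gut-check the usable capacity on that FAS2240 quote, the arithmetic is simple enough to script. Below is a rough sketch. The one hot spare and the RAID-DP-style double parity are my assumptions, not numbers from your quote, and it ignores drive right-sizing and filesystem overhead, which will shave off more.)

```python
def usable_tb(spindles, drive_gb, hot_spares, raid_group, parity_per_group, mirrored=True):
    """Rough usable-capacity estimate: subtract hot spares and parity disks,
    then optionally halve for a site-to-site mirror. Ignores right-sizing
    and filesystem overhead, so treat the result as an upper bound."""
    data_disks = spindles - hot_spares
    groups = data_disks // raid_group           # number of RAID groups
    parity = groups * parity_per_group          # disks lost to parity
    usable = (data_disks - parity) * drive_gb / 1000  # decimal TB
    return usable / 2 if mirrored else usable

# The proposed shelf: 12 x 600 GB drives, assuming 1 hot spare and one
# RAID-DP-style group (2 parity disks) -- assumptions, not quote details.
print(usable_tb(12, 600, hot_spares=1, raid_group=11,
                parity_per_group=2, mirrored=False))
```

So "probably only 3-5 usable out of 7.2 TB raw" is about right once you account for spares, parity, and overhead.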
You have the basic idea of how the VM system works exactly right. Single gigabit Ethernet is actually not much of a bottleneck, since your usual deployment will have a dedicated NIC for storage traffic. Sharing a storage NIC with your VM traffic is a very bad idea except under extremely light load. We actually have four gigabit ports in aggregation on each of our storage hosts, and a dedicated iSCSI SAN adapter (a QLogic QLE4060) in each of our four vSphere hosts. Modern HP and Dell servers have built-in support for iSCSI offloading, so you wouldn't need the dedicated host card. (It handily outperforms the software iSCSI adapter included with ESXi, and also gets you faster boots on vSphere 5.)

Each of our VM servers has 8 NICs: four onboard and four on an Intel quad-port card. Three ports on each server are dedicated to vMotion between hosts (which allows hot migration of virtual machines from host to host for load balancing), three are aggregated for VM traffic (with multiple VLANs passed to the aggregated port, simulating several separate wired LANs), and the last two are for host management, which gives us redundancy there too. You'll want to make sure you have high-quality networking gear; ours is Juniper EX3200 switches in both datacenters.

We opted for Datacenter instead of Enterprise to avoid being limited in our quantity of Windows guests; it's nice to be able to fire up any number of Windows server systems you feel like having. I would bet that 12 VMs would not be a lot of load on a cluster like you've described, and I'd say go with Datacenter. (The jump was easier for us since we're academic, which means we pay next to nothing for our Microsoft licenses.) I would also say go ahead and virtualize everything. (We would have if we didn't already have two Windows 2008 domain controllers before deploying vSphere.)
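(The Enterprise-vs-Datacenter decision is really just arithmetic. Here's a hedged sketch of the break-even: the dollar figures and the 2-socket host assumption are placeholders I made up, not real quotes, so plug in your reseller's numbers. The rule it encodes is that a 2008 R2 Enterprise license covers up to 4 virtual instances on one host, while Datacenter is licensed per physical processor with unlimited VMs.)

```python
import math

# Placeholder prices -- substitute your actual reseller quotes.
ENTERPRISE_PRICE = 2400   # assumed cost per Enterprise license
DATACENTER_PRICE = 3000   # assumed cost per processor for Datacenter
HOSTS, CPUS_PER_HOST = 3, 2

def enterprise_cost(total_vms, hosts=HOSTS):
    # Each Enterprise license covers up to 4 virtual instances on one
    # host, and every host running Windows VMs needs at least one license.
    licenses = max(hosts, math.ceil(total_vms / 4))
    return licenses * ENTERPRISE_PRICE

def datacenter_cost(hosts=HOSTS, cpus=CPUS_PER_HOST):
    # Datacenter is per physical processor; VM count is unlimited.
    return hosts * cpus * DATACENTER_PRICE

for vms in (12, 20, 40):
    print(vms, enterprise_cost(vms), datacenter_cost())
```

With these made-up prices, Enterprise is cheaper at 12 VMs but Datacenter wins once you start stacking test and staging VMs; the crossover point depends entirely on your real pricing.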
You sound like you'll be well set on hardware; your biggest bottleneck is likely to be RAM, and that's an easy hardware (and licensing) upgrade. Also, you can always purchase a full vCenter license and add vSphere hosts to your heart's desire, which lets you expand beyond the 3 hosts in the kit you're purchasing.

----
Jack Kramer
Manager of Information Technology
University Relations, Michigan State University
w: 517-884-1231 / c: 248-635-4955

From: David Mazzaccaro <[email protected]<mailto:[email protected]>>
Reply-To: NT System Admin Issues <[email protected]<mailto:[email protected]>>
Date: Tue, 13 Mar 2012 11:04:14 -0400
To: NT System Admin Issues <[email protected]<mailto:[email protected]>>
Subject: New to virtualization

Hi all,

I am starting to investigate moving our aging network infrastructure into the virtual world.

~10 servers, 6-7 years old
Windows 2003 domain
Exchange 2003
Citrix 4.0 farm
~190 users

After some initial discussions w/ a reseller, here's what they are recommending:

(3) DL 380 G7 servers (to host the VMs) ~$18,000
(1) NetApp FAS2240 (this is the SAN that would host 12 600GB drives of storage for the VMs) ~$20,000
VMware Essentials Plus kit (VMware software) ~$5,200
(3) MS Windows 2008 R2 Enterprise (this would allow the 3 HP servers to run 4 Windows 2008 VMs each)

I guess the way it would work is that the VMs would reside on the SAN, and the 3 hosts would call up the SAN to load each VM utilizing the host's CPU, RAM, NIC, etc.... right?

I have meetings scheduled w/ 2 other vendors, but verbally both have started the conversation along the same path as above. Being very new to VM, does the above scenario seem to make sense? It is hard for me to imagine all that traffic going between the SAN and the host servers w/o creating a huge bottleneck (over gig Ethernet). Do people recommend virtualizing every server? Domain controllers? Exchange? Citrix farm (4 servers)? Shouldn't something be left physical?
Is 7 TB of storage enough (probably only 3 usable after array config)? Is the NetApp a decent appliance? $20k sounds cheap to me...

I have done a little more reading, and from what I understand, w/ 3 Windows Enterprise licenses I would be limiting myself to 12 VMs. However, if I went w/ 3 Windows Datacenter licenses, for a small increase in price, I would get unlimited VMs? That would allow for actually having a testing environment, and better patch deployment.

Thx

~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ <http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/> ~
---
To manage subscriptions click here: http://lyris.sunbelt-software.com/read/my_forums/
or send an email to [email protected]<mailto:[email protected]> with the body: unsubscribe ntsysadmin
