On Thu, Jun 18, 2009 at 5:42 PM, Atom Powers <[email protected]> wrote:

> I'm considering moving some of my services into virtual hosts, but
> I've never used VMware before. Perhaps one of you could be kind enough
> to explain their products to me? Specifically I'm interested in:
> * Being able to migrate a VM between hosts for High Availability
> * Clustering VMs on both 32 bit and 64 bit hardware.
> * Managing the whole shebang from a single workstation.
> * Have mostly FreeBSD and some Ubuntu and MS Windows VMs.
>
> I've recently installed ESXi 4 on one of our newer workstations for
> testing, and I'm moderately impressed. I then installed ESXi 3.5 on
> one of our older servers (not 64 bit) but I haven't played with
> vSphere yet. Is this the right direction? Is there some other product
> I should be looking at?
>
> On a similar note, what kind of storage should I be using? Currently I
> have a few NFS hosts doing most of my storage, which also happen to be
> the samba servers. Many of my service servers have way more storage
> than they need. What is the best way to manage data with VMs?


VMware puts on a ton of webinars: http://www.vmware.com/a/webcasts/
Pre-recorded ones are here: http://www.vmware.com/a/webcasts/recorded/
I have found them very helpful for gaining a high-level understanding of
what you can do with the platform.

We use an ESX Enterprise 3.5 cluster here, managed with VirtualCenter.  We
deliver storage via NFS from our NetApp (it is what we had prior to
virtualization).  I am very happy with the environment.  You mention having
a few "NFS hosts" - if you are putting the hard-drive storage of your VMs on
these, then any outage of those devices will take out every VM they serve.
We use a NetApp, which is incredibly reliable; make sure that whatever
storage you provide is equally stable, as it represents a potential failure
point for everything served by it.
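For context, attaching an NFS export as a datastore on an ESX 3.5 host is a
one-liner with esxcfg-nas from the service console; the filer hostname,
export path, and datastore label below are hypothetical placeholders:

```shell
# Mount an NFS export as a datastore on an ESX 3.5 host.
# "filer01", "/vol/vmstore", and "nfs-vmstore" are placeholders.
esxcfg-nas -a -o filer01 -s /vol/vmstore nfs-vmstore

# List configured NAS datastores to confirm the mount.
esxcfg-nas -l
```

The same label must be used on every host in the cluster so that vMotion
sees the datastore as shared storage.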

vMotion is a wonderful feature that is fantastic for improving availability.
Roughly a month after setting this environment up, one of the physical
servers started alerting that its RAM was going flaky.  We immediately put
it into maintenance mode, all the VMs moved off of it, and we were then able
to diagnose and replace the faulty memory without a single moment of
downtime.  This is probably the most cost-effective way of improving
availability I've ever encountered.
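That maintenance workflow can also be driven from the VMware Remote CLI
rather than the VI Client; a rough sketch with vicfg-hostops (the hostname
is a placeholder, and this assumes DRS handles the VM evacuation):

```shell
# Put a host into maintenance mode; with DRS in fully automated mode the
# cluster vMotions running VMs off the host first.
# "esx01.example.com" is a placeholder hostname.
vicfg-hostops --server esx01.example.com --username root \
    --operation enter

# After the hardware work is done, bring the host back into service:
vicfg-hostops --server esx01.example.com --username root \
    --operation exit
```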

We have not moved to vSphere yet, but it is looking even better.  One
feature not mentioned yet is Storage vMotion - you can live-migrate the
storage part of a virtual machine to a different storage pool.  In 3.5 this
was accomplished with some rather lengthy command-line scripts, but it has
been wrapped up with a pretty bow and put into the GUI for vSphere, and it
looks to be as simple as vMotioning the server itself.  If your networking
is set up fully redundantly to your physical hosts, then you have an
environment where every single component can have maintenance performed on
it with no downtime to the VMs.

ESXi vs. ESX is something to explore as well - ESXi gives you an appliance
shell, while ESX gives you a regular shell in a service console that is
basically RHEL 3, so you can muck with it a bit more.  ESXi wasn't an option
when we deployed, but I believe it would have been a viable option for us
(with the relevant enterprise licenses added).  I intend to maintain a dual
environment down the road - one with servers managed in vCenter with all
the clustering goodness, and another pool of ESXi boxes with local storage
for services that have different requirements.

--
Neil Neely
_______________________________________________
Discuss mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/