Hi,
I'm looking into virtualizing some of our servers onto two (or more)
physical nodes with either KVM or Xen. What are the 'best practices' for
running virtual _servers_ with KVM? Any good/bad experiences with
running KVM for virtual servers that have to run for months on end?
I've installed Ubuntu 8.04 because it ships KVM as its default
virtualization tool and seems to be the only 'enterprise' distribution
with KVM right now. I set up one host as an iSCSI target and installed
Ubuntu with KVM on two other nodes. I can create a virtual server with
virt-manager, but it seems live migration is not (yet) supported by
libvirt/virsh? So how are other people running their KVM virtual
servers? Do you create a script for each virtual server and invoke kvm
directly? And how do you handle live migration in that case: launch the
script with an 'incoming' parameter on the target host and run the
migrate command manually? Or is there another (automated) way? I once
tried live migration on a test host and, if I recall correctly, the kvm
process kept running on the source host even after the guest had been
migrated to the target. Is that the expected behaviour?
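To be clear about what I mean by the manual procedure, this is roughly
what I have in mind (ports, memory size, and device names are just made
up for illustration):

```shell
# On the target host: start the same guest configuration, waiting
# for incoming migration state on TCP port 4444.
kvm -m 1024 -smp 2 \
    -drive file=/dev/disk/by-id/scsi-EXAMPLE-LUN,if=virtio \
    -incoming tcp:0.0.0.0:4444

# On the source host: in the QEMU monitor of the running guest,
# start the migration and poll until it completes:
#   (qemu) migrate -d tcp:target-host:4444
#   (qemu) info migrate
```

Is this how people actually do it in production, or does everyone wrap
it in something more automated?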
What type of shared storage works best with KVM (or Xen, for that
matter)? Our physical servers will be connected to a SAN. Should I
create volumes on the SAN and export them to my physical servers, where
I can then use them as /dev/disk/by-id/xxx disks in my KVM configs? Or
should I configure my two servers as a GFS cluster and use files as the
backend for my KVM virtual machines? What are you using as shared
storage?
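To make the two options concrete, here is roughly what I am comparing
(all device names and paths below are invented examples):

```shell
# Option 1: a raw SAN LUN exported to both hosts, handed to the
# guest directly as a block device:
kvm -drive file=/dev/disk/by-id/scsi-EXAMPLE-LUN,if=virtio ...

# Option 2: a disk image file on a cluster filesystem (e.g. GFS)
# mounted on both hosts:
qemu-img create -f raw /mnt/gfs/guests/webserver.img 20G
kvm -drive file=/mnt/gfs/guests/webserver.img,if=virtio ...
```

I would expect the raw LUN to perform better, while the cluster
filesystem is more flexible for adding guests, but I would like to hear
real-world experiences.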
Regards,
Rik