Re: [CentOS] Virtualization platform choice

2011-03-31 Thread Pasi Kärkkäinen
On Mon, Mar 28, 2011 at 08:59:09AM -0400, Steve Thompson wrote:
 On Mon, 28 Mar 2011, Pasi Kärkkäinen wrote:

 On Sun, Mar 27, 2011 at 09:41:04AM -0400, Steve Thompson wrote:
 First. With Xen I was never able to start more than 30 guests at one time
 with any success; the 31st guest always failed to boot or crashed during
 booting, no matter which guest I chose as the 31st. With KVM I chose to
 add more guests to see if it could be done, with the result that I now
 have 36 guests running simultaneously.

 Hmm.. I think I've seen that earlier. I *think* it was some trivial
 thing to fix, like increasing the number of available loop devices or so.

 I tried that, and other things, but was never able to make it work. I was 
 using max_loop=64 in the end, but even with a larger number I couldn't  
 start more than 30 guests. Number 31 would fail to boot, and would boot  
 successfully if I shut down, say, #17. Then #17 would fail to boot, and 
 so on.


Ok. I have a link somewhere about how to fix that, but I can't seem to
be able to find it now.


 Hmm.. Windows 7 might be too new for Xen 3.1 in el5, so for win7  
 upgrading to xen 3.4 or 4.x helps. (gitco.de has newer xen rpms for el5 
 if you're ok with thirdparty rpms).

 Point taken; I realize this.

 Third. I was never able to successfully complete a PXE-based installation
 under Xen. No problems with KVM.
 That's weird. I do that often. What was the problem?

 I use the DHCP server (on the host) to supply all address and name  
 information, and this works without any issues. In the PXE case, I was  
 never able to get the guest to communicate with the server for long 
 enough to fully load pxelinux.0, in spite of the bridge setup. I have no 
 idea why; it's not exactly rocket science either.


Ok.

 Can you post more info about the benchmark? How many vcpus did the VMs have?
 How much memory? Were the VMs 32-bit or 64-bit?

 The benchmark is just a make of a large package of my own  
 implementation. A top-level makefile drives a series of makes of a set of 
 sub-packages, 33 of them. It is a compilation of about 1100 C and C++  
 source files, including generation of dependencies and binaries, and  
 running a set of perl scripts (some of which generate some of the C  
 source). All of the sources and target directories were NFS volumes; only 
 the local O/S disks were virtualized. I used 1 vcpu per guest and either  
 512MB or 1GB of memory. The results I showed were for 64-bit guests with  
 512MB memory, but they were qualitatively the same for 32-bit guests.  
 Increasing memory from 512MB to 1GB made no significant difference to the 
 timings. Some areas of the build are serial by nature; the result of 
 14:38 for KVM w/virtio was changed to 9:52 with vcpu=2 and make -j2.


So it's a fork-heavy workload. I'll try to put together some benchmarks/comparisons
soon as well, also with other workloads.

 The 64-bit HVM guests w/o PV were quite a bit faster than the 32-bit HVM  
 guests, as expected. I also had some Fedora diskless guests (no PV) using 
 an NFS root, in which situation the 32-bit guests were faster than the  
 64-bit guests (and both were faster than the HVM guests w/o PV). These  
 used kernels that I built myself.

 I did not compare Xen vs KVM with vcpu > 1.


Ok.

 Did you try Xen HVM with PV drivers?

 Yes, but I don't have the exact timings to hand anymore. They were faster 
 than the non-PV case but still slower than KVM w/virtio.


Yep.

 Fifth: I love being able to run top/iostat/etc on the host and see just
 what the hardware is really up to, and to be able to overcommit memory.
 xm top and iostat in dom0 works well for me :)

 I personally don't care much for xm top, and it doesn't help anyway if  
 you're not running as root or have sudo access, or if you'd like to read  
 performance info for the whole shebang via /proc (as I do).


-- Pasi

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Virtualization platform choice

2011-03-31 Thread David Sommerseth
On 29/03/11 21:13, Kenni Lund wrote:
 On 29/03/2011 15.41, David Sommerseth d...@users.sourceforge.net wrote:
[...snip...]

Thanks a lot for good information!

 The main problem is Windows guests, which easily choke on hardware
 changes (forced reactivation of Windows or unbootable with a BSOD). Each
 qemu-kvm version will behave differently, so moving from one major
 qemu-kvm version to another (0.1x -> 0.1y) will most likely change
 the virtual hardware seen by the guest, unless you have libvirt etc.
 configured to keep track of the guest hardware.

Do you know how to set this up?  Or where to look for more details about
this?  I do have one Windows guest, and I can't afford to break it.

 If it's only Linux guests, it should work fine when moving the guests
 between any recent Linux distributions with KVM. Of course, if you
 don't use libvirt or a similar management solution, the hardware in
 the guest will likely change, for example causing the MAC addresses
 of your NICs to change when moving to a new KVM host.

It's all using libvirt already, so this should be pretty much the same.


kind regards,

David Sommerseth



Re: [CentOS] Virtualization platform choice

2011-03-31 Thread Kenni Lund
2011/3/31 David Sommerseth d...@users.sourceforge.net:
 On 29/03/11 21:13, Kenni Lund wrote:
 The main problem is Windows guests, which easily choke on hardware
 changes (forced reactivation of Windows or unbootable with a BSOD). Each
 qemu-kvm version will behave differently, so moving from one major
 qemu-kvm version to another (0.1x -> 0.1y) will most likely change
 the virtual hardware seen by the guest, unless you have libvirt etc.
 configured to keep track of the guest hardware.

 Do you know how to set this up?  Or where to look for more details about
 this?  I do have one Windows guest, and I can't afford to break it.

AFAIR, the BSODs I've seen while moving Windows 2003 Server guests to
new hosts (Fedora 7 -> 8 -> 9 -> 10 -> 11 -> CentOS 5) were caused by old
VirtIO guest block drivers. If you've installed recent VirtIO drivers in the
guest (like virtio-win from Fedora) and are using a recent
kernel/qemu-kvm on the host, then I don't think you'll have any BSOD
or breakage of the guest. You'll need to reactivate once, but that
should be it.

If it were me moving to CentOS/SL 6 from a non-RH
distribution with a different libvirt/qemu-kvm version, I would not
use the old configuration file directly. Instead I would create a
similar guest from scratch with virt-manager/virt-install, shut down
the guest before installing anything, overwrite the new (empty) image
with your old backup image, and then compare the old XML configuration
with the new one, manually carrying over specific settings
if needed. On first boot you'll probably have to reactivate Windows,
but at least you then know that the libvirt XML configuration for the
guest is compatible with CentOS/SL 6+, and hence that the guest
hardware will stay stable across future host upgrades.
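The procedure above might look roughly like this in commands (guest name, image paths and OS variant are illustrative assumptions, not values from the thread):

```shell
# 1. Create a similar guest from scratch, without booting it:
virt-install --name win2003 --ram 1024 \
  --disk path=/var/lib/libvirt/images/win2003.img,size=20 \
  --os-variant=win2k3 --import --noreboot

# 2. Overwrite the new (empty) image with the old backup image:
cp /backup/win2003-old.img /var/lib/libvirt/images/win2003.img

# 3. Compare the freshly generated XML with the old configuration
#    and carry over anything important by hand:
virsh dumpxml win2003 > new-win2003.xml
diff old-win2003.xml new-win2003.xml

# 4. First boot; expect one Windows reactivation:
virsh start win2003
```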

You can read some more about it here:
http://fedoraproject.org/wiki/Features/KVM_Stable_PCI_Addresses
http://fedoraproject.org/wiki/Features/KVM_Stable_Guest_ABI

Best regards
Kenni


Re: [CentOS] Virtualization platform choice

2011-03-29 Thread David Sommerseth
On 27/03/11 11:57, Jussi Hirvi wrote:
 Some may be bored with the subject - sorry...
 
 Still not decided about virtualization platform for my webhotel v2 
 (ns, mail, web servers, etc.).
 
 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6 
 will not be out in time for me - I guess KVM would be more mature in 
 CentOS 6.

I believe KVM was introduced in RHEL 5.4, so I presume CentOS 5.5 has
working KVM support as well, in addition to Xen.  Of course, it will be
even better with CentOS 6.

For the impatient souls, Scientific Linux 6.0 is released - even though
recent discussions on this list raise some concerns regarding how good the
binary compatibility of SL6 is compared to CentOS 6.

This makes me wonder how well a migration from SL6 to CentOS 6
would go, if all KVM guests are on dedicated/separate LVM volumes and you
take a backup of /etc/libvirt.  So when CentOS 6 is released, scratch SL6
and install CentOS 6, put back the SL6 libvirt configs ... would there be
any issues with such an approach?  And what about other KVM-based host OSes?
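As a rough sketch, the approach being proposed would amount to something like this (paths are placeholders; this assumes the guest LVs live in a volume group the reinstall leaves untouched):

```shell
# Before scratching SL6: save the guest definitions.
tar czf /root/libvirt-backup.tar.gz /etc/libvirt

# The guest disks are LVM volumes; as long as the CentOS 6 install
# doesn't reformat that volume group, they survive as-is.
lvs --noheadings -o vg_name,lv_name

# After installing CentOS 6: restore and re-register the guests.
tar xzf /root/libvirt-backup.tar.gz -C /
for xml in /etc/libvirt/qemu/*.xml; do
    virsh define "$xml"
done
```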


kind regards,

David Sommerseth



Re: [CentOS] Virtualization platform choice

2011-03-29 Thread Lucian
On Tue, Mar 29, 2011 at 2:41 PM, David Sommerseth
d...@users.sourceforge.net wrote:
 On 27/03/11 11:57, Jussi Hirvi wrote:
 Some may be bored with the subject - sorry...

 Still not decided about virtualization platform for my webhotel v2
 (ns, mail, web servers, etc.).

 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
 will not be out in time for me - I guess KVM would be more mature in
 CentOS 6.

 I believe KVM was introduced in RHEL 5.4, so I presume CentOS 5.5 has
 working KVM support as well, in addition to Xen.  Of course, it will be
 even better with CentOS 6.

 For the impatient souls, Scientific Linux 6.0 is released - even though
 recent discussions on this list raise some concerns regarding how good the
 binary compatibility of SL6 is compared to CentOS 6.

If it's good enough for Fermilab/CERN then it's prolly good enough for
many (even most) people on this list, imho. :)


Re: [CentOS] Virtualization platform choice

2011-03-29 Thread Kenni Lund
On 29/03/2011 15.41, David Sommerseth d...@users.sourceforge.net wrote:
 This makes me wonder how well a migration from SL6 to CentOS 6
 would go, if all KVM guests are on dedicated/separate LVM volumes and you
 take a backup of /etc/libvirt.  So when CentOS 6 is released, scratch SL6
 and install CentOS 6, put back the SL6 libvirt configs ... would there be
 any issues with such an approach?

I would not expect any issues at all, I would expect it to just
work. As long as you use CentOS6+/SL6+ (or Fedora 12+) *with* the
libvirtd/virsh/virt-manager management tools, you shouldn't run into
any major problems. This is because RH has implemented a stable guest
ABI and stable guest PCI addresses, so the virtual hardware will
remain the same on different KVM/libvirt hosts.
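Concretely, the stable-PCI-addresses feature pins device slots in the guest XML, so a fragment like the following (values invented for illustration, not taken from the thread) stays identical when the definition moves to another host:

```xml
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <!-- libvirt records the slot, so the guest sees the same "hardware"
       on every host that honours this address -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
```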

 And what about other KVM based host OSes?

That depends on a lot of things... In general, if you're not using one
of the RH-based distributions mentioned above, use the latest version
of the distribution in question, to hopefully receive some of the
bits upstreamed from the RH distributions. Luckily things are slowly
stabilizing, so it should only be a question of time before any
distribution with a recent kernel, a recent qemu-kvm executable and a
recent libvirt version is compatible with the others in terms
of moving KVM guests around.

The main problem is Windows guests, which easily choke on hardware
changes (forced reactivation of Windows or unbootable with a BSOD). Each
qemu-kvm version will behave differently, so moving from one major
qemu-kvm version to another (0.1x -> 0.1y) will most likely change
the virtual hardware seen by the guest, unless you have libvirt etc.
configured to keep track of the guest hardware.

If it's only Linux guests, it should work fine when moving the guests
between any recent Linux distributions with KVM. Of course, if you
don't use libvirt or a similar management solution, the hardware in
the guest will likely change, for example causing the MAC addresses
of your NICs to change when moving to a new KVM host.

Best regards
Kenni


Re: [CentOS] Virtualization platform choice

2011-03-28 Thread Pasi Kärkkäinen
On Sun, Mar 27, 2011 at 09:41:04AM -0400, Steve Thompson wrote:
 
 The slightly longer story...
 
 First. With Xen I was never able to start more than 30 guests at one time 
 with any success; the 31st guest always failed to boot or crashed during 
 booting, no matter which guest I chose as the 31st. With KVM I chose to 
 add more guests to see if it could be done, with the result that I now 
 have 36 guests running simultaneously.
 

Hmm.. I think I've seen that earlier. I *think* it was some trivial
thing to fix, like increasing the number of available loop devices or so.
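For reference, a hedged sketch of that loop-device fix (assuming an EL5 host where the loop driver is built as a module; the exact limit needed depends on how many file-backed guests you run):

```shell
# Assumption: EL5 host, loop driver built as a module (not compiled in).
# The default is only 8 loop devices; each file-backed guest uses one.
echo "options loop max_loop=64" >> /etc/modprobe.conf

# Reload the module (with no loop devices in use), or reboot:
rmmod loop && modprobe loop

# Verify the new device count:
ls /dev/loop* | wc -l
```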

 Second. I was never able to keep a Windows 7 guest running under Xen for 
 more than a few days at a time without a BSOD. I haven't seen a single 
 crash under KVM.
 

Hmm.. Windows 7 might be too new for Xen 3.1 in el5,
so for win7 upgrading to xen 3.4 or 4.x helps. 
(gitco.de has newer xen rpms for el5 if you're ok with thirdparty rpms).

Then again xen in el5.6 might have fixes for win7, iirc.

 Third. I was never able to successfully complete a PXE-based installation 
 under Xen. No problems with KVM.
 

That's weird. I do that often. What was the problem?

 Fourth. My main work load consists of a series of builds of a package of 
 about 1100 source files and about 500 KLOC's; all C and C++. Here are the 
 elapsed times (min:sec) to build the package on a CentOS 5 guest (1 vcpu), 
 each time with the guest being the only active guest (although the others 
 were running). Sources come from NFS, and targets are written to NFS, with 
 the host being the NFS server.
 
 * Xen HVM guest (no pv drivers): 29:30
 * KVM guest, no virtio drivers: 23:52
 * KVM guest, with virtio: 14:38
 

Can you post more info about the benchmark? How many vcpus did the VMs have?
How much memory? Were the VMs 32-bit or 64-bit?

Did you try Xen HVM with PV drivers?
I've been planning to do benchmarks myself as well, so just curious.

 Fifth: I love being able to run top/iostat/etc on the host and see just 
 what the hardware is really up to, and to be able to overcommit memory.
 

xm top and iostat in dom0 work well for me :)

-- Pasi



Re: [CentOS] Virtualization platform choice

2011-03-28 Thread Steve Thompson

On Mon, 28 Mar 2011, Pasi Kärkkäinen wrote:


On Sun, Mar 27, 2011 at 09:41:04AM -0400, Steve Thompson wrote:

First. With Xen I was never able to start more than 30 guests at one time
with any success; the 31st guest always failed to boot or crashed during
booting, no matter which guest I chose as the 31st. With KVM I chose to
add more guests to see if it could be done, with the result that I now
have 36 guests running simultaneously.


Hmm.. I think I've seen that earlier. I *think* it was some trivial
thing to fix, like increasing the number of available loop devices or so.


I tried that, and other things, but was never able to make it work. I was 
using max_loop=64 in the end, but even with a larger number I couldn't 
start more than 30 guests. Number 31 would fail to boot, and would boot 
successfully if I shut down, say, #17. Then #17 would fail to boot, and so 
on.


Hmm.. Windows 7 might be too new for Xen 3.1 in el5, so for win7 
upgrading to xen 3.4 or 4.x helps. (gitco.de has newer xen rpms for el5 
if you're ok with thirdparty rpms).


Point taken; I realize this.


Third. I was never able to successfully complete a PXE-based installation
under Xen. No problems with KVM.

That's weird. I do that often. What was the problem?


I use the DHCP server (on the host) to supply all address and name 
information, and this works without any issues. In the PXE case, I was 
never able to get the guest to communicate with the server for long enough 
to fully load pxelinux.0, in spite of the bridge setup. I have no idea 
why; it's not exactly rocket science either.
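For comparison, a minimal sketch of the kind of bridge setup usually needed for PXE-booting guests on an EL5 host (device names and the address are assumptions, not the poster's actual configuration; the bridge-netfilter sysctl in particular can silently eat the DHCP/TFTP traffic a PXE boot depends on):

```shell
# Hypothetical EL5 bridge config (names br0/eth0 assumed).
cat > /etc/sysconfig/network-scripts/ifcfg-br0 <<'EOF'
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
EOF

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
EOF

# Keep iptables from filtering bridged frames, which can break
# the DHCP/TFTP exchange during PXE boot:
sysctl -w net.bridge.bridge-nf-call-iptables=0
```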



Can you post more info about the benchmark? How many vcpus did the VMs have?
How much memory? Were the VMs 32-bit or 64-bit?


The benchmark is just a make of a large package of my own 
implementation. A top-level makefile drives a series of makes of a set of 
sub-packages, 33 of them. It is a compilation of about 1100 C and C++ 
source files, including generation of dependencies and binaries, and 
running a set of perl scripts (some of which generate some of the C 
source). All of the sources and target directories were NFS volumes; only 
the local O/S disks were virtualized. I used 1 vcpu per guest and either 
512MB or 1GB of memory. The results I showed were for 64-bit guests with 
512MB memory, but they were qualitatively the same for 32-bit guests. 
Increasing memory from 512MB to 1GB made no significant difference to the 
timings. Some areas of the build are serial by nature; the result of 14:38 
for KVM w/virtio was changed to 9:52 with vcpu=2 and make -j2.


The 64-bit HVM guests w/o PV were quite a bit faster than the 32-bit HVM 
guests, as expected. I also had some Fedora diskless guests (no PV) using 
an NFS root, in which situation the 32-bit guests were faster than the 
64-bit guests (and both were faster than the HVM guests w/o PV). These 
used kernels that I built myself.


I did not compare Xen vs KVM with vcpu > 1.


Did you try Xen HVM with PV drivers?


Yes, but I don't have the exact timings to hand anymore. They were faster 
than the non-PV case but still slower than KVM w/virtio.



Fifth: I love being able to run top/iostat/etc on the host and see just
what the hardware is really up to, and to be able to overcommit memory.

xm top and iostat in dom0 works well for me :)


I personally don't care much for xm top, and it doesn't help anyway if 
you're not running as root or have sudo access, or if you'd like to read 
performance info for the whole shebang via /proc (as I do).


Steve


Re: [CentOS] Virtualization platform choice

2011-03-28 Thread Warren Young
On 3/27/2011 3:07 PM, Jure Pečar wrote:

 It's interesting that nobody so far mentioned openVZ

I wouldn't use it since being bitten by its lack of swap support.

I run a couple of web sites on a fairly heavy web stack which loads up 
a bunch of dependencies that don't actually end up being used by my 
site, but because there is no swap, all that unused code eats real RAM.

Because of that, I had to upgrade to a 512 MB VPS hosting plan from a 
256 MB plan.  My sites initially ran just fine under the 256 MB plan, but
adding just one feature that used one of the piggier parts of the
web stack pushed me over the limit and I had to
upgrade.  If the VPS could use swap, I'm sure enough of the web stack
would stay swapped out that I could have kept using the 256 MB plan.

My VPS provider may find OpenVZ more efficient than Xen, but it cost me
about 50% more in hosting fees.  That's less efficient in my book.


Re: [CentOS] Virtualization platform choice

2011-03-28 Thread Andres Toomsalu
Please also consider OpenNode - http://opennode.activesys.org - a CentOS-based
KVM full-virtualization + OpenVZ Linux containers solution. It supports VM
templating, live migration, etc., with an easy bare-metal setup.

Cheers,
-- 
Andres Toomsalu, and...@active.ee

On 27.03.2011, at 12:57, Jussi Hirvi wrote:

 Some may be bored with the subject - sorry...
 
 Still not decided about virtualization platform for my webhotel v2 
 (ns, mail, web servers, etc.).
 
 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6 
 will not be out in time for me - I guess KVM would be more mature in 
 CentOS 6.
 
 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)
 
 http://www.vmware.com/products/vsphere-hypervisor/overview.html
 
 I would need a tutorial about that... For example, does that run without 
 a host OS? Can it be managed only via Win clients? Issues with CentOS 
 4/5 guests (all my systems are currently CentOS 4/5).
 
 - Jussi
 
 -- 
 Jussi Hirvi * Green Spot
 Topeliuksenkatu 15 C * 00250 Helsinki * Finland
 Tel. +358 9 493 981 * Mobile +358 40 771 2098 (only sms)
 jussi.hi...@greenspot.fi * http://www.greenspot.fi
 



Re: [CentOS] Virtualization platform choice

2011-03-27 Thread RedShift
On 03/27/11 11:57, Jussi Hirvi wrote:
 Some may be bored with the subject - sorry...

 Still not decided about virtualization platform for my webhotel v2
 (ns, mail, web servers, etc.).

 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
 will not be out in time for me - I guess KVM would be more mature in
 CentOS 6.

 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).

 - Jussi


VMware ESXi is definitely a good choice. I use it at work (the free version as
well) and haven't regretted it. No tutorials needed; everything's pretty
straightforward.

Yes, it can only be graphically managed from Windows clients (vSphere Client);
however, the command-line tools are available for Linux as well. I tried running
the vSphere Client under Wine, but that didn't work.

No issues with CentOS 5 guests here.


Glenn


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Jerry Franz
On 03/27/2011 02:57 AM, Jussi Hirvi wrote:
 Some may be bored with the subject - sorry...

 Still not decided about virtualization platform for my webhotel v2
 (ns, mail, web servers, etc.).

 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
 will not be out in time for me - I guess KVM would be more mature in
 CentOS 6.

 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).
I'm currently using Ubuntu Server 10.04 LTS as a host for KVM, running 
CentOS 5.5 guests I migrated from VMware Server 2. Works fine. A nice 
feature of current-generation KVM is that you are supposed to be able to 
do live migration even without shared storage (although I haven't tested 
that yet). I wrote some custom scripts to let me take LVM snapshots 
for whole-image backups, and I'm pretty happy with the whole setup.
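A minimal sketch of what such an LVM-snapshot backup can look like (volume group, LV name and backup path are assumptions, not the poster's actual scripts):

```shell
VG=vg0
LV=guest-web01

# Point-in-time snapshot of the running guest's disk; 2G of
# copy-on-write space absorbs writes made during the backup.
lvcreate --snapshot --size 2G --name ${LV}-snap /dev/${VG}/${LV}

# Stream the frozen image out, compressed.
dd if=/dev/${VG}/${LV}-snap bs=4M | gzip > /backup/${LV}.img.gz

# Drop the snapshot so it stops accumulating COW blocks.
lvremove -f /dev/${VG}/${LV}-snap
```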

The only rough corners I encountered were:

1) A lack of documentation on how to configure bridging over bonded 
interfaces for the host server. It turned out to be fairly easy - just 
not clearly documented anyplace I could find.
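For what it's worth, on an Ubuntu 10.04 host that missing documentation boils down to an /etc/network/interfaces along these lines (interface names and the address are assumptions):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
```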

2) The default configuration for rebooting/shutting down the host server 
just 'shoots the guests in the head' rather than having them shut down 
cleanly. :( You will want to write something to make sure they get 
shut down properly instead.
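One way to avoid that default is a small pre-shutdown hook along these lines (a sketch; the grace period and the assumption that guests honour ACPI shutdown events are both yours to verify):

```shell
#!/bin/sh
# Ask every running libvirt guest to shut down cleanly before the host halts.

# Parse 'virsh list' output: skip the two header lines, print the Name column.
running_guests() {
    awk 'NR > 2 && $2 != "" { print $2 }'
}

if command -v virsh >/dev/null 2>&1; then
    for guest in $(virsh list | running_guests); do
        virsh shutdown "$guest"    # sends an ACPI power-button event
    done
    sleep 120                      # crude grace period before the host continues
fi
```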

-- 
Benjamin Franz


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Drew
 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).

vSphere ESX(i) is a good product. It runs on bare metal, so there is no
OS underneath it. ESX has a Linux-based environment that sort of runs
at the hypervisor level that people use for basic admin, but VMware is
trying to phase that out, as almost everything you can do with ESX's
console can be done through ESXi's APIs and the remote CLI.

The only downside to the free version is that certain APIs are unavailable,
and if you need those features you may have to go to a paid version.


-- 
Drew

Nothing in life is to be feared. It is only to be understood.
--Marie Curie


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Eero Volotinen
2011/3/27 Drew drew@gmail.com:
 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).

 vSphere ESX(i) is a good product. It runs on bare metal, so there is no
 OS underneath it. ESX has a Linux-based environment that sort of runs
 at the hypervisor level that people use for basic admin, but VMware is
 trying to phase that out, as almost everything you can do with ESX's
 console can be done through ESXi's APIs and the remote CLI.

 The only downside to the free version is that certain APIs are unavailable,
 and if you need those features you may have to go to a paid version.

The biggest problem with free ESXi is that it lacks the VCB backup API, so
full-image backups are almost impossible on a free ESXi host.

--
Eero


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Nico Kadel-Garcia
On Sun, Mar 27, 2011 at 8:53 AM, Drew drew@gmail.com wrote:
 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).

 vSphere ESX(i) is a good product. It runs on bare metal, so there is no
 OS underneath it. ESX has a Linux-based environment that sort of runs
 at the hypervisor level that people use for basic admin, but VMware is
 trying to phase that out, as almost everything you can do with ESX's
 console can be done through ESXi's APIs and the remote CLI.

I like vSphere in corporate environments, and LabManager with it for
burning guest images very quickly. The VMware Tools are not as
integrated as I would like, and their RPM names are quite misleading
(the name of the file does not match the name of the actual RPM as
reported by `rpm -qp --qf '%{name}-%{version}-%{release}.%{arch}.rpm\n'`),
and it's not as well integrated for kernel changes or host cloning as
I'd like. (Ask if you're curious.) But for corporate-grade
virtualization, well-built management tools, and corporate support,
they're very hard to beat. And for virtualizing weird old setups, like
SCO OpenServer 5.0.x, they're the only thing I tested that worked.

KVM was a dog in testing under CentOS and RHEL 5.x. The bridged
networking has *NO* network configuration tool that understands how to
set it up; you have to do it manually, and that's a deficit I've
submitted upstream as an RFE. It may work well with CentOS and RHEL 6;
I've not had a chance to test it.

VirtualBox is friendly, lightweight, and I'm using it right now under
MacOS X for a Debian box, and on Windows boxes for testing Linux
environments. Works quite well, friendly interfaces, very quick to
learn; I like it a lot for one-off setups.

Xen, I did a stack of work with for CentOS 4 a few years ago. It
worked well, particularly with the para-virtualized kernels to improve
performance. (Don't virtualize things you don't have to!!! It uses custom
kernels to let the guest and server not waste time virtualizing I/O
requests, especially for disk I/O.) I've not played with its management
tools since, and it didn't work well with virtualizing odd old OS's.
(I wanted to use it for OpenServer, but the support team who came
out to demonstrate it couldn't even get the keyboards interacting
reliably for the installation. It was a complete failure for that
project.)

You've got a lot of choices. I'd start with assessing what you need
for your guest environments, and where it's going to be managed from,
and be sure that you've got access to the management tools.

 Only downside to the free version is certain API's are unavailable and
 if you need those features you may have to go to a paid version.

This is true for everything in life.


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Ryan Wagoner
On Sun, Mar 27, 2011 at 9:16 AM, Eero Volotinen eero.voloti...@iki.fi wrote:
 2011/3/27 Drew drew@gmail.com:
 Any experience with the free VMware vSphere Hypervisor? (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).

 vSphere ESX(i) is a good product. It runs on bare metal, so there is no
 OS underneath it. ESX has a Linux-based environment that sort of runs
 at the hypervisor level that people use for basic admin, but VMware is
 trying to phase that out, as almost everything you can do with ESX's
 console can be done through ESXi's APIs and the remote CLI.

 The only downside to the free version is that certain APIs are unavailable,
 and if you need those features you may have to go to a paid version.

 The biggest problem with free ESXi is that it lacks the VCB backup API, so
 full-image backups are almost impossible on a free ESXi host.

If you have some money to spend, you can solve the backup problem with
VMware's $500 entry-level license. The license gives you the vCenter
Server software, which can manage 3 ESXi hosts and unlocks a number of
capabilities, like cloning and offline migration. For around $1000 per
server you can look at Veeam or Vizioncore for backups. Overall you
can't beat the price for the reliability and ease of use.

Since ESXi is a bare-metal hypervisor, it has fewer security
vulnerabilities discovered, which means fewer reboots of the
host system. I have been using ESXi since 3.5, with around 8 ESXi
servers now and 50 guests. I have not had a crash of an ESXi
host, and the advanced capabilities (vMotion, Storage vMotion and
Enhanced vMotion Capability (EVC)) have just worked; note that these
do not work with the $500 license.

Ryan


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Steve Thompson
On Sun, 27 Mar 2011, Jussi Hirvi wrote:

 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
 will not be out in time for me - I guess KVM would be more mature in
 CentOS 6.

I have been using Xen with much success for several years, now with two 
CentOS 5.5 x86_64 Dom0's, hosting 29 (mixed Linux and Windows) and 30 (all 
Windows) guests respectively, using only packages from the distro along 
with the GPLPV drivers on the Windows guests (so it's Xen 3.1, not the 
latest). A couple of weeks ago I decided (on the first of these hosts) to 
give KVM a look, since I was able to take the machine down for a while. 
All guests use LVM volumes, and were unchanged between Xen and KVM (modulo 
pv drivers). The host is a Dell PE2900 with 24 GB memory and E5345 
processors (8 cores). Bridged mode networking. What follows is obviously 
specific to my environment, so YMMV.

The short story is that I plan to keep using KVM. It has been absolutely 
solid and without any issues whatsoever, and performance is significantly 
better than Xen in all areas that I have measured (and also in the "feels 
good" benchmark). Migration from Xen to KVM was almost trivially simple.

The slightly longer story...

First. With Xen I was never able to start more than 30 guests at one time 
with any success; the 31st guest always failed to boot or crashed during 
booting, no matter which guest I chose as the 31st. With KVM I chose to 
add more guests to see if it could be done, with the result that I now 
have 36 guests running simultaneously.

Second. I was never able to keep a Windows 7 guest running under Xen for 
more than a few days at a time without a BSOD. I haven't seen a single 
crash under KVM.

Third. I was never able to successfully complete a PXE-based installation 
under Xen. No problems with KVM.

Fourth. My main work load consists of a series of builds of a package of 
about 1100 source files and about 500 KLOC, all C and C++. Here are the 
elapsed times (min:sec) to build the package on a CentOS 5 guest (1 vcpu), 
each time with the guest being the only active guest (although the others 
were running). Sources come from NFS, and targets are written to NFS, with 
the host being the NFS server.

* Xen HVM guest (no pv drivers): 29:30
* KVM guest, no virtio drivers: 23:52
* KVM guest, with virtio: 14:38

Fifth: I love being able to run top/iostat/etc on the host and see just 
what the hardware is really up to, and to be able to overcommit memory.

Steve



Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Nico Kadel-Garcia
On Sun, Mar 27, 2011 at 9:41 AM, Steve Thompson s...@vgersoft.com wrote:
 On Sun, 27 Mar 2011, Jussi Hirvi wrote:

 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
 will not be out in time for me - I guess KVM would be more mature in
 CentOS 6.

 I have been using Xen with much success for several years, now with two
 CentOS 5.5 x86_64 Dom0's, hosting 29 (mixed Linux and Windows) and 30 (all
 Windows) guests respectively, using only packages from the distro along
 with the GPLPV drivers on the Windows guests (so it's Xen 3.1, not the
 latest). A couple of weeks ago I decided (on the first of these hosts) to
 give KVM a look, since I was able to take the machine down for a while.
 All guests use LVM volumes, and were unchanged between Xen and KVM (modulo
 pv drivers). The host is a Dell PE2900 with 24 GB memory and E5345
 processors (8 cores). Bridged mode networking. What follows is obviously
 specific to my environment, so YMMV.

 The short story is that I plan to keep using KVM. It has been absolutely
 solid and without any issues whatsoever, and performance is significantly
 better than Xen in all areas that I have measured (and also in the feels
 good benchmark). Migration from Xen to KVM was almost trivially simple.

 The slightly longer story...

 First. With Xen I was never able to start more than 30 guests at one time
 with any success; the 31st guest always failed to boot or crashed during
 booting, no matter which guest I chose as the 31st. With KVM I chose to
 add more guests to see if it could be done, with the result that I now
 have 36 guests running simultaneously.

 Second. I was never able to keep a Windows 7 guest running under Xen for
 more than a few days at a time without a BSOD. I haven't seen a single
 crash under KVM.

 Third. I was never able to successfully complete a PXE-based installation
 under Xen. No problems with KVM.

How did you get the PXE working? I had real problems. Mind you, that
was RHEL 5.4 and CentOS 5.4 for the server host, so it may have
improved.

And do you have widgets for setting up the necessary bridged
networking? I left mine behind at a consulting gig, and haven't asked
for copies of them.


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Steve Thompson
On Sun, 27 Mar 2011, Nico Kadel-Garcia wrote:

 How did you get the PXE working?

I already had a PXE server for physical hosts, so I just did a 
virt-install with the --pxe switch, and it worked first time. The MAC 
address was pre-defined and known to the DHCP server. I installed both 
Linux and Windows guests with PXE.
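
For reference, a PXE install of the kind described above can be kicked off
with something along these lines (a sketch only; the guest name, memory
size, LVM volume and bridge name are hypothetical placeholders):

```
# Hypothetical virt-install invocation for a PXE-booting KVM guest.
# Name, RAM, disk path and bridge are placeholders; adjust to taste.
virt-install \
    --name=guest01 \
    --ram=1024 \
    --vcpus=1 \
    --disk path=/dev/vg0/guest01 \
    --network bridge=br0 \
    --pxe
```

With the guest's MAC pre-registered in DHCP (add mac=... to the --network
option), the installer fetches pxelinux.0 just as a physical host would.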

 And do you have widgets for setting up the necessary bridged
 networking?

I edited the ifcfg-eth0 file on the host and added an ifcfg-br0, all by 
hand, and then rebooted. I didn't have to think about it again.

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=xx:xx:xx:xx:xx:xx
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=0

/etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
BROADCAST=braddr
IPADDR=ipaddr
NETMASK=netmask
NETWORK=network
ONBOOT=yes

For each guest, something like this was used:

 <interface type='bridge'>
   <mac address='52:54:00:1d:58:cf'/>
   <source bridge='br0'/>
   <model type='virtio'/>
 </interface>
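
After rebooting into a config like the above, the bridge can be
sanity-checked from the host; a quick sketch (interface names will vary):

```
# List bridges and their member interfaces: eth0 should be enslaved to
# br0, and one vnetN tap device appears per running guest.
brctl show

# The host's IP address should live on br0, not on eth0.
ip addr show br0
```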

Steve


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread compdoc
 KVM was a dog in testing under CentOS and RHEL 5.x. The bridged
 networking has *NO* network configuration tool that understands
 how to set it up; you have to do it manually, and that's a deficit I've
 submitted upstream as an RFE. It may work well with CentOS and
 RHEL 6; I've not had a chance to test it.

Back when I was searching for a suitable virtualization platform, I found no
difference in performance between Xen and KVM. I liked both, but settled on
KVM.

ESXi back then was very limited in hardware support, so I never got to play
with it much. People seem to like it.

And it's true that bridged networking support in CentOS 5 requires that you
set it up manually, but that's what led me to learn ifcfg scripts. It's so
simple.





Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Les Mikesell
On 3/27/11 4:57 AM, Jussi Hirvi wrote:
 Some may be bored with the subject - sorry...

 Still not decided about virtualization platform for my webhotel v2
 (ns, mail, web servers, etc.).

 KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
 will not be out in time for me - I guess KVM would be more mature in
 CentOS 6.

 Any experience with the free VMware vSphere Hypervisor?. (It was
 formerly known as VMware ESXi Single Server or free ESXi.)

 http://www.vmware.com/products/vsphere-hypervisor/overview.html

 I would need a tutorial about that... For example, does that run without
 a host OS? Can it be managed only via Win clients? Issues with CentOS
 4/5 guests (all my systems are currently CentOS 4/5).

The free ESXi version is very good.  While it doesn't support all the bells and 
whistles of the paid version, there are hardly any disadvantages compared to 
running the VMs on physical machines.  You do have to use a windows box to 
manage it (with the advantage of being able to use the media on the client for 
the install source), but once the guest networking is up you can use whatever 
you would use for remote access to a physical box (vnc, ssh, X, freenx, etc.) 
directly to the guest - so the windows box doesn't have to be server-quality or 
available all the time.  You might even be able to use the converter tool to 
migrate your running systems there - I've usually been able to do that with 
windows systems but couldn't get it to recognize my linux boxes with software 
raid (and didn't try any others since they aren't that hard to re-create). By 
the way, the current version of ESXi permits ssh without the 'unsupported' hack 
so you can copy images over scp or to/from an nfs mount, but it's not all that 
much faster than running the converter tool on another machine anyway.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread David Brian Chait

 Biggest problem with free ESXi is that it lacks the VCB backup API, so
 full image backups are almost impossible under a free ESXi host.


Not true at all, I use the ghettovcb script in the console and it works fine.


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Nataraj
On 03/27/2011 07:10 AM, Steve Thompson wrote:
 On Sun, 27 Mar 2011, Nico Kadel-Garcia wrote:

 How did you get the PXE working?
 I already had a PXE server for physical hosts, so I just did a 
 virt-install with the --pxe switch, and it worked first time. The MAC 
 address was pre-defined and known to the DHCP server. I installed both 
 Linux and Windows guests with PXE.

 And do you have widgets for setting up the necessary bridged
 networking?
 I edited the ifcfg-eth0 file on the host and added an ifcfg-br0, all by 
 hand, and then rebooted. I didn't have to think about it again.

 /etc/sysconfig/network-scripts/ifcfg-eth0:
   DEVICE=eth0
   HWADDR=xx:xx:xx:xx:xx:xx
   ONBOOT=yes
   BRIDGE=br0
   NM_CONTROLLED=0

 /etc/sysconfig/network-scripts/ifcfg-br0:
   DEVICE=br0
   TYPE=Bridge
   BOOTPROTO=static
   BROADCAST=braddr
   IPADDR=ipaddr
   NETMASK=netmask
   NETWORK=network
   ONBOOT=yes

 For each guest, something like this was used:

  <interface type='bridge'>
    <mac address='52:54:00:1d:58:cf'/>
    <source bridge='br0'/>
    <model type='virtio'/>
  </interface>

 Steve

I set up a PXE boot server based on the instructions found here:
https://help.ubuntu.com/community/PXEInstallMultiDistro
It works fine for both physical machines and KVM VMs. My PXE boot
server is running under Ubuntu 10.04.2 in a KVM VM. If your PXE boot
server needs to run under RedHat/CentOS, then you'll need to
locate/install the packages mentioned on that web page, which should be
pretty straightforward.

Note the PXE boot server works fine with the install CDs and DVDs for
all of the distributions that I've tried: RedHat, Fedora and Ubuntu. To
boot live CDs I believe you need to convert the entire image into a
tftp boot image, which I think can be done using the Fedora live CD
creator tool (maybe it's in RedHat now as well).

I make the CD/DVD image available via NFS.  On the PXE host I simply
mount the ISO image under the NFS /export directory.  Most of the
install distributions provide the tftp images for pxeboot.  On the
redhat 6 CD you'll find vmlinuz and initrd.img in the /images/pxeboot
directory.
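
As a sketch of what such a setup looks like on the tftp side, a
pxelinux.cfg/default entry for a network install might resemble the
following (the label, paths and NFS server address here are hypothetical):

```
# Hypothetical pxelinux.cfg/default fragment; adjust paths and addresses.
default menu.c32
prompt 0
timeout 100

label centos5
    menu label Install CentOS 5 (NFS)
    kernel centos5/vmlinuz
    append initrd=centos5/initrd.img method=nfs:192.168.1.10:/export/centos5
```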

I just recently installed KVM virtualization on several RedHat 6 hosts
and one under Scientific Linux. The latest version of virt-manager
(with recent updates installed) now supports setting up bridge devices
from the GUI. You'll want to make sure to use virtio for performance,
and to disable the TSO and GSO TCP offload functions present in many
ethernet cards, which I do with the upstart script listed below.

I am so far quite happy with kvm and happy to be able to run my
management interface under linux instead of windows.

# disable-tcpoffload - upstart script to modify TCP offload config for
# virtualization

description disable-tcpoffload

start on started rc RUNLEVEL=[2345]
stop on stopped rc RUNLEVEL=[!2345]

task

console output
# env INIT_VERBOSE

script
set +e
for interface in eth0 eth1 eth2 eth3; do
/sbin/ethtool -K $interface gso off
/sbin/ethtool -K $interface tso off
done
end script
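
On a RedHat/CentOS host, which uses SysV init rather than upstart,
roughly the same effect could be had from /etc/rc.local; a sketch (the
interface list is a placeholder):

```
# /etc/rc.local fragment: disable GSO/TSO offloading on the bridged NICs.
for interface in eth0 eth1; do
    /sbin/ethtool -K $interface gso off
    /sbin/ethtool -K $interface tso off
done
```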


Nataraj







Re: [CentOS] Virtualization platform choice

2011-03-27 Thread John R Pierce
On 03/27/11 2:57 AM, Jussi Hirvi wrote:
 Any experience with the free VMware vSphere Hypervisor?. (It was
 formerly known as VMware ESXi Single Server or free ESXi.)


one downside to ESXi: it does not support any sort of software RAID.
Normally ESX is used with a SAN, which provides all RAID functionality,
or with NFS-based storage (again, the NFS server providing the RAID),
but if you use it with direct-attached storage, you had better have a
supported hardware RAID controller. Most server kit from the big
vendors (HP, Dell, IBM) is fully supported. You can boot ESXi from a
small CF card, as once it's booted, it doesn't touch the boot device at all.




Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Joseph L. Casale
   You can boot ESXi from a
 small CF card, as once it's booted, it doesn't touch the boot device at all.

Yes it does; there are cron jobs for config backups etc.
How else would it remember config changes in a non-stateless deployment?

~ # cat /var/spool/cron/crontabs/root
#syntax : minute hour day month dayofweek command
01 01 * * * /sbin/tmpwatch.sh
01 * * * * /sbin/auto-backup.sh #first minute of every hour (run every hour)


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Rudi Ahlers
On Sun, Mar 27, 2011 at 11:07 PM, Jure Pečar pega...@nerv.eu.org wrote:
 On Sun, 27 Mar 2011 10:42:36 -0500
 Les Mikesell lesmikes...@gmail.com wrote:

 On 3/27/11 4:57 AM, Jussi Hirvi wrote:
  Some may be bored with the subject - sorry...
 
  Still not decided about virtualization platform for my webhotel v2
  (ns, mail, web servers, etc.).

 It's interesting that nobody so far mentioned openVZ or its commercial
 version, Virtuozzo. It's different than all major virtualization players
 (it's OS level virtualization, not hw level), but that makes it the only
 viable option for things like mass web hosting solutions.

 Try it out and see if it fits your requirements.





OpenVZ / Virtuozzo is a joke and shouldn't be used for production
purposes. Especially not in mass web hosting solutions - that's just
asking for trouble.



-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread David Brian Chait
 It's interesting that nobody so far mentioned openVZ or its commercial
 version, Virtuozzo. It's different than all major virtualization players
 (it's OS level virtualization, not hw level), but that makes it the only
 viable option for things like mass web hosting solutions.

 Try it out and see if it fits your requirements.

The two things that always come to mind when I am considering a virtualization 
solution are the extent of the tool set/support and the general acceptance of 
the technology. For those two reasons I nearly always implement VMware: it has 
a mature set of tools, a wide range of functionality, and sits at a very 
manageable price point. No true open source solution really comes close; you 
can approximate parts of it, but in the end it turns into a support issue that 
can pose a significant headache.



Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Jure Pečar
On Sun, 27 Mar 2011 14:21:45 -0700
David Brian Chait dch...@invenda.com wrote:

 The two things that always comes to mind when I am considering a
 virtualization solution is extent of tool set/support, and the general
 acceptance of the technology. For those two reasons I nearly always
 implement VMware, it has a mature set of tools, a wide range of
 functionality, and sits at a very manageable price point. No true open
 source solution really comes close, you can approximate parts of it, but
 in the end it turns into a support issue that can pose a significant
 headache.

True.

I've deployed Virtuozzo for a large web hosting company and found it
superior to vmware in about every aspect that mattered in a web hosting
environment.

OpenVZ on the other hand is at about the same level as Xen, KVM and
similar open source solutions. Well, not so much a solution as a building
block to help you build your own solution to your particular problem.


-- 

Jure Pečar
http://jure.pecar.org
http://f5j.eu


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Eero Volotinen
 I've deployed Virtuozzo for a large web hosting company and found it
 superior to vmware in about every aspect that mattered in a web hosting
 environment.

Well.. eh, as you might know, virtuozzo/openvz does not provide kernel
isolation. Mainly this means that one kernel exploit can provide full access
to all openvz/virtuozzo containers.


--
Eero


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Rudi Ahlers
On Sun, Mar 27, 2011 at 11:50 PM, Eero Volotinen eero.voloti...@iki.fi wrote:
 I've deployed Virtuozzo for a large web hosting company and found it
 superior to vmware in about every aspect that mattered in a web hosting
 environment.

 Well.. eh, as you might know, virtuozzo/openvz does not provide kernel
 isolation. Mainly this means that one kernel exploit can provide full access
 to all openvz/virtuozzo containers.





... and one overloaded container can take down the whole server as well.


-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Jure Pečar
On Mon, 28 Mar 2011 00:10:45 +0200
Rudi Ahlers r...@softdux.com wrote:

 On Sun, Mar 27, 2011 at 11:50 PM, Eero Volotinen eero.voloti...@iki.fi
 wrote:
  I've deployed Virtuozzo for a large web hosting company and found it
  superior to vmware in about every aspect that mattered in a web hosting
  environment.
 
  Well.. eh, as you might know, virtuozzo/openvz does not provide
  kernel isolation. Mainly this means that one kernel exploit can provide
  full access to all openvz/virtuozzo containers.
 

The same is true for solutions like vmware. Just google for all the blue
pill talks. It's a theoretical risk that is small enough to be irrelevant.

 
 ... and one overloaded container can take down the whole server as
 well.

That's simply FUD. 


-- 

Jure Pečar
http://jure.pecar.org
http://f5j.eu


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Eero Volotinen
  Well.. eh, as you might know, virtuozzo/openvz does not provide
  kernel isolation. Mainly this means that one kernel exploit can provide
  full access to all openvz/virtuozzo containers.
 

 The same is true for solutions like vmware. Just google for all the blue
 pill talks. It's a theoretical risk that is small enough to be irrelevant.

Web servers running buggy PHP software provide an (easy) way to execute
local kernel exploits.


--
Eero


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Jure Pečar
On Mon, 28 Mar 2011 01:26:38 +0300
Eero Volotinen eero.voloti...@iki.fi wrote:

  The same is true for solutions like vmware. Just google for all the
  blue pill talks. It's a theoretical risk that is small enough to be
  irrelevant.
 
 Web servers running buggy PHP software provide an (easy) way to execute
 local kernel exploits.

Yes, and I've dealt with my share of them. None ever exploited a
virtuozzo vulnerability. In fact many of them failed because they were
run on a VPS.

I trust the Russian guys to know their way around C code and kernel fu.
That's why many of the virtuozzo core parts are becoming part of the
Linus tree, cgroups being just one off the top of my head.

See, there's even a nice wiki article:
http://wiki.centos.org/HowTos/Virtualization/OpenVZ

I understand that vmware has a much stronger marketing machine; however,
that does not mean that their technology is somehow better. Their
offering is a reasonable choice for many scenarios in IT; mass web
hosting is unfortunately not one of them. As any competent admin will
tell you, use the right tool for the right job. It's good to have choice.

-- 

Jure Pečar
http://jure.pecar.org
http://f5j.eu


Re: [CentOS] Virtualization platform choice

2011-03-27 Thread David Brian Chait

 I understand that vmware has much stronger marketing machine, however that
 does not mean that their technology is somehow better. Their offer is a
 reasonable choice for many scenarios in IT, mass web hosting is
 unfortunately not one of them. As any competent admin will tell you, use
 the right tool for the right job. It's good to have choice.

You can't honestly be comparing a $2.6B/year corporation that
sells/develops enterprise-scale products serving nearly all of the
Fortune 500 to a virtually unknown product sold by the maker of
Parallels. They are not even close to being in the same league.



Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Rob Kampen

David Brian Chait wrote:

I understand that vmware has much stronger marketing machine, however that
does not mean that their technology is somehow better. Their offer is a
reasonable choice for many scenarios in IT, mass web hosting is
unfortunately not one of them. As any competent admin will tell you, use
the right tool for the right job. It's good to have choice.



You can't honestly be comparing a 2.6b / year corporation that sells/develops 
enterprise scale products that serve nearly all of the Fortune 500 to a 
virtually unknown product sold by the maker of Parallels. They are not even 
close to being in the same league.
  
Using that logic, Micro$oft must be providing the best software on the 
planet ;-)



Re: [CentOS] Virtualization platform choice

2011-03-27 Thread Chuck Munro


On 03/27/2011 09:00 AM, Jerry Franz wrote:

 On 03/27/2011 02:57 AM, Jussi Hirvi wrote:
   Some may be bored with the subject - sorry...
 
   Still not decided about virtualization platform for my webhotel v2
   (ns, mail, web servers, etc.).
 
   KVM would be a natural way to go, I suppose, only it is too bad CentOS 6
   will not be out in time for me - I guess KVM would be more mature in
   CentOS 6.
 
   Any experience with the free VMware vSphere Hypervisor?. (It was
   formerly known as VMware ESXi Single Server or free ESXi.)
 
   http://www.vmware.com/products/vsphere-hypervisor/overview.html
 
   I would need a tutorial about that... For example, does that run without
   a host OS? Can it be managed only via Win clients? Issues with CentOS
   4/5 guests (all my systems are currently CentOS 4/5).
 I'm currently using Ubuntu Server 10.04-LTS as a host for KVM running
 CentOS5.5 guests I migrated from VMware Server 2. Works fine. A nice
 feature of current generation KVM is that you are supposed to be able to
 do live migration even without shared storage (although I haven't tested
 that yet). I wrote some custom scripts to allow me to take LVM snapshots
 for whole-image backups and I'm pretty happy with the whole setup.

 The only corners I encountered were

 1) A lack of documentation on how to configure bridging over bonded
 interfaces for the host server. It turned out to be fairly easy - just
 not clearly documented anyplace I could find.

 2) The default configuration for rebooting/shutting down the host server
 just 'shoots the guests in the head' rather than having them shut down
 cleanly. :(  You will want to write something to make sure they get
 shut down properly instead.
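
One way to handle point 2 is a shutdown hook that asks each guest to
power off cleanly before the host goes down; a rough sketch using virsh
(untested, and it assumes a libvirt new enough to support
`virsh list --name`):

```
# Politely ask every running libvirt guest to shut down via ACPI...
for guest in $(virsh list --name); do
    virsh shutdown "$guest"
done

# ...then give them up to 120 seconds to finish before the host continues.
for i in $(seq 1 120); do
    [ -z "$(virsh list --name | tr -d '[:space:]')" ] && break
    sleep 1
done
```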

Once in a while I find it's useful to compromise just a little, so I use 
Scientific Linux 6 as the Host OS, and run a bunch of CentOS-5.5 Guest 
VMs.  It all simply works.

KVM has improved quite a bit, and the management tools work well.  One 
thing that requires a bit of skill is getting bridging configured (which 
I simply did by hand from the RHEL-6 documentation).

I'm happy with the result, and see no reason to replace the underlying 
SL-6 Host distro.

SL-6 as the Host is rather slow to shut down gracefully and reboot, 
because it hibernates the Guest OSs, one at a time, rather than just 
killing them.  Hibernation takes a while to write out to disk if you've 
assigned a lot of RAM to the Guests.  Bootup has to restore the saved 
state, so that's a bit slow too.  But it works very well.

I use partitionable RAID arrays for the Guests, and assign a raw md 
device to each one rather than using the 'filesystem-in-a-file' method. 
It seems to be a bit faster, but there's a learning curve to 
understanding how it works.
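
For what it's worth, the raw-device approach described above corresponds
to a libvirt disk definition along these lines (a sketch; the md device
path is hypothetical):

```
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/md10'/>
  <target dev='vda' bus='virtio'/>
</disk>
```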

One thing I found a bit annoying is the very long time it takes for a 
Guest to format its filesystems on the RAID-6 md device assigned to it. 
That's mostly due to array checksum overhead. RAID-10 would be a 
*lot* faster but somewhat less robust ... you pick what's best for your 
own situation.

Chuck