Re: Windows 7 guest installer does not detect drive if physical partition used instead of disk file.

2015-03-23 Thread Emmanuel Noobadmin
On 3/23/15, Stefan Hajnoczi stefa...@gmail.com wrote:
 I have CCed the libvirt mailing list, since KVM is a component here but
 your question seems to be mainly about libvirt, virt-manager,
 virt-install, etc.

Apologies for posting to the wrong list; I assumed it was KVM-related
since the guest could run but could not see the drive.

More information:
1. Install guest with /dev/sdxx as a virtio device (the problem case)
- installer does not see any drive
- load drivers from the Red Hat virtio driver CD
- installer still does not see any drive

2. Install guest with a qcow2 disk file as a virtio device
- as in the previous scenario, but the installer sees the drive after the drivers are loaded

3. Install guest with a qcow2 disk file as an IDE device
- complete the installation
- add /dev/sdxx as a virtio disk
- go to Windows Device Manager and update the virtio driver for the unknown controller
- Windows sees /dev/sdxx after the driver is installed
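The attach step in scenario 3 can also be done from the command line; a hypothetical sketch using virsh (the domain name `win7` and partition `/dev/sda5` are placeholders, not the actual values used):

```shell
# Attach a physical partition to the running guest as a second virtio disk.
# Windows will only see it once the Red Hat virtio driver has been
# installed via Device Manager, as described above.
virsh attach-disk win7 /dev/sda5 vdb \
    --targetbus virtio \
    --persistent    # also record the disk in the stored domain XML
```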


 It sounds like you want an NTFS partition on /dev/sda.  That requires
 passing the whole /dev/sda drive to the guest - and the Windows
 installer might overwrite your GRUB Master Boot Record.  Be careful when
 trying to do this.

Yes, I wanted to give Windows its own native partition that could be
read directly if I had to yank the disk and put it into a Windows
machine. Is this why #3 works but not #1? That as long as I want to
install Windows directly to an NTFS partition on /dev/sda, I am
required to pass the whole drive to Windows?
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Windows 7 guest installer does not detect drive if physical partition used instead of disk file.

2015-03-20 Thread Emmanuel Noobadmin
Running
3.18.9-200.fc21.x86_64
qemu 2:2.1.3-3.fc21
libvirt 1.2.9.2-1.fc21
System is a Thinkpad X250 with Intel i7-5600u Broadwell GT2

I'm trying to replace the Win7 installation on my laptop with Fedora
21 and virtualize Windows 7 for work purposes. I'd prefer to give the
guest its own NTFS partition instead of using a file, for both
performance and ease of potential recovery.

So I've set aside unpartitioned space on the hard disk, added /dev/sda
to the virt-manager storage pool, created a new volume and assigned it
to the guest as an IDE drive. Unfortunately, the Windows 7 installer
does not see this drive despite it being IDE and not virtio. If I use a
qcow2 file as the drive, the installer has no problem detecting it.

To eliminate virt-manager from the equation, I've also tried a very
basic install using virt-install, with similar results: the physical
partition cannot be detected regardless of bus type (IDE/SATA/virtio),
even with the signed Red Hat virtio drivers loaded in the installer.
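For the record, the sort of virt-install invocation I mean looks like this (name, memory size, and paths are placeholders, not the exact command used):

```shell
virt-install \
    --name win7 \
    --ram 2048 \
    --disk path=/dev/sda5,bus=virtio \
    --cdrom /var/lib/libvirt/images/win7.iso \
    --os-variant win7
```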

I was unable to find any similar issues or solutions online, except a
two-year-old thread on linuxquestions which claimed that the whole
disk must be specified instead of a partition. However, I cannot find
the source of that claim.
http://www.linuxquestions.org/questions/linux-virtualization-and-cloud-90/qemu-kvm-on-a-real-partition-947162/

Is this really the case and the reason why Windows 7 cannot see the
physical partition, or is there something else I am doing wrong?


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-23 Thread Emmanuel Noobadmin
On 6/22/12, Stefan Hajnoczi stefa...@gmail.com wrote:
 Thanks for investigating and sharing the information you've found.
 It's archived on the list so anyone who hits it in the future or wants
 to reproduce it can try.

I decided to give it one more try before formatting that machine and
trying the rpm method. Thankfully, with libvirt-0.9.12.tar.gz, it
appeared to install correctly on my first try. Both virsh --version
and libvirtd --version report 0.9.12, so I assume the VMs are running
on the newer libvirt.

However, the problem still exists, so at least up to that version it
does not appear to have been resolved.


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-21 Thread Emmanuel Noobadmin
On 6/20/12, Stefan Hajnoczi stefa...@gmail.com wrote:
 Anyway, once you've tried qemu.git/master we'll know whether the bug
 still exists and with all the info you've shared maybe Gerd (USB
 maintainer) will know what the issue is.

Sadly, my noobness meant that during the hours I had onsite, I could
only get libvirt compiled but could not get things to work. There were
some errors about qemu needing to be compiled with/for kalj, unknown
OS hvm, and then a whole bunch of other errors when I tried to connect
to qemu/kvm. The only time things loaded, it turned out to be another
noob error that ended up with 0.8.7 loaded instead of the git version.

Pretty much expected, and largely why I dread any switch from stock :D

Unfortunately, I have to get this machine repurposed over this weekend,
so it's unlikely I will have time to figure out how to install a newer
version of libvirt. So hopefully the problem has been fixed, or
somebody else can replicate this with better success.


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-20 Thread Emmanuel Noobadmin
On 6/18/12, Stefan Hajnoczi stefa...@gmail.com wrote:
 I believe the call is coming from hw/usb/host-linux.c:async_complete()
 but am not using the same source tree as your qemu-kvm so I could be
 off.  The code suggests that QEMU also logs an error message
 (USBDEVFS_REAPURBNDELAY: Inappropriate ioctl for device) when this
 happens.  If you want, check the libvirt log file for this guest - it
 probably has tons of these messages in it.

I did not have time to try a new version of QEMU but managed to do one
plug/pull test. However, there was no error message in the guest log.
The last line in the guest log while this was happening was the
following, and nothing else subsequently:
husb: 1 interfaces claimed for configuration 1


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-19 Thread Emmanuel Noobadmin
On 6/18/12, Stefan Hajnoczi stefa...@gmail.com wrote:
 off.  The code suggests that QEMU also logs an error message
 (USBDEVFS_REAPURBNDELAY: Inappropriate ioctl for device) when this
 happens.  If you want, check the libvirt log file for this guest - it
 probably has tons of these messages in it.

I'll check for the above log entries when I next get to that site.

 Emmanuel: You mentioned that upgrading to a newer version might not be
 worth it, but if you're willing to test qemu.git/master just to see if
 the problem has already been fixed then that would be helpful.
 http://wiki.qemu.org/Download

I could try this, since the original host is actually being
repurposed, which was why I was doing the data transfer via USB. So it
would be OK to play with newer versions when I go there next.


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-15 Thread Emmanuel Noobadmin
On 6/13/12, Stefan Hajnoczi stefa...@gmail.com wrote:
 Since system time is a large chunk you could use strace -f -p $(pgrep
 qemu-kvm) or other system call tracing tools to see what the qemu-kvm
 process is doing.

The command you gave didn't work, so I replaced $(pgrep) with the PID
of the process running the VM, after checking that -p was the PID
option.

strace -f -p 19424 produces the following repeating lines:

[pid 19424] ioctl(0, USBDEVFS_REAPURBNDELAY, 0x7fff8fc43d48) = -1
ENOTTY (Inappropriate ioctl for device)
[pid 19424] timer_gettime(0x2, {it_interval={0, 0}, it_value={0, 0}}) = 0
[pid 19424] timer_settime(0x2, 0, {it_interval={0, 0}, it_value={0,
25}}, NULL) = 0
[pid 19424] timer_gettime(0x2, {it_interval={0, 0}, it_value={0, 196501}}) = 0
[pid 19424] select(27, [7 10 15 18 20 21 22 23 24 25 26], [16], [],
{1, 0}) = 3 (in [7 18], out [16], left {0, 95})
[pid 19424] read(18, \1\0\0\0\0\0\0\0, 4096) = 8
[pid 19424] read(18, 0x7fff8fc42d50, 4096) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 19424] ioctl(0, USBDEVFS_REAPURBNDELAY, 0x7fff8fc43d48) = -1
ENOTTY (Inappropriate ioctl for device)
[pid 19424] read(7, \0, 512)  = 1
[pid 19424] read(7, 0x7fff8fc43b50, 512) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 19424] select(27, [7 10 15 18 20 21 22 23 24 25 26], [16], [],
{1, 0}) = 2 (in [20], out [16], left {0, 94})
[pid 19424] read(20,
\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\2\0\0\0\0\0\0\0...,
128) = 128
[pid 19424] rt_sigaction(SIGALRM, NULL, {0x7f5d28f6faa0, ~[KILL STOP
RTMIN RT_1], SA_RESTORER, 0x7f5d288d94a0}, 8) = 0
[pid 19424] write(8, \0, 1)   = 1
[pid 19424] write(19, \1\0\0\0\0\0\0\0, 8) = 8
[pid 19424] read(20, 0x7fff8fc43cc0, 128) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 19424] ioctl(0, USBDEVFS_REAPURBNDELAY, 0x7fff8fc43d48) = -1
ENOTTY (Inappropriate ioctl for device)


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-15 Thread Emmanuel Noobadmin
On 6/14/12, Veruca Salt verucasal...@hotmail.co.uk wrote:
  qemu-kvm-0.12.1.2-2.209.el6_2.4.x86_64

 We had the same problem with 0.13
 We were using it on Sandy Bridge motherboards when it happened. It was an
 issue then, but we changed to 1.0 a long time ago.
 Why are you using 0.12 years after it was replaced?

It's the default on the EL6-based distributions, I think, and I don't
really want to change from the defaults unless I really have to; 0.12
has worked fine so far. This bug is a minor inconvenience but nothing
show-stopping yet.

Plus, I'm not a full-time server admin, and diverging from the
standard stack tends to backfire on me later, especially if I overlook
noting down some minor but crucial detail :D


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-13 Thread Emmanuel Noobadmin
On 6/12/12, Stefan Hajnoczi stefa...@gmail.com wrote:

Further tests were done on the following set only:
 qemu-kvm-0.12.1.2-2.209.el6_2.4.x86_64
 on SLES 6, 2.6.32-220.7.1.el.x86_64  (Intel 82801JI ICH10)

 1. VMM add physical host usb device - select storage to guest
 2. VMM remove hardware
 3. Physically remove the USB storage from the host, thread/core
 assigned to guest goes 100%

 Two clarifications:

 1. Can you confirm that the 100% CPU utilization only happens in Step
 #3?  For example, if it happened in Step #2 that would suggest the
 guest is entering a loop.  Step #3 suggests the host is entering a
 loop.

Verified Step #3 triggers the issue.

 2. Please run top(1) on the host during high CPU utilization to
 confirm which process is causing high CPU utilization.

Verified it is the VM's process. If unpinned, the utilization floats
around the cores; if pinned, the 100% load stays with that physical
core. Load on the core stabilizes at around 32% usr / 67% sys if the
VM is active. Pausing the VM makes it go to around 80+% sys.


Other info
selinux: no difference between enforcing/permissive

This does NOT happen if Step #2 is not done, i.e. simply yanking the
USB drive physically causes no problem. The USB device must be removed
from the guest in order for this to trigger.


Re: Bug? 100% load on core after physically removing USB storage from host

2012-06-12 Thread Emmanuel Noobadmin
On 6/12/12, Stefan Hajnoczi stefa...@gmail.com wrote:
 After some testing, the only steps needed are
 1. VMM add physical host usb device - select storage to guest
 2. VMM remove hardware
 3. Physically remove the USB storage from the host, thread/core
 assigned to guest goes 100%

 Two clarifications:

 1. Can you confirm that the 100% CPU utilization only happens in Step
 #3?  For example, if it happened in Step #2 that would suggest the
 guest is entering a loop.  Step #3 suggests the host is entering a
 loop.

Yes, it's confirmed that #3 has to be done. I had top running in both
guest and host while replicating this. If the drive is left physically
attached to the host machine, nothing unusual happens. The change in
load level is almost immediate upon physical removal.

Within the guest, the load is basically 0; it is a single-core, very
lightly loaded guest. On the host, top shows 100% CPU utilization on
the relevant qemu-kvm process. Unfortunately, I did not think to use
the (1) key to display the individual physical cores, so I can't say
if the load was really just on that core.

The only possibly relevant detail is that on the SLES set, I had the
VM pinned to a specific core and the VMM GUI showed a load graph of
only 25%. On the CentOS 6.2 set, it was not pinned and the load graph
went to 100%. But in both cases, top output showed 100% for the
process.


 2. Please run top(1) on the host during high CPU utilization to
 confirm which process is causing high CPU utilization.

Not physically at the machines now so I can only verify this tomorrow.


Bug? 100% load on core after physically removing USB storage from host

2012-06-11 Thread Emmanuel Noobadmin
After removing a USB flash drive using Virtual Machine Manager, I
noticed that the core assigned to the VM guest goes up to 100% load.
Within the guest itself, there is no significant activity.

This also prompted me to look at the other physical machine on which I
had used the USB flash drive to transfer files, and it was exhibiting
the same problem.

Installed versions are
qemu-kvm-0.12.1.2-2.209.el6_2.5.x86_64
on CentOS 6.2, 2.6.32-220.17.1.el6.x86_64 (Intel C204 PCH)

qemu-kvm-0.12.1.2-2.209.el6_2.4.x86_64
on SLES 6, 2.6.32-220.7.1.el.x86_64  (Intel 82801JI ICH10)

There are no error messages in the log files and things seem to be
working except for the fully loaded core.

After some testing, the only steps needed are:
1. In VMM, add a physical host USB device and assign the storage to the guest
2. In VMM, remove the hardware
3. Physically remove the USB storage from the host; the thread/core
assigned to the guest goes to 100%
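The same three steps can be driven from virsh instead of the VMM GUI; a sketch, with a placeholder domain name and placeholder USB vendor/product IDs (take the real ones from lsusb):

```shell
# usb-stick.xml: the host USB device, identified by (placeholder) IDs
cat > usb-stick.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
EOF

virsh attach-device guest1 usb-stick.xml   # step 1: pass the device to the guest
virsh detach-device guest1 usb-stick.xml   # step 2: remove it from the guest
# step 3: physically unplug the drive and watch the qemu-kvm process in top
```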

Repeating the same steps without restarting the guest causes cpu
utilization to drop back to normal for about a second or so before
going back up again.

The problem goes away if I restart the guest. As both machines are
based on RHEL, I checked the Red Hat bug tracker, but there doesn't
seem to be anything related, except one issue about
hotplugging/unplugging a USB controller more than 1000 times.

Is this a bug, or is there actually something else I am supposed to do
before removing a physical device from a guest?

Also, is there any way to get the core/thread back to normal without
restarting the guest?


Re: [kvm] Re: [kvm] Re: Questions about duplicate memory work

2011-09-27 Thread Emmanuel Noobadmin
On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
 On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin wrote:
 It's unrelated to what you're actually using as the disks, whether
 file or block devices like LVs. I think it just makes KVM tell the
 host not to cache I/O done on the storage device.

 Wait, hold on, I think I had it backwards.

 It tells the *host* to not cache the device in question, or the
 *VMs* to not cache the device in question?

I'm fairly certain it tells qemu not to cache the device in question.
If you don't want the guests to cache their I/O, the guest OS should
be configured accordingly, if it allows that. Although I'm not sure
it's possible to disable disk buffering/caching system-wide in Linux.


Re: [kvm] Re: Questions about duplicate memory work

2011-09-26 Thread Emmanuel Noobadmin
On 9/26/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
 On Mon, Sep 26, 2011 at 01:49:18PM +0800, Emmanuel Noobadmin wrote:
 On 9/25/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
 
  OK, so I've got a Linux host, and a bunch of Linux VMs.
 
  This means that the host *and* all tho VMs do their own disk
  caches/buffers and do their own swap as well.

 If I'm not wrong, that's why the recommended and current default
 in libvirtd is to create storage devices with no caching to remove
 one layer of duplication.

 How do you do that?  I have my VMs using LVs created on the host as
 their disks, but I'm open to other methods if there are significant
 advantages.

It's unrelated to what you're actually using as the disks, whether
files or block devices like LVs. I think it just makes KVM tell the
host not to cache I/O done on the storage device. To do so, just use
the option cache=none when specifying the storage, e.g. from
http://www.linux-kvm.org/page/Tuning_KVM
 qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

or edit the cache attribute in the libvirt domain XML file if you're using that.
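In the domain XML, the cache mode lives on the <driver> element of the disk definition; a sketch (device names are examples):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/mapper/ImagesVolumeGroup-Guest1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```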


Re: Questions about duplicate memory work

2011-09-25 Thread Emmanuel Noobadmin
On 9/25/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:

 OK, so I've got a Linux host, and a bunch of Linux VMs.

 This means that the host *and* all tho VMs do their own disk
 caches/buffers and do their own swap as well.

If I'm not wrong, that's why the recommended and current default in
libvirtd is to create storage devices with no caching to remove one
layer of duplication.

 I've considered turning off swap on the VMs so all the swapping at
 least happens in *one place*; I dunno if that's best.

Not sure it's a good idea. If the VM needs more working memory than
you allocated, I think it locks up dead if there is insufficient swap
space. At least that appears to be what happened to one of mine.


Re: Memmory and CPU Ballooning

2011-09-19 Thread Emmanuel Noobadmin
On 9/19/11, day knight back2ga...@gmail.com wrote:
 Is it possible and if yes then how.
 Can we increase the memory on a live guest machine without having to
 shutdown or reboot as well as increase and decrase CPUs. if it is
 possible, can some one point me to the documentation :)

Chipping in my 2 cents since nobody's answering, hopefully the sheer
amount of wrong information I put out will generate a meaningful reply
:D

There is/was an option to configure memory ballooning in the domain
xml. However, when I last tried it (on an SL6.0 host), it didn't seem
to be working: the domain would use the initial amount of memory and
hit swap instead of getting more memory. Although I vaguely remember
discovering afterwards that there was some qemu command needed for
this.

Also, I've read that memory ballooning is a bad idea because the
kernel allocates memory resources during boot based on the memory
available at that time. Using ballooning makes that allocation
calculation inaccurate and highly inefficient.
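For what it's worth, when the balloon driver does work, the runtime commands look roughly like this; domain name and sizes are placeholders, and flag availability depends on the libvirt version:

```shell
# <memory> in the domain XML is the ceiling; <currentMemory> the boot-time size.
virsh setmem guest1 2097152 --live   # balloon the running guest to 2 GiB (size in KiB)
virsh setmaxmem guest1 4194304       # raise the ceiling (usually needs a guest restart)
virsh setvcpus guest1 2 --live       # vCPU hot-add, if the guest OS supports it
virsh dommemstat guest1              # inspect the actual/ballooned figures
```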


Any problem if I use ionice on KVM?

2011-06-29 Thread Emmanuel Noobadmin
I keep running into a situation where a KVM guest locks up on some
kind of disk activity, it seems. System load goes way up but CPU %
stays relatively low, based on data a crond script collects before
everything goes south. As a result, the host becomes unresponsive as
well. Initially it appeared to be due to a routine maintenance script,
which I resolved with a combination of noatime and ionice on the
script.

However, it now appears that some other event/process also causes a
lockup at random points in time. It's practically impossible (or I'm
too noob) to troubleshoot and figure out what exactly is causing this.

So I'm wondering if it's safe to run ionice on the KVM process, so
that a runaway guest will not pull down the host with it, which would
perhaps in some way allow me to figure out what is going on.
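What I have in mind is roughly the following; the guest name is a placeholder, and note that ionice's scheduling classes only take effect with the CFQ I/O scheduler on the host:

```shell
# Find the qemu-kvm process for the runaway guest and demote its I/O priority.
PID=$(pgrep -f 'qemu-kvm.*guest1')   # 'guest1' is a placeholder domain name
ionice -c3 -p "$PID"   # class 3 = idle: I/O only when the disk is otherwise free
ionice -p "$PID"       # verify the new class was applied
```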


Better to use iSCSI on host or guest?

2011-06-13 Thread Emmanuel Noobadmin
I'm planning to use iSCSI targets (over gigabit VLANs) for KVM guest
disks. The question I'm wondering about is whether it's better to md
(multipath + mirror) the iSCSI targets on the host and then create LVM
partitions for the guests, or to md the iSCSI targets directly within
the guest.

On one hand, I think it would be slower to process the additional
layers in the guest; on the other hand, my reading seems to indicate
that the kernel performs better disk I/O when it's aware of multiple
disks rather than seeing a single disk.

I've not been able to find any definitive article/data on how these
might balance out. Would anybody on the list have a good idea of which
way is better in terms of I/O performance?
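The host-side arrangement I have in mind would go roughly like this; every target name, address, and device below is a placeholder:

```shell
# Log in to two iSCSI LUNs over separate gigabit VLANs, mirror them with md,
# then carve out LVs for the guests.
iscsiadm -m node -T iqn.2011-06.example:lun0 -p 192.168.10.1 --login
iscsiadm -m node -T iqn.2011-06.example:lun1 -p 192.168.20.1 --login
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0
vgcreate guestvg /dev/md0
lvcreate -L 20G -n guest1-disk guestvg   # one LV per guest disk
```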


Re: Better to use iSCSI on host or guest?

2011-06-13 Thread Emmanuel Noobadmin
On 6/14/11, Avi Kivity a...@redhat.com wrote:
 My gut feeling is to do iscsi in the host.  I guess it's best to measure
 though.  Please post your findings if you do that.

Any suggestions or recommendations as to how/what I should be measuring with?

So far, in trying to determine how bad the qcow2 disk bottleneck is on
my VMs, I've been using dd with the dsync and direct options, as well
as hdparm, which errors out (not sure if this counts as a KVM/virtio
bug?) on the virtio disk after returning a buffered result.

But these don't seem to be very good tools since they are sequential
only. I've tried iozone on my home machine, but it takes too long per
run; unfortunately this host and its VMs are live, and I don't have
the luxury of another full set of hardware to test on.
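For reference, the sequential dd tests I mentioned boil down to something like this (file path and sizes are arbitrary; oflag=direct bypasses the page cache so buffered numbers don't flatter the result):

```shell
# Sequential write bypassing the host page cache (O_DIRECT), then with
# a synchronized write per block (O_DSYNC); compare the reported rates.
dd if=/dev/zero of=./ddtest.bin bs=1M count=16 oflag=direct
dd if=/dev/zero of=./ddtest.bin bs=1M count=16 oflag=dsync
rm -f ./ddtest.bin
```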


Re: [CentOS] Install CentOS as KVM guest

2011-04-29 Thread Emmanuel Noobadmin
On 4/29/11, Emmanuel Noobadmin centos.ad...@gmail.com wrote:
 Only problem is... networking still isn't working although brctl show
 on the host shows that a vnet0 had been created and attached to the
 bridge. Any pointers would be appreciated!


Just to close off this issue for the benefit of any future clueless
newbies like me: networking wasn't working due to one missing element
in the .xml.

<model type='virtio'/> was the missing ingredient.
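Put together, the working bridged NIC stanza looks along these lines (the bridge name br0 is from my setup; the rest is a generic sketch):

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```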


Re: [CentOS] Install CentOS as KVM guest

2011-04-28 Thread Emmanuel Noobadmin
On 4/28/11, Simon Grinberg si...@redhat.com wrote:
 What version of VMWare are you using?

Currently, I'm not using VMware on this new server yet, as I really do
hope to be able to use a unified solution. But so far it's been just
one brick wall after another. I've given myself until this weekend to
get things working, or I'll just go the easy way.

Previously, I've used VMware Server 2 as well as VMware Player 3, all
running off a CentOS 5.x host.


Re: [CentOS] Install CentOS as KVM guest

2011-04-28 Thread Emmanuel Noobadmin
On 4/28/11, Gleb Natapov g...@redhat.com wrote:
 So why don't you use virt-manager?

The original intention was to run the host without any graphical
desktop or anything not necessary to host the guests. That was based
on reading which recommended not installing anything beyond the
necessary, to minimize potential security problems and maximize
available resources.

Then there were those pages which warned that virt-manager didn't work
too well if bridged networking was required.

Last but not least, when I finally gave up and installed the desktop,
virt-manager couldn't find the hypervisor. Checking up, it appeared
that the user needed additional permissions on certain files; after
these were granted and tested via the CLI, I still got errors.

Starting up X as root gave me an ominous warning that I really
shouldn't be doing this, and since I didn't think it was wise in the
first place to have a desktop running as root on what's supposed to be
a production machine, I stopped trying that route and went back to
figuring out how to get virt-install to work.


Re: [CentOS] Install CentOS as KVM guest

2011-04-28 Thread Emmanuel Noobadmin
On 4/28/11, Gleb Natapov g...@redhat.com wrote:
 Qemu is not intended to be used directly by end user. It is too complex as
 you already found out. VMware don't even give you access to such low parts
 of virt stack. You should use libvirt or virt-manager instead. Especially
 if you are concerned about security. I think libvirt can start guest on
 headless server.

Sorry for the confusion, I was using libvirtd in CLI, i.e.
virt-install and virsh, not qemu directly.

 If this still fails for you you need to complain to libvirt developers
 (not in a rant mode, but providing details of what exact version of
 software you have problem with and what are you trying to do). And
 libvirt developers will not be shy to complain to qemu developers if the
 problem turned to be on their side.

Apologies about the rant mode as well. Before that, I tried sending
two emails (25th and 26th Apr) to the KVM list with some details,
hoping to get some advice, but each failed to materialize on the kvm
list for unknown reasons.

So I resorted to posting to the CentOS list, where I did get some
responses, for which I'm very thankful. The rant post came when,
despite the additional advice that helped me get a little further, I
kept running into unexpected brick walls, like anaconda not seeing the
DVD (a mounted ISO specified using --location) that it had just booted
from.

Out of frustration, I CC'd that particular email to the kvm list,
figuring that since it's likely to get me flamed, the imp of
perversity would probably let it through... and it did.

 As far as I know libvirt has no problem using bridged networking and
 virt-manager use libvirt so it should work if you use new enough virt
 stack, but you should ask on libvirt mailing list instead.

I guess those were outdated warnings about older versions. I'll give
it another spin, given some of the new suggestions like using
virt-install to create the disk file. If it still doesn't work, I'll
check the libvirt ML (I'm belatedly getting the idea that libvirt is
not part of kvm).


Re: [CentOS] Install CentOS as KVM guest

2011-04-28 Thread Emmanuel Noobadmin
On 4/28/11, Gleb Natapov g...@redhat.com wrote:
 of virt stack. You should use libvirt or virt-manager instead. Especially
 if you are concerned about security. I think libvirt can start guest on
 headless server.

 If this still fails for you you need to complain to libvirt developers
 (not in a rant mode, but providing details of what exact version of
 software you have problem with and what are you trying to do). And
 libvirt developers will not be shy to complain to qemu developers if the
 problem turned to be on their side.

I've finally got an installation working, though not using
virt-install or virt-manager. After reading through the libvirt site,
I started writing the domain definition manually.

Through trial and error, comparison with what virt-install generated
and the online examples, I got a working xml. Just for the record,
virsh --version reports 0.8.1

For the benefit of other newbies, my discoveries so far:

1. No activity after the guest VM started
Originally, when I specified the CentOS DVD ISO, the guest would load
and then do nothing but chew up 100% CPU on the allocated 1 vcpu for
quite some time. Subsequently, it appeared that mounting the ISO as a
loopback device was the solution. This seemed to imply that libvirt or
KVM couldn't boot a guest from an ISO... which didn't quite make
sense.

I ran into the issue again when using my manually generated XML; it
turned out that the cause was the permissions (644) on vmlinuz and
initrd.img on the DVD. By copying the two files to local disk,
changing the permissions, and using the initrd and kernel options, I
was able to boot the guest.

I was curious how virt-install got around this and learnt that I could
dump the config from a running machine. It turns out that virt-install
doesn't exactly use the .xml it created; it adds things to the running
version, most importantly temporary copies of initrd.img and vmlinuz.
I think the ISO problem with virt-install may be that it was unable to
mount the ISO to copy these files, despite me running it as root.
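The direct-kernel-boot arrangement described above corresponds to something like this in the domain XML (the paths are examples, not my exact ones):

```xml
<os>
  <type arch='x86_64'>hvm</type>
  <kernel>/var/lib/libvirt/boot/vmlinuz</kernel>
  <initrd>/var/lib/libvirt/boot/initrd.img</initrd>
</os>
```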


2. Anaconda couldn't see the DVD
This was my earlier rant, since it sounded really stupid that the
installer couldn't see the disc it had just booted from. Now, with #1
solved, it seems that anaconda wasn't booting off the disc after all.

However, the interesting thing is that once I got past #1, my guest
could install from the DVD.

After comparing the xml files, it seems the problem is that
virt-install did not save the path to the ISO/mounted DVD: under the
<disk> element, there was no <source>. With my manually generated xml,
specifying the ISO as the source worked.

But the virt-installed anaconda complained that I don't have any hard
disks or cdroms, not that there was no disc in the drive. Every time I
picked an option like install media on HDD or CDROM, it prompted that
there was no device and asked if I wanted to install a driver. Since
the hard disk definition appears to be the same, I'm not sure why that
happened with virt-install's xml but not mine.


So right now I have managed to get the OS installed; rebooting it
required removing the initrd and kernel entries, as well as the
source, so that it would boot from the disk image.

Only problem is... networking still isn't working, although brctl show
on the host shows that a vnet0 has been created and attached to the
bridge. Any pointers would be appreciated!
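For completeness, the host-side checks I've run so far boil down to
(bridge and interface names as above):

```shell
# The tap device libvirt created for the guest should show up as a
# port on the bridge, and should be up.
brctl show br0        # vnet0 should be listed under br0
ip link show vnet0    # interface should exist and be UP
```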
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [CentOS] Install CentOS as KVM guest

2011-04-26 Thread Emmanuel Noobadmin
Unfortunately, things still don't work.

rant
It's just ridiculous that the installer under KVM does not detect the
cdrom drive it was booted from. Trying to do a net-install doesn't
work either; maybe I messed up the networking, even though br0 and
eth0 are working on the host.

Never mind, let's install apache and use the mounted ISO. Verified
apache is working and the CentOS folder is accessible via web browser.
But still the guest installer cannot seem to access the installation
files.

OK, so maybe I messed up the networking; after all, I am the noob...
maybe specifying --network=bridge:br0 isn't enough. There was
something about a tap or tunnel when I initially read up on bridged
networking. Looking up more on this, there are several resources
(sorry, the KVM FAQ leads to a page that no longer exists) which, like
many other instructions, give the commands without really explaining
what/why.

So I have to run some tunctl command and scripts to add a bridge and
tunnel/tap... but wait, I already made my bridge; will running the
script kill my networking by creating a second bridge? Especially
given the warning about temporarily losing connectivity due to ifcfg1
being reset.

And if I need to run this script every time in order to activate the
bridge and tunnel, doesn't that mean all my guest OSes are screwed if
the host reboots while I'm not around? Shouldn't these things go into
permanent files like if-tun0 or something?
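From what I can tell, on Red Hat-style systems the bridge itself can
indeed go into permanent config files instead of a boot script; a
minimal sketch, assuming eth0 is the NIC being bridged (values are
illustrative, not from a verified working setup):

```
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```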

Every year, I get a little closer to not using VMWare but it seems
like this year is going to be victory for VMWare again.

CC to kvm mailing list but I expect, like my previous request for help
to the list, it will be rejected by mailman or a moderator.
/rant

Just damn frustrated, even if it's probably just me being too stupid
to know how to use KVM.


Connecting to a new guest VM

2011-04-25 Thread Emmanuel Noobadmin
This is probably a very noob question but I haven't been able to find
a solution that worked so far. Maybe it's just something really minor
that I've missed so I'll appreciate some pointers.

Running on Scientific Linux 6, bridged networking configured with
ifcfg-br0 and ifcfg-eth0, networking is working, I can ssh/vnc into
the host.

I created a guest using the following command as root following the
virt-install man page.

virt-install -n vm_01 -r 640 --vcpus=1
--file=/home/VMs/vm110401/vm_01_d1 -s 170 --nonsparse
--network=bridge:br0  --accelerate
--cdrom=/home/ISO/CentOS-5.6-x86_64-bin-DVD-1of2.iso --os-type=linux
--os-variant=rhel5

It seems to work, except I get a line that says Escape character is
'^]' and the console doesn't react to any further input, except to
exit. Then it warns me that the OS is still being installed.

Being a noob, I figured maybe a GUI would be easier. So I installed an
X desktop and created another VM with the same parameters, except I
added --vnc --vncport=15901

However, I cannot connect to the VM, whether using the public IP or
through the LAN IP.

I have the VNC port allowed in iptables; the port is not the default
5901 since I already have the external VNC server listening on that
port.

I've also tried to connect to the VM via 127.0.0.1 through my VNC
session, but depending on what I try (public, LAN, VNC from within VNC
to localhost) I get either a connection refused or a write: broken
pipe error.

Based on some google searches, I've also edited qemu.conf to include
the line vnc_listen = "0.0.0.0"
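For the record, the checks I've been using after that edit are roughly
(port number from my setup; note the setting only applies to guests
started after libvirtd has been restarted, so the VM has to be restarted
too):

```shell
# After setting vnc_listen = "0.0.0.0" in /etc/libvirt/qemu.conf,
# restart libvirtd, restart the guest, then confirm QEMU is actually
# listening on the expected port and the firewall allows it.
service libvirtd restart
netstat -tlnp | grep 15901   # qemu should be bound to 0.0.0.0:15901
iptables -L -n | grep 15901  # confirm the iptables rule really matches
```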

But still no joy, and from googling, apparently I'm not the only noob
who finds himself stuck. So I'll appreciate it greatly if somebody
could point out what's missing or wrong, thanks!


Re: Using qemu-img to directly convert physical disk to KVM image

2010-11-09 Thread Emmanuel Noobadmin
Just a bit more info from my unfortunate experience. It took me about
20 hours to get the original WinXP machine virtualized, including an
unfortunate bug (lock condition?) that required a re-install after I
had spent time making an image.

Also, I initially made the mistake of making an image of every
partition instead of cloning the entire physical drive, so that
obviously didn't work. When I realized my mistake, I thought that
since it was possible to attach a physical drive to a guest, maybe I
could run the guest directly off the physical drive (the original was
a fakeraid 1, so I had a backup copy in any case).

But for some reason it didn't work.

That was about the time I asked about the direct method. But the
resulting qcow2 didn't work in the end; I thought it did and happily
posted my last update. However, the OS never managed to complete
booting; for some reason the guest took up 25% load and stayed stuck.

I was running out of time, so apologies to the KVM folks, I took the
easy way out again (Xen didn't work for me either, a year ago). I
downloaded VMware Player, ran qemu-img to produce a vmdk, and although
there was an error message about an invalid boot.ini, the XP guest
works.

Despite the possibility of losing yet another day, I'll still give KVM
a try the next time I have to virtualize a machine.


On 11/9/10, Michael Tokarev m...@tls.msk.ru wrote:
 09.11.2010 05:54, Emmanuel Noobadmin wrote:
 Thanks for the confirmation and just for the benefit of anybody else
 who subsequently searches for <keywords>KVM QEMU convert physical
 drive virtual machine image</keywords>, yes it works :)

 Heh.  Well, it is not something unexpected really.  Just a few more
 comments below...

 On 11/9/10, Michael Tokarev m...@tls.msk.ru wrote:
 09.11.2010 01:48, Emmanuel Noobadmin wrote:
 I'm trying to convert a physical Windows XP machine into a KVM guest.
 All the guides so far mentions using dd to create a flat image file,
 then using qemu-img to convert that to qcow2. Since I've been making
 mistake here and there, retrying the process several times (initially
 converting each logical partition into an image), the question struck
 me: is there any reason why I cannot do something like this
 qemu-img convert -f /dev/sdc -O qcow2 /images/winxp.qcow instead of
 having to do it in two passes which literally take hours each.

 You mentioned several kinds of storage.  The format of (virtual) drive
 can be raw or qcow2, or others supported by qemu.  The location of the
 data can be in a file on a filesystem, or it can be a physical device
 (/dev/sdc), or a lvm volume, or a partition, or an iscsi lun, or any
 other block device.  Either reasonable combination of the two can be
 used.

 In this case, running your guest off /dev/sda directly will work too.
 Moreover, you most likely do not want to convert it to a qcow2 format,
 due to various small and large issues with it - the flat image file
 created with dd, or a raw format created by `qemu-img -O raw' (which
 is almost the same but with zero blocks skipped) will most likely work
 better (read: faster and more reliable).

 /mjt



Using qemu-img to directly convert physical disk to KVM image

2010-11-08 Thread Emmanuel Noobadmin
I'm trying to convert a physical Windows XP machine into a KVM guest.
All the guides so far mention using dd to create a flat image file,
then using qemu-img to convert that to qcow2. Since I've been making
mistakes here and there, retrying the process several times (initially
converting each logical partition into an image), the question struck
me: is there any reason why I cannot do something like
qemu-img convert -f /dev/sdc -O qcow2 /images/winxp.qcow instead of
having to do it in two passes, each of which literally takes hours?


Re: Using qemu-img to directly convert physical disk to KVM image

2010-11-08 Thread Emmanuel Noobadmin
Thanks for the confirmation and just for the benefit of anybody else
who subsequently searches for <keywords>KVM QEMU convert physical
drive virtual machine image</keywords>, yes it works :)

On 11/9/10, Michael Tokarev m...@tls.msk.ru wrote:
 09.11.2010 01:48, Emmanuel Noobadmin wrote:
 I'm trying to convert a physical Windows XP machine into a KVM guest.
 All the guides so far mentions using dd to create a flat image file,
 then using qemu-img to convert that to qcow2. Since I've been making
 mistake here and there, retrying the process several times (initially
 converting each logical partition into an image), the question struck
 me: is there any reason why I cannot do something like this
 qemu-img convert -f /dev/sdc -O qcow2 /images/winxp.qcow instead of
 having to do it in two passes which literally take hours each.

 This is exactly the way to do it - converting the physical disk directly
 to a qcow (or whatever format) file using qemu-img.  I've no idea why
 all the guide writers are so confused.

 The only problem with your exact version is that you have an extra -f
 argument - it expects a parameter, the input image type, which is
 raw, so either use -f raw, or remove -f.

 /mjt
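So, for anyone landing here from a search, the corrected one-pass
command per the reply would be (device and output path from my own
attempt):

```shell
# One-pass physical-disk-to-qcow2 conversion: -f names the *input*
# format (raw, for a physical disk), -O names the output format.
qemu-img convert -f raw -O qcow2 /dev/sdc /images/winxp.qcow
```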



Creating guest/image from live physical machine?

2010-07-27 Thread Emmanuel Noobadmin
I've been searching for a howto on this, but all the guides/docs I've
found seem to assume that we would want to either convert from an
existing VMWare/Xen VM or install a whole new VM.

So my question is: is it possible to create an image of a running
physical machine and then start it on another machine as a VM but with
different hardware specs?

Specifically, I have a server running with dmraid 1. I want to
virtualize it with minimal downtime onto a machine which is already
set up with dmraid 1, since it doesn't make sense to recreate a pair
of virtual disks for dmraid on top of the physical machine's dmraid.

At the moment, based on some of the things I've read which are not
specific to this situation, it seems that I have to do something like
the following:

1. dd the physical machine drives into a file
2. Copy it to the new physical machine.
3. From within the VM guest, dd the file into the VM's virtual drive
4. Edit the VM's fstab after it's been overwritten, i.e. mounting
/dev/sda instead of /dev/md0
5. Change any other necessary config
6. Reboot the VM and hope the new settings take.
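Steps 1-3 above could presumably be collapsed into a single streamed
copy, something like the following (the hostname is made up; /dev/md0 is
the array device from my setup):

```shell
# Stream the source array straight into an image file on the new host,
# skipping the intermediate copy on the old machine entirely.
ssh root@oldserver 'dd if=/dev/md0 bs=1M' | dd of=/home/VMs/server.img bs=1M
```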

Or is there a better/easier/safer way to do this?


Re: Help converting existing VMWare Server 2 guest to KVM

2010-07-10 Thread Emmanuel Noobadmin
On 7/11/10, ewheeler k...@ew.ewheeler.org wrote:
 On Sat, 2010-07-10 at 22:16 +0800, Emmanuel Noobadmin wrote:
 I'm trying to convert an existing VMWare Server 2 guest (Vista 64) to
 KVM (on CentOS 5.5) in order to evaluate migrating to KVM.

 Please advise if there is any more relevant and detailed guide on
 doing this? Thanks.

 After converting the .VDI to a .qcow2, you might try something as simple
 as this from a console:

I don't know if this is part of the problem, but I don't have a .VDI;
I was converting from a .vmdk file.

   qemu-system-x86_64 -hda yourdiskimage.qcow2

 Thats about as simple as it can get.  What do you get at the console
 when you run this?

I don't have that utility on my system. I tried to install the qemu
rpm, but yum fails because it conflicts with the KVM package
(kvm-qemu-img-83-164.el5_5.9.x86_64) already installed.

Searching for qemu-related files, it seems the only utilities/tools I
have are qemu-img and qemu-kvm.
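A sketch of the equivalent using what's actually on the system (file
names are assumptions; on RHEL/CentOS the KVM binary lives in
/usr/libexec rather than on the default PATH):

```shell
# Convert the VMware disk with the qemu-img that ships in the KVM
# package, then boot the result with the distro's qemu-kvm binary.
qemu-img convert -f vmdk -O qcow2 vista.vmdk vista.qcow2
/usr/libexec/qemu-kvm -m 1024 -hda vista.qcow2
```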


Re: Help converting existing VMWare Server 2 guest to KVM

2010-07-10 Thread Emmanuel Noobadmin
On 7/11/10, ewheeler k...@ew.ewheeler.org wrote:

 I'm having trouble on CentOS 5.5 too

That doesn't sound too positive, since Redhat is supposedly shifting
to KVM as their virtualization platform of choice. :(

 Have you tried the testing repo using these instructions?
 http://wiki.centos.org/HowTos/KVM#head-ddf21f42074a58b940ff360d78b7f79130c193e4

 You might also try the EPEL and freshrpms repo for the latest qemu/kvm
 support.

I was hoping it wouldn't come down to such hoop jumping, especially on
a machine which serves some light production duties. I'll give it a
spin once I've gathered the spare parts to build my two-node test
setup.

 BTW, I am having great success using Ubuntu 10.04 as a host OS and
 running the VMs under it---and I've been a die-hard CentOS/RHEL guy for
 years.   I  tried CentOS 5.[45] a while ago for KVM and was not pleased
 with the functionality, primarily because CentOS still uses the 2.6.18
 relic with (too)many backports.

 Let us know how it goes!

I'm quite tempted now and then to give Ubuntu a spin, especially given
the support/integration for the newer stuff like Eucalyptus. However,
at the same time, I'm risk averse, and with sys/infrastructure being a
seconded duty, I'm more inclined to keep everything on the same OS so
I won't screw something up accidentally. Don't want to use the wrong
switch/options on a command one day and find myself wrecking a
production system :D

But if CentOS really can't be made to work, then it is either Ubuntu
or sticking to VMware; I might just bite the bullet so I don't end up
stuck with a VM setup that's not supported a few years down the road.


ESXi, KVM or Xen?

2010-07-02 Thread Emmanuel Noobadmin
Which of these would be the recommended virtualization platform for
mainly CentOS guests on a CentOS host, especially for running a
virtualized mail server? From what I've read, objectively it seems
that VMWare is still the way to go, although I would have liked to go
with Xen or KVM just as a matter of subjective preference.


VMWare's offering seems to have the best support and tools, and is
likely the most mature of the options. Also, given their market
dominance, they are unlikely to just up and die in the near future.

Xen would have been a possible option, except Redhat appears to be
focusing on KVM as their virtualization platform of choice to compete
with VMWare and Citrix, so maybe Xen support will be killed shortly.
Plus, the modified xen kernel apparently causes conflicts with certain
software, at least based on previous incidents where I'd been advised
not to use the CentOS xen kernel if not using xen virtualization.


KVM would be ideal since it's open source and will be supported in
CentOS as far as can reasonably be foreseen. However, looking at the
available resources online, it seems to have these key disadvantages:

1. Poorer performance under load.
http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Quantitative+Comparison+of+Xen+and+KVM.pdf
This 2008 XenSummit paper indicates that it dies on heavy network
load, as well as when there are more than a few VMs doing heavy
processing at the same time. But that was two years ago, and it seems
they weren't using paravirtual drivers.

http://vmstudy.blogspot.com/2010/04/network-performance-test-xenkvm-vt-d.html
This blog tested Xen/KVM fairly recently. While the loads are not as
drastic, and neither is the difference, it still shows that KVM lags
behind by about 10%.

This is a concern since I plan to put storage on the network, and the
heaviest load the client has is basically the email server, due to the
volume plus the inline antivirus and anti-spam scanning to be done on
those emails. Admittedly, they won't be seeing as many emails as, say,
a webhost, but most of their emails come with relatively large
attachments.


2. Security
Some sites point out that a KVM VM runs in userspace as threads, so a
compromised guest OS would then give an intruder access to the host
system as well as to other VMs.

Should I really be concerned, or are these worries only for extreme
situations, with KVM viable for normal production use? Are there other
things I should be aware of?


Re: ESXi, KVM or Xen?

2010-07-02 Thread Emmanuel Noobadmin
 if by 'put storage on the network' you mean using a block-level
 protocol (iSCSI, FCoE, AoE, NBD, DRBD...), then you should by all
 means initiate on the host OS (Dom0 in Xen) and present to the VM as
 if it were local storage.  it's far faster and more stable that way.
 in that case, storage wouldn't add to the VM's network load, which
 might or might not make those (old) scenarios irrelevant

Thanks for that tip :)

 in any case, yes; Xen does have more maturity on big hosting
 deployments.  but most third parties are betting on KVM for the
 future.  the biggest examples are Redhat, Canonical, libvirt (which is
 sponsored by redhat), and Eucalyptus (which reimplements amazon's EC2
 with either Xen or KVM, focusing on the last) so the gap is closing.

This is what I figured too, hence it's not a straightforward choice. I
don't need top-notch performance for most of the servers targeted for
virtualization. Loads are usually low except on the mail servers, and
often only when there's a mail loop problem. So if the performance hit
under worst-case conditions is only 10~20%, it's something I can live
with, especially since the intended VM servers (i5/i7) will be
significantly faster than the current ones (P4/C2D) I'm basing my
estimates on.

But I need to do my due diligence and have justification ready to
show that current performance/reliability/security is at least good
enough, instead of I like where KVM is going and think it'll be the
platform of choice in the years to come. Bosses and clients tend to
frown on that kind of thing :D

 and finally, even if right now the 'best' deployment on Xen definitely
 outperforms KVM by a measurable margin; when things are not optimal
 Xen degrades a lot quicker than KVM.  in part because the Xen core
 scheduler is far from the maturity of Linux kernel's scheduler.

The problem is finding stats to back that up if my clients/boss ask
about it. So far most of the available comparisons/data seem rather
dated, mostly 2007 and 2008. The most professional-looking one, in
the PDF I linked to, seems to indicate the opposite, i.e. that KVM
degrades faster when things go south. The graph with the Apache
problem is especially damning, because our primary products/services
are web-based applications, with infrastructure being a supplementary
service/product.

In addition, I remember reading a thread on this list where an Intel
developer pointed out that the Linux scheduler causes a performance
hit, about 8x~10x slower, when the physical processors are heavily
loaded, there are more vCPUs than pCPUs, and the scheduler puts the
same VM's vCPUs onto the same physical core.

So I am a little worried, since 8~10x is a massive difference,
especially if some process goes awry, starts chewing up processor
cycles, and the VM starts to lag because of it: a vicious cycle that
makes it even harder to fix things without killing the VM.

Of course, if I could honestly tell my clients/boss This, this and
this are rare situations we will almost never encounter..., then it's
a different matter. Hence asking about this here :)