Unable to get MenuetOS booting

2009-10-14 Thread Ubitux
Hi, I'm not able to get MenuetOS booting while using the kvm modules. QEMU hangs on floppy boot. Here is the procedure: cd /tmp; wget -c 'http://www.menuetos.be/download.php?CurrentMenuetOS' -O menuetos.zip; unzip -u menuetos.zip; qemu-kvm -m 512 -fda M64-*.IMG -boot a. I have an Intel i7 920, I use

Re: [PATCH][RFC] Xen PV-on-HVM guest support

2009-10-14 Thread Jan Kiszka
Ed Swierk wrote: As we discussed a while back, support for Xen PV-on-HVM guests can be implemented almost entirely in userspace, except for handling one annoying MSR that maps a Xen hypercall blob into guest address space. A generic mechanism to delegate MSR writes to userspace seems

[PATCH] qemu-kvm: x86: Add support for NMI states

2009-10-14 Thread Jan Kiszka
This adds the required bits to retrieve and set the so far hidden NMI-pending and NMI-masked states on the KVM kernel side. It also extends the CPU VMState for proper saving/restoring. We can now safely reset a VM while NMIs are in flight, and we can live migrate etc. too. Fortunately, the

[PATCH] KVM test: Add a kvm subtest guest_s4

2009-10-14 Thread Lucas Meneghel Rodrigues
This test suspends a guest OS to disk; it supports both Linux and Windows. Signed-off-by: Ken Cao k...@redhat.com Signed-off-by: Yolkfull Chow yz...@redhat.com --- client/tests/kvm/kvm_tests.cfg.sample | 16 client/tests/kvm/tests/guest_s4.py | 66 + 2

Re: [Autotest] [PATCH] Add a kvm test guest_s4 which supports both Linux and Windows platform

2009-10-14 Thread Lucas Meneghel Rodrigues
On Tue, Oct 13, 2009 at 11:54 PM, Yolkfull Chow yz...@redhat.com wrote: On Tue, Oct 13, 2009 at 05:29:40PM -0300, Lucas Meneghel Rodrigues wrote: Hi Yolkfull and Chen: Thanks for your test! I have some comments and doubts to clear, most of them are about content of the messages delivered for

Re: [Autotest] [PATCH] Add a kvm test guest_s4 which supports both Linux and Windows platform

2009-10-14 Thread Yolkfull Chow
On Wed, Oct 14, 2009 at 06:58:01AM -0300, Lucas Meneghel Rodrigues wrote: On Tue, Oct 13, 2009 at 11:54 PM, Yolkfull Chow yz...@redhat.com wrote: On Tue, Oct 13, 2009 at 05:29:40PM -0300, Lucas Meneghel Rodrigues wrote: Hi Yolkfull and Chen: Thanks for your test! I have some comments and

Re: [Autotest] [PATCH] Using shutil.move to move result files in job.py

2009-10-14 Thread Lucas Meneghel Rodrigues
Ok, looks good. Committed as http://autotest.kernel.org/changeset/3844 On Mon, Oct 12, 2009 at 11:36 PM, Cao, Chen k...@redhat.com wrote: Since os.rename requires that the file is in the same partition with the dest directory, we would get a python OSError if the result directory is mounted to

Re: [Autotest] [PATCH] Test 802.1Q vlan of nic

2009-10-14 Thread Lucas Meneghel Rodrigues
Hi Amos, thanks for the patch, here are my comments (pretty much concerning only coding style): On Wed, Sep 23, 2009 at 8:19 AM, Amos Kong ak...@redhat.com wrote: Test 802.1Q vlan of nic, config it by vconfig command. 1) Create two VMs 2) Setup guests in different vlan by vconfig and test

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Avi Kivity
On 10/14/2009 07:37 AM, Christoph Hellwig wrote: Christoph, wasn't there a bug where the guest didn't wait for requests in response to a barrier request? Can't remember anything like that. The bug was the complete lack of cache flush infrastructure for virtio, and the lack of advertising

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Matthew Tippett
I understand. However, the test itself is a fairly trivial representation of a single-tier, high-transactional-load system (i.e., a system that is logging a large number of events). The phoronix test suite simply hands over to a binary using sqlite and does 25000 sequential inserts. The overhead of
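
The workload being described — many small sequential inserts — is easy to reproduce, and the gap between per-row commits (a sync per transaction) and one enclosing transaction is exactly where host-side caching behavior shows up. A minimal sketch of that kind of benchmark (this is not the Phoronix binary; the table name and row counts are illustrative):

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(rows, batch):
    """Insert `rows` rows: one big transaction if batch, else one commit per row."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
    con.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
    start = time.time()
    if batch:
        con.execute("BEGIN")
        for _ in range(rows):
            con.execute("INSERT INTO log (msg) VALUES ('event')")
        con.execute("COMMIT")          # one commit, one sync
    else:
        for _ in range(rows):          # each INSERT commits (and syncs) on its own
            con.execute("INSERT INTO log (msg) VALUES ('event')")
    elapsed = time.time() - start
    (count,) = con.execute("SELECT COUNT(*) FROM log").fetchone()
    con.close()
    return elapsed, count

fast, n = timed_inserts(1000, batch=True)
slow, m = timed_inserts(1000, batch=False)
print(f"one transaction: {fast:.3f}s, per-row commits: {slow:.3f}s ({n} rows each)")
```

On real rotating storage the per-row-commit variant is dominated by the cost of each sync, which is why lying about (or caching) those syncs on the host changes the numbers so dramatically.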

[PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Avi Kivity
Early implementations of virtio devices did not support barrier operations, but did commit the data to disk. In such cases, drain the queue to emulate barrier operations. Signed-off-by: Avi Kivity a...@redhat.com --- drivers/block/virtio_blk.c |6 +- 1 files changed, 5 insertions(+), 1
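
The idea of the fallback — if the device cannot order requests itself, emulate a barrier by waiting for everything already in flight to complete before (and after) the barrier'd write — can be sketched as a toy model outside the kernel (this is illustrative Python, not the actual drivers/block/virtio_blk.c change):

```python
class ToyQueue:
    """Toy request queue that emulates a write barrier by draining."""
    def __init__(self, supports_barriers):
        self.supports_barriers = supports_barriers
        self.in_flight = []
        self.completed = []

    def submit(self, req):
        self.in_flight.append(req)

    def drain(self):
        # wait for every request already queued to reach stable storage
        self.completed.extend(self.in_flight)
        self.in_flight.clear()

    def barrier_write(self, req):
        if self.supports_barriers:
            self.submit(("BARRIER", req))   # the device orders it for us
        else:
            self.drain()                    # fallback: drain, write, drain
            self.submit(req)
            self.drain()

q = ToyQueue(supports_barriers=False)
q.submit("A")
q.submit("B")
q.barrier_write("journal-commit")
# "A" and "B" are guaranteed to complete before the commit record
```

Draining is more expensive than a real ordered tag because it serializes the whole queue, but it preserves the ordering guarantee that journaling filesystems rely on.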

Re: [Autotest] [PATCH] Add pass through feature test (support SR-IOV)

2009-10-14 Thread Lucas Meneghel Rodrigues
Yolkfull, I've studied about single root IO virtualization before reviewing your patch, the general approach here looks good. There were some stylistic points as far as code is concerned, so I have rebased your patch against the latest trunk, and added some explanation about the features being

Re: [Qemu-devel] Release plan for 0.12.0

2009-10-14 Thread Arnd Bergmann
On Thursday 08 October 2009, Anthony Liguori wrote: Jens Osterkamp wrote: On Wednesday 30 September 2009, Anthony Liguori wrote: Please add to this list and I'll collect it all and post it somewhere. What about Or Gerlitz's raw backend driver? I did not see it go in yet, or

Re: Release plan for 0.12.0

2009-10-14 Thread Michael S. Tsirkin
On Thu, Oct 08, 2009 at 09:21:04AM -0500, Anthony Liguori wrote: Jens Osterkamp wrote: On Wednesday 30 September 2009, Anthony Liguori wrote: o VMState conversion -- I expect most of the pc target to be completed o qdev conversion -- I hope that we'll get most of the pc target

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Christoph Hellwig
On Wed, Oct 14, 2009 at 08:03:41PM +0900, Avi Kivity wrote: Can't remember anything like that. The bug was the complete lack of cache flush infrastructure for virtio, and the lack of advertising a volatile write cache on IDE. By complete flush infrastructure, you mean host-side and

[PATCHv2 1/2] Complete cpu initialization before signaling main thread.

2009-10-14 Thread Gleb Natapov
Otherwise some cpus may start executing code before others are fully initialized. Signed-off-by: Gleb Natapov g...@redhat.com --- v1-v2: - reinit cpu_single_env after qemu_cond_wait() qemu-kvm.c | 29 +++-- 1 files changed, 15 insertions(+), 14 deletions(-) diff

[PATCH 2/2] Don't sync mpstate to/from kernel when unneeded.

2009-10-14 Thread Gleb Natapov
mp_state, unlike other cpu state, can be changed not only from the vcpu context it belongs to, but by other vcpus too. That makes loading it from the kernel and saving it back unsafe if the mp_state value is changed inside the kernel between the load and the save. For example, vcpu 1 loads mp_state into user-space and the
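
The race being avoided is a classic lost update: one vcpu thread loads mp_state from the kernel, another vcpu changes it inside the kernel (e.g. on an INIT/SIPI), and the first thread then writes its stale copy back. Schematically (a toy Python model, not the qemu-kvm code):

```python
# Toy model of the load / modify-elsewhere / save race on mp_state.
kernel_mp_state = "RUNNABLE"

# vcpu 1 synchronizes state: loads the kernel value into user space
user_copy = kernel_mp_state

# meanwhile, another vcpu changes the state inside the kernel
kernel_mp_state = "SIPI_RECEIVED"

# vcpu 1 later writes its (now stale) copy back, losing the update
kernel_mp_state = user_copy
# the SIPI_RECEIVED transition has been silently overwritten
```

Not syncing mp_state when it is not needed sidesteps the window in which the stale copy exists.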

Re: [Qemu-devel] Release plan for 0.12.0

2009-10-14 Thread Anthony Liguori
Arnd Bergmann wrote: There are two reasons why I think this backend is important: - As an easy way to provide isolation between guests (private ethernet port aggregator, PEPA) and external enforcement of network privileges (virtual ethernet port aggregator, VEPA) using the macvlan

Re: [Qemu-devel] Release plan for 0.12.0

2009-10-14 Thread Michael S. Tsirkin
On Wed, Oct 14, 2009 at 08:53:55AM -0500, Anthony Liguori wrote: Arnd Bergmann wrote: There are two reasons why I think this backend is important: - As an easy way to provide isolation between guests (private ethernet port aggregator, PEPA) and external enforcement of network privileges

Re: [Qemu-devel] Release plan for 0.12.0

2009-10-14 Thread Michael S. Tsirkin
On Wed, Oct 14, 2009 at 03:09:28PM +0200, Arnd Bergmann wrote: On Thursday 08 October 2009, Anthony Liguori wrote: Jens Osterkamp wrote: On Wednesday 30 September 2009, Anthony Liguori wrote: Please add to this list and I'll collect it all and post it somewhere. What

Re: Release plan for 0.12.0

2009-10-14 Thread Anthony Liguori
Michael S. Tsirkin wrote: Looks like Or has abandoned it. I have an updated version which works with new APIs, etc. Let me post it and we'll go from there. I'm generally inclined to oppose the functionality as I don't think it offers any advantages over the existing backends. I

Re: Release plan for 0.12.0

2009-10-14 Thread Michael S. Tsirkin
On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote: Michael S. Tsirkin wrote: Looks like Or has abandoned it. I have an updated version which works with new APIs, etc. Let me post it and we'll go from there. I'm generally inclined to oppose the functionality as I don't

Re: [Autotest] [PATCH] KVM test: Add PCI pass through test

2009-10-14 Thread Lucas Meneghel Rodrigues
FYI, Amit pointed out that the correct name for this test would be PCI device assignment, so the final version of this patch will be called PCI device assignment instead. On Wed, Oct 14, 2009 at 9:08 AM, Lucas Meneghel Rodrigues l...@redhat.com wrote: Add a new PCI pass through test. It supports

[PATCH] v4: allow userspace to adjust kvmclock offset

2009-10-14 Thread Glauber Costa
When we migrate a kvm guest that uses pvclock between two hosts, we may suffer a large skew. This is because there can be significant differences between the monotonic clock of the hosts involved. When a new host with a much larger monotonic time starts running the guest, the view of time will be
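
The skew arises because kvmclock is derived from the host's monotonic clock, which starts at host boot; migrating to a host with a different uptime shifts the guest's view of time by the difference in uptimes. The adjustment the patch enables is simple arithmetic (the numbers below are illustrative):

```python
# Guest time = host monotonic clock + a per-VM offset that userspace can adjust.
def guest_clock(host_monotonic, offset):
    return host_monotonic + offset

old_host = 5_000.0    # source host: up for ~83 minutes
new_host = 90_000.0   # destination host: up for ~25 hours
offset = 120.0        # whatever offset the VM was started with

before = guest_clock(old_host, offset)

# On migration, userspace reads the guest clock on the source and sets the
# destination's offset so that the guest observes no jump:
new_offset = before - new_host
after = guest_clock(new_host, new_offset)
# guest time is continuous across the migration (after == before)
```

Without the adjustment, the guest in this example would see its clock jump forward by 85,000 seconds the moment it resumes on the new host.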

Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Javier Guerra
On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivity a...@redhat.com wrote: Early implementations of virtio devices did not support barrier operations, but did commit the data to disk. In such cases, drain the queue to emulate barrier operations. Would this help in the (I think common) situation with

Re: [Qemu-devel] Re: Release plan for 0.12.0

2009-10-14 Thread Jamie Lokier
Michael S. Tsirkin wrote: On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote: Michael S. Tsirkin wrote: Looks like Or has abandoned it. I have an updated version which works with new APIs, etc. Let me post it and we'll go from there. I'm generally inclined to oppose

Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Michael Tokarev
Avi Kivity wrote: Early implementations of virtio devices did not support barrier operations, but did commit the data to disk. In such cases, drain the queue to emulate barrier operations. Are there any implementations currently that actually support barriers? As far as I remember there's no

Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Christoph Hellwig
On Wed, Oct 14, 2009 at 07:38:45PM +0400, Michael Tokarev wrote: Avi Kivity wrote: Early implementations of virtio devices did not support barrier operations, but did commit the data to disk. In such cases, drain the queue to emulate barrier operations. Are there any implementation

Re: [Qemu-devel] Re: Release plan for 0.12.0

2009-10-14 Thread Michael S. Tsirkin
On Wed, Oct 14, 2009 at 04:19:17PM +0100, Jamie Lokier wrote: Michael S. Tsirkin wrote: On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote: Michael S. Tsirkin wrote: Looks like Or has abandoned it. I have an updated version which works with new APIs, etc. Let me post it

Latest -git qemu-kvm doesn't boot an x86 kernel

2009-10-14 Thread Aneesh Kumar K.V
Hi, I am trying qemu-system-x86_64 on an x86 host running a 2.6.30-2 (debian testing) kernel and trying to boot the latest linus git kernel (x86). The kernel hangs after printing the below: [ 4.394392] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 4.397837] virtio-pci :00:03.0: PCI INT A

[PATCH 0/3] get rid of kvm vcpu structure

2009-10-14 Thread Glauber Costa
Hello, Done in three parts, the following patches get rid of the vcpu structure in qemu-kvm. All state is now held in CPUState, getting us a bit closer to upstream qemu again. The last pass converts us to the use of kvm_vcpu_ioctl, allowing more code to be shared. -- To unsubscribe from this

[PATCH 1/3] change function signatures so that they don't take a vcpu argument

2009-10-14 Thread Glauber Costa
At this point, vcpu arguments are passed only for the fd field. We already provide that in env, as kvm_fd. Replace it. Signed-off-by: Glauber Costa glom...@redhat.com --- cpu-defs.h |1 - hw/apic.c |4 +- kvm-tpr-opt.c | 16 +- qemu-kvm-x86.c | 91

[PATCH 2/3] get rid of vcpu structure

2009-10-14 Thread Glauber Costa
We have no use for it anymore. The only trace of it was in vcpu_create. Make it disappear. Signed-off-by: Glauber Costa glom...@redhat.com --- qemu-kvm.c | 11 +++ qemu-kvm.h | 5 - 2 files changed, 3 insertions(+), 13 deletions(-) diff --git a/qemu-kvm.c b/qemu-kvm.c index

[PATCH 3/3] use upstream kvm_vcpu_ioctl

2009-10-14 Thread Glauber Costa
Signed-off-by: Glauber Costa glom...@redhat.com --- kvm-all.c |3 --- qemu-kvm-x86.c | 20 ++-- qemu-kvm.c | 26 +- qemu-kvm.h |1 + 4 files changed, 24 insertions(+), 26 deletions(-) diff --git a/kvm-all.c b/kvm-all.c index

Re: [PATCH] virtio-blk: fallback to draining the queue if barrier ops are not supported

2009-10-14 Thread Avi Kivity
On 10/14/2009 11:46 PM, Javier Guerra wrote: On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivitya...@redhat.com wrote: Early implementations of virtio devices did not support barrier operations, but did commit the data to disk. In such cases, drain the queue to emulate barrier operations.

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Avi Kivity
On 10/14/2009 10:41 PM, Christoph Hellwig wrote: But can't this be also implemented using QUEUE_ORDERED_DRAIN, and on the host side disabling the backing device write cache? I'm talking about cache=none, primarily. Yes, it could. But as I found out in a long discussion with Stephen it's

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Christoph Hellwig
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote: Does virtio say it has a write cache or not (and how does one say it?)? Historically it didn't, and the only safe way to use virtio was in cache=writethrough mode. Since qemu git as of September 4th and Linux 2.6.32-rc there is a

Re: kernel bug in kvm_intel

2009-10-14 Thread Avi Kivity
On 10/13/2009 11:04 PM, Andrew Theurer wrote: Look at the address where vmx_vcpu_run starts, add 0x26d, and show the surrounding code. Thinking about it, it probably _is_ what you showed, due to module page alignment. But please verify this; I can't reconcile the fault address

Re: [Qemu-devel] [STABLE PATCH] hotplug: fix scsi hotplug.

2009-10-14 Thread Dustin Kirkland
On Wed, Oct 14, 2009 at 8:30 AM, Gerd Hoffmann kra...@redhat.com wrote: Well, partly just papering over the issues. But without proper scsi bus infrastructure we can hardly do better. Changes: * Avoid auto-attach by setting the bus number to -1. * Ignore the unit value calculated by

[PATCH] kvm: fix MSR_COUNT for kvm_arch_save_regs()

2009-10-14 Thread Eduardo Habkost
A new register was added to the load/save list on commit d283d5a65a2bdcc570065267be21848bd6fe3d78, but MSR_COUNT was not updated, leading to potential stack corruption on kvm_arch_save_regs(). The following registers are saved by kvm_arch_save_regs(): 1) MSR_IA32_SYSENTER_CS 2)
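
The bug pattern is generic: a buffer sized by a constant, and a writer that appends one more entry than the constant allows. In C the extra write silently overruns the on-stack array; the Python analogue of the same mistake at least fails loudly (the names below are illustrative, not the actual qemu-kvm identifiers):

```python
MSR_COUNT = 3                       # forgot to bump this when a register was added
msrs = [None] * MSR_COUNT           # fixed-size buffer, like the on-stack C array

# the save routine now writes one more register than the buffer holds
registers = ["SYSENTER_CS", "SYSENTER_ESP", "SYSENTER_EIP", "NEWLY_ADDED_MSR"]

overflowed = False
for i, reg in enumerate(registers):
    try:
        msrs[i] = reg               # index 3 is out of bounds: the "corruption"
    except IndexError:
        overflowed = True
```

The fix is simply keeping the constant in sync with the number of entries written (here, `MSR_COUNT = len(registers)`), which is exactly what the patch does for kvm_arch_save_regs().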

Re: [PATCHv2 1/2] Complete cpu initialization before signaling main thread.

2009-10-14 Thread Marcelo Tosatti
On Wed, Oct 14, 2009 at 03:52:31PM +0200, Gleb Natapov wrote: Otherwise some cpus may start executing code before others are fully initialized. Signed-off-by: Gleb Natapov g...@redhat.com Applied both, thanks. -- To unsubscribe from this list: send the line unsubscribe kvm in the body of a

Re: [PATCH] v4: allow userspace to adjust kvmclock offset

2009-10-14 Thread Marcelo Tosatti
On Wed, Oct 14, 2009 at 10:47:46AM -0400, Glauber Costa wrote: When we migrate a kvm guest that uses pvclock between two hosts, we may suffer a large skew. This is because there can be significant differences between the monotonic clock of the hosts involved. When a new host with a much larger

Re: Added VM Exit on RDTSC, trouble handling in userspace

2009-10-14 Thread Marcelo Tosatti
On Tue, Oct 13, 2009 at 10:51:48PM -0700, Kurt Kiefer wrote: Hi all, In short, I have a need for trapping RDTSC with a VM Exit and this works, but I'm having trouble handling it in userspace. I have added the hooks I need (I only care about

Re: [PATCH] v4: allow userspace to adjust kvmclock offset

2009-10-14 Thread Glauber Costa
On Wed, Oct 14, 2009 at 03:53:27PM -0300, Marcelo Tosatti wrote: On Wed, Oct 14, 2009 at 10:47:46AM -0400, Glauber Costa wrote: When we migrate a kvm guest that uses pvclock between two hosts, we may suffer a large skew. This is because there can be significant differences between the

Re: [PATCH][RFC] Xen PV-on-HVM guest support

2009-10-14 Thread Ed Swierk
Thanks for the feedback; I'll post a new version shortly. On Tue, Oct 13, 2009 at 11:45 PM, Jan Kiszka jan.kis...@web.de wrote: Interesting stuff. How usable is your work at this point? I've no immediate demand, but the question if one could integrate Xen guests with KVM already popped up more

Re: Latest -git qemu-kvm doesn't boot an x86 kernel

2009-10-14 Thread Marcelo Tosatti
On Wed, Oct 14, 2009 at 09:23:43PM +0530, Aneesh Kumar K.V wrote: Hi, I am trying qemu-system-x86_64 on a x86 host running 2.6.30-2 (debian testing) kernel and trying to boot latest linus git kernel (x86). The kernel hang after printing the below [ 4.394392] ACPI: PCI Interrupt Link

Re: [Qemu-devel] Re: Release plan for 0.12.0

2009-10-14 Thread Sridhar Samudrala
On Wed, 2009-10-14 at 17:50 +0200, Michael S. Tsirkin wrote: On Wed, Oct 14, 2009 at 04:19:17PM +0100, Jamie Lokier wrote: Michael S. Tsirkin wrote: On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote: Michael S. Tsirkin wrote: Looks like Or has abandoned it. I have an

Re: Add a qemu interface for sharing memory between guests.

2009-10-14 Thread Cam Macdonell
On Mon, Oct 12, 2009 at 2:55 AM, Avi Kivity a...@redhat.com wrote: On 10/12/2009 08:53 AM, Sivaram Kannan wrote: Hi all, I am a KVM newbie and I picked up the following task from the TODO of the KVM wiki. Add a qemu interface for sharing memory between guests. Using a pci device to

Raw vs. tap (was: Re: [Qemu-devel] Re: Release plan for 0.12.0)

2009-10-14 Thread Anthony Liguori
Sridhar Samudrala wrote: Can't we bind the raw socket to the tap interface instead of the physical interface and allow the bridge config to work. But why use the raw interface instead of tap directly. Let me summarize the discussion so far: Raw sockets Pros: o User specifies a network

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Anthony Liguori
Christoph Hellwig wrote: On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote: Does virtio say it has a write cache or not (and how does one say it?)? Historically it didn't and the only safe way to use virtio was in cache=writethrough mode. Which should be the default on

[PATCH] kvm: Prevent kvm_init from corrupting debugfs structures

2009-10-14 Thread Darrick J. Wong
I'm seeing an oops condition when kvm-intel and kvm-amd are modprobe'd during boot (say on an Intel system) and then rmmod'd: # modprobe kvm-intel kvm_init() kvm_init_debug() kvm_arch_init() -- stores debugfs dentries internally (success, etc) # modprobe kvm-amd
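
The failure mode described is two module loads each running kvm_init(): the first succeeds and registers debugfs entries, while the second partially initializes, fails (it is the wrong vendor module for the hardware), and its error path tears down state the first instance still owns. The guard idea can be shown with a toy model (illustrative only, not the actual kernel fix):

```python
# Toy model: global state that must survive a failed second initialization.
registry = {}

def kvm_style_init(name):
    # A second initialization must not clobber state owned by the first;
    # reject it before it can register (or unregister) anything.
    if registry:
        raise RuntimeError("already initialized")
    registry["debugfs_owner"] = name

kvm_style_init("kvm-intel")
try:
    kvm_style_init("kvm-amd")   # second modprobe: rejected up front
except RuntimeError:
    pass
# the first instance's debugfs state is left intact
```

The key property is that the error path of a failed init only undoes work that this particular init actually did.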

Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Avi Kivity
On 10/15/2009 07:54 AM, Anthony Liguori wrote: Christoph Hellwig wrote: On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote: Does virtio say it has a write cache or not (and how does one say it?)? Historically it didn't and the only safe way to use virtio was in cache=writethrough

Re: [PATCH] allow userspace to adjust kvmclock offset

2009-10-14 Thread Avi Kivity
On 10/13/2009 09:46 PM, Glauber Costa wrote: On Tue, Oct 13, 2009 at 03:31:08PM +0300, Avi Kivity wrote: On 10/13/2009 03:28 PM, Glauber Costa wrote: Do we want an absolute or relative adjustment? What exactly do you mean? Absolute adjustment: clock = t

buildbot failure in qemu-kvm on default_i386_out_of_tree

2009-10-14 Thread qemu-kvm
The Buildbot has detected a new failure of default_i386_out_of_tree on qemu-kvm. Full details are available at: http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_out_of_tree/builds/51 Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/ Buildslave for this Build: b1_qemu_kvm_2

buildbot failure in qemu-kvm on default_i386_debian_5_0

2009-10-14 Thread qemu-kvm
The Buildbot has detected a new failure of default_i386_debian_5_0 on qemu-kvm. Full details are available at: http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_debian_5_0/builds/114 Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/ Buildslave for this Build: b1_qemu_kvm_2

buildbot failure in qemu-kvm on default_x86_64_debian_5_0

2009-10-14 Thread qemu-kvm
The Buildbot has detected a new failure of default_x86_64_debian_5_0 on qemu-kvm. Full details are available at: http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/112 Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/ Buildslave for this Build: b1_qemu_kvm_1

buildbot failure in qemu-kvm on default_x86_64_out_of_tree

2009-10-14 Thread qemu-kvm
The Buildbot has detected a new failure of default_x86_64_out_of_tree on qemu-kvm. Full details are available at: http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_out_of_tree/builds/53 Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/ Buildslave for this Build: b1_qemu_kvm_1

Can't make virtio block driver work on Windows 2003

2009-10-14 Thread Asdo
Hi all, I have a new installation of Windows 2003 SBS Server 32-bit, which I installed using an IDE disk. The KVM version is QEMU PC emulator version 0.10.50 (qemu-kvm-devel-86), compiled by myself on kernel 2.6.28-11-server. I have already moved networking from e1000 to virtio (e1000 was performing

Re: Latest -git qemu-kvm doesn't boot an x86 kernel

2009-10-14 Thread Aneesh Kumar K.V
On Wed, Oct 14, 2009 at 04:54:35PM -0300, Marcelo Tosatti wrote: On Wed, Oct 14, 2009 at 09:23:43PM +0530, Aneesh Kumar K.V wrote: Hi, I am trying qemu-system-x86_64 on a x86 host running 2.6.30-2 (debian testing) kernel and trying to boot latest linus git kernel (x86). The kernel

Re: Can't make virtio block driver work on Windows 2003

2009-10-14 Thread Vadim Rozenfeld
On 10/14/2009 07:52 PM, Asdo wrote: Hi all I have a new installation of Windows 2003 SBS server 32bit which I installed using IDE disk. KVM version is QEMU PC emulator version 0.10.50 (qemu-kvm-devel-86) compiled by myself on kernel 2.6.28-11-server. I have already moved networking from

[PATCH] Xen PV-on-HVM guest support (v2)

2009-10-14 Thread Ed Swierk
Support for Xen PV-on-HVM guests can be implemented almost entirely in userspace, except for handling one annoying MSR that maps a Xen hypercall blob into guest address space. A generic mechanism to delegate MSR writes to userspace seems overkill and risks encouraging similar MSR abuse in the

Re: linux-next: tree build failure

2009-10-14 Thread Hollis Blanchard
On Fri, 2009-10-09 at 12:14 -0700, Hollis Blanchard wrote: Rusty's version of BUILD_BUG_ON() does indeed fix the build break, and also exposes the bug in kvmppc_account_exit_stat(). So to recap: original: built but didn't work; Jan's: doesn't build; Rusty's: builds and works. Where do you