Hi,
I'm not able to get MenuetOS booting while using the kvm modules. QEMU
hangs on floppy boot. Here is the procedure:
cd /tmp
wget -c 'http://www.menuetos.be/download.php?CurrentMenuetOS' -O menuetos.zip
unzip -u menuetos.zip
qemu-kvm -m 512 -fda M64-*.IMG -boot a
I have an Intel i7 920; I use
Ed Swierk wrote:
As we discussed a while back, support for Xen PV-on-HVM guests can be
implemented almost entirely in userspace, except for handling one
annoying MSR that maps a Xen hypercall blob into guest address space.
A generic mechanism to delegate MSR writes to userspace seems
This adds the required bits to retrieve and set the so-far hidden NMI
pending and NMI masked states on the KVM kernel side. It also extends
the CPU VMState for proper saving/restoring. We can now safely reset a VM
while NMIs are in flight, and we can also live migrate, etc.
Fortunately, the
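For illustration, a minimal userspace sketch of reading that hidden state, assuming the KVM_GET_VCPU_EVENTS interface (with its nmi.pending and nmi.masked fields) is available; this may not be the exact kernel interface the patch builds on:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: fetch the NMI pending/masked bits for one vcpu so they can be
 * stored in the CPU VMState and written back after a reset or on the
 * migration target. Assumes KVM_CAP_VCPU_EVENTS; error handling omitted. */
static int get_nmi_state(int vcpu_fd, struct kvm_vcpu_events *ev)
{
    if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, ev) < 0)
        return -1;
    printf("nmi pending=%d masked=%d\n", ev->nmi.pending, ev->nmi.masked);
    return 0;
}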
This test suspends a guest OS to disk; it supports both Linux and Windows.
Signed-off-by: Ken Cao k...@redhat.com
Signed-off-by: Yolkfull Chow yz...@redhat.com
---
client/tests/kvm/kvm_tests.cfg.sample | 16
client/tests/kvm/tests/guest_s4.py | 66 +
2
On Tue, Oct 13, 2009 at 11:54 PM, Yolkfull Chow yz...@redhat.com wrote:
On Tue, Oct 13, 2009 at 05:29:40PM -0300, Lucas Meneghel Rodrigues wrote:
Hi Yolkfull and Chen:
Thanks for your test! I have some comments and doubts to clear; most
of them are about the content of the messages delivered for
On Wed, Oct 14, 2009 at 06:58:01AM -0300, Lucas Meneghel Rodrigues wrote:
On Tue, Oct 13, 2009 at 11:54 PM, Yolkfull Chow yz...@redhat.com wrote:
On Tue, Oct 13, 2009 at 05:29:40PM -0300, Lucas Meneghel Rodrigues wrote:
Hi Yolkfull and Chen:
Thanks for your test! I have some comments and
OK, looks good. Committed as
http://autotest.kernel.org/changeset/3844
On Mon, Oct 12, 2009 at 11:36 PM, Cao, Chen k...@redhat.com wrote:
Since os.rename requires that the file be on the same partition as
the destination directory, we would get a Python OSError if the result
directory is mounted to
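The OSError comes from the underlying rename(2) syscall, which fails with EXDEV when source and destination sit on different filesystems; shutil.move falls back to copy-then-delete for exactly this case. A small C sketch of the same fallback (illustrative only, not the autotest fix):

#include <errno.h>
#include <stdio.h>

/* Copy src to dst, then remove src; stands in for the cross-filesystem
 * fallback that a plain rename(2) cannot provide. */
static int copy_then_delete(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out;
    char buf[4096];
    size_t n;

    if (!in)
        return -1;
    out = fopen(dst, "wb");
    if (!out) {
        fclose(in);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return remove(src);
}

int move_file(const char *src, const char *dst)
{
    if (rename(src, dst) == 0)
        return 0;
    if (errno != EXDEV)              /* some other failure: report it */
        return -1;
    return copy_then_delete(src, dst);
}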
Hi Amos, thanks for the patch; here are my comments (pretty much
concerning only coding style):
On Wed, Sep 23, 2009 at 8:19 AM, Amos Kong ak...@redhat.com wrote:
Test 802.1Q VLAN of the NIC, configuring it with the vconfig command.
1) Create two VMs
2) Set up the guests in different VLANs with vconfig and test
On 10/14/2009 07:37 AM, Christoph Hellwig wrote:
Christoph, wasn't there a bug where the guest didn't wait for requests
in response to a barrier request?
Can't remember anything like that. The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising
I understand. However, the test itself is a fairly trivial
representation of a single-tier, high-transaction load system (i.e.
a system that is logging a large number of events).
The Phoronix test suite simply hands over to a binary using SQLite and
does 25000 sequential inserts. The overhead of
Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk. In such cases, drain the queue to emulate
barrier operations.
Signed-off-by: Avi Kivity a...@redhat.com
---
drivers/block/virtio_blk.c | 6 +-
1 files changed, 5 insertions(+), 1
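A rough sketch of the idea, assuming the 2.6.31-era block-layer API where blk_queue_ordered() selects the ordering mode (this is the shape of the change inside virtblk_probe(), not the literal patch):

	/* If the device advertises real barrier support, keep using ordered
	 * tags; otherwise drain the queue before ordered requests, which is
	 * enough because these early hosts did commit data to disk. */
	if (virtio_has_feature(vdev, VIRTIO_BLK_F_BARRIER))
		blk_queue_ordered(vblk->disk->queue, QUEUE_ORDERED_TAG, NULL);
	else
		blk_queue_ordered(vblk->disk->queue, QUEUE_ORDERED_DRAIN, NULL);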
Yolkfull, I studied single root I/O virtualization before
reviewing your patch; the general approach here looks good. There were
some stylistic points as far as code is concerned, so I have rebased
your patch against the latest trunk, and added some explanation about
the features being
On Thursday 08 October 2009, Anthony Liguori wrote:
Jens Osterkamp wrote:
On Wednesday 30 September 2009, Anthony Liguori wrote:
Please add to this list and I'll collect it all and post it somewhere.
What about Or Gerlitz's raw backend driver? I did not see it go in yet, or
On Thu, Oct 08, 2009 at 09:21:04AM -0500, Anthony Liguori wrote:
Jens Osterkamp wrote:
On Wednesday 30 September 2009, Anthony Liguori wrote:
o VMState conversion -- I expect most of the pc target to be completed
o qdev conversion -- I hope that we'll get most of the pc target
On Wed, Oct 14, 2009 at 08:03:41PM +0900, Avi Kivity wrote:
Can't remember anything like that. The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising a
volatile write cache on IDE.
By complete flush infrastructure, you mean host-side and
Otherwise some cpus may start executing code before others
are fully initialized.
Signed-off-by: Gleb Natapov g...@redhat.com
---
v1-v2:
- reinit cpu_single_env after qemu_cond_wait()
qemu-kvm.c | 29 +++--
1 files changed, 15 insertions(+), 14 deletions(-)
diff
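The synchronization itself is simple; a self-contained pthreads sketch of the idea (plain pthreads rather than the qemu-kvm primitives, and ignoring cpu_single_env handling):

#include <pthread.h>

static pthread_mutex_t vcpu_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  all_created = PTHREAD_COND_INITIALIZER;
static int vcpus_created;

/* Each vcpu thread calls this before touching guest state; nobody
 * proceeds until every vcpu thread has checked in, so no cpu starts
 * executing code while the others are still being initialized. */
void wait_for_all_vcpus(int total)
{
    pthread_mutex_lock(&vcpu_lock);
    if (++vcpus_created == total)
        pthread_cond_broadcast(&all_created);
    while (vcpus_created < total)
        pthread_cond_wait(&all_created, &vcpu_lock);
    pthread_mutex_unlock(&vcpu_lock);
}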
mp_state, unlike other cpu state, can be changed not only from the vcpu
context it belongs to, but by other vcpus too. That makes loading it from
the kernel and saving it back unsafe if the mp_state value is changed inside
the kernel between the load and the save. For example, vcpu 1 loads mp_state into
user space and the
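A sketch of the unsafe load/save pattern being described, using the standard KVM_GET_MP_STATE/KVM_SET_MP_STATE ioctls (error handling omitted):

#include <sys/ioctl.h>
#include <linux/kvm.h>

void sync_mp_state_unsafe(int vcpu_fd)
{
    struct kvm_mp_state mp;

    ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp);  /* copy into user space */
    /* If another vcpu changes this vcpu's mp_state inside the kernel at
     * this point (e.g. by delivering INIT/SIPI), the change is lost when
     * the stale copy is written back below. */
    ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp);
}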
Arnd Bergmann wrote:
There are two reasons why I think this backend is important:
- As an easy way to provide isolation between guests (private ethernet
port aggregator, PEPA) and external enforcement of network privileges
(virtual ethernet port aggregator, VEPA) using the macvlan
On Wed, Oct 14, 2009 at 08:53:55AM -0500, Anthony Liguori wrote:
Arnd Bergmann wrote:
There are two reasons why I think this backend is important:
- As an easy way to provide isolation between guests (private ethernet
port aggregator, PEPA) and external enforcement of network privileges
On Wed, Oct 14, 2009 at 03:09:28PM +0200, Arnd Bergmann wrote:
On Thursday 08 October 2009, Anthony Liguori wrote:
Jens Osterkamp wrote:
On Wednesday 30 September 2009, Anthony Liguori wrote:
Please add to this list and I'll collect it all and post it somewhere.
What
Michael S. Tsirkin wrote:
Looks like Or has abandoned it. I have an updated version which works
with new APIs, etc. Let me post it and we'll go from there.
I'm generally inclined to oppose the functionality as I don't think it
offers any advantages over the existing backends.
I
On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote:
Michael S. Tsirkin wrote:
Looks like Or has abandoned it. I have an updated version which works
with new APIs, etc. Let me post it and we'll go from there.
I'm generally inclined to oppose the functionality as I don't
FYI, Amit pointed out that the correct name for this test would be PCI
device assignment, so the final version of this patch will be called
PCI device assignment instead.
On Wed, Oct 14, 2009 at 9:08 AM, Lucas Meneghel Rodrigues
l...@redhat.com wrote:
Add a new PCI passthrough test. It supports
When we migrate a kvm guest that uses pvclock between two hosts, we may
suffer a large skew. This is because there can be significant differences
between the monotonic clock of the hosts involved. When a new host with
a much larger monotonic time starts running the guest, the view of time
will be
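One way to picture the fix, as a hedged sketch rather than the actual patch: record the guest-visible clock when the source host stops the guest, and re-base it on the destination's monotonic clock at resume, so the guest never sees the raw difference between the two hosts' clocks.

#include <stdint.h>

/* Saved on the source host at migration time. */
struct clock_snapshot {
    uint64_t guest_ns;   /* guest-visible kvmclock reading */
};

/* On the destination, the guest clock continues from the saved value
 * plus the time elapsed on the destination since resume, instead of
 * jumping by (destination monotonic - source monotonic). */
static uint64_t guest_clock_now(const struct clock_snapshot *snap,
                                uint64_t dst_now_ns, uint64_t dst_resume_ns)
{
    return snap->guest_ns + (dst_now_ns - dst_resume_ns);
}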
On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivity a...@redhat.com wrote:
Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk. In such cases, drain the queue to emulate
barrier operations.
would this help in the (I think common) situation with
Michael S. Tsirkin wrote:
On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote:
Michael S. Tsirkin wrote:
Looks like Or has abandoned it. I have an updated version which works
with new APIs, etc. Let me post it and we'll go from there.
I'm generally inclined to oppose
Avi Kivity wrote:
Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk. In such cases, drain the queue to emulate
barrier operations.
Are there any implementations currently that actually support barriers?
As far as I remember there's no
On Wed, Oct 14, 2009 at 07:38:45PM +0400, Michael Tokarev wrote:
Avi Kivity wrote:
Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk. In such cases, drain the queue to emulate
barrier operations.
Are there any implementations
On Wed, Oct 14, 2009 at 04:19:17PM +0100, Jamie Lokier wrote:
Michael S. Tsirkin wrote:
On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote:
Michael S. Tsirkin wrote:
Looks like Or has abandoned it. I have an updated version which works
with new APIs, etc. Let me post it
Hi,
I am trying qemu-system-x86_64 on an x86 host running a 2.6.30-2 (Debian testing)
kernel and trying to boot the latest Linus git kernel (x86). The kernel hangs
after printing the below:
[ 4.394392] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[    4.397837] virtio-pci 0000:00:03.0: PCI INT A
Hello,
Done in three parts, the following patches get rid of vcpu structure in
qemu-kvm.
All state is now held in CPUState, getting us a bit closer to upstream qemu
again.
The last pass converts us to the use of kvm_vcpu_ioctl, allowing more code to
be shared.
At this point, vcpu arguments are passed only for the fd field.
We already provide that in env, as kvm_fd. Replace it.
Signed-off-by: Glauber Costa glom...@redhat.com
---
cpu-defs.h | 1 -
hw/apic.c | 4 +-
kvm-tpr-opt.c | 16 +-
qemu-kvm-x86.c | 91
We have no use for it anymore. The only trace of it was in vcpu_create.
Make it disappear.
Signed-off-by: Glauber Costa glom...@redhat.com
---
qemu-kvm.c | 11 +++
qemu-kvm.h | 5 -
2 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/qemu-kvm.c b/qemu-kvm.c
index
Signed-off-by: Glauber Costa glom...@redhat.com
---
kvm-all.c | 3 ---
qemu-kvm-x86.c | 20 ++--
qemu-kvm.c | 26 +-
qemu-kvm.h | 1 +
4 files changed, 24 insertions(+), 26 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
index
On 10/14/2009 11:46 PM, Javier Guerra wrote:
On Wed, Oct 14, 2009 at 7:03 AM, Avi Kivitya...@redhat.com wrote:
Early implementations of virtio devices did not support barrier operations,
but did commit the data to disk. In such cases, drain the queue to emulate
barrier operations.
On 10/14/2009 10:41 PM, Christoph Hellwig wrote:
But can't this also be implemented using QUEUE_ORDERED_DRAIN, and on the
host side disabling the backing device write cache? I'm talking about
cache=none, primarily.
Yes, it could. But as I found out in a long discussion with Stephen
it's
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
Does virtio say it has a write cache or not (and how does one say it?)?
Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode. Since qemu git as of 4th September and Linux
2.6.32-rc there is a
On 10/13/2009 11:04 PM, Andrew Theurer wrote:
Look at the address where vmx_vcpu_run starts, add 0x26d, and show the
surrounding code.
Thinking about it, it probably _is_ what you showed, due to module page
alignment. But please verify this; I can't reconcile the fault address
On Wed, Oct 14, 2009 at 8:30 AM, Gerd Hoffmann kra...@redhat.com wrote:
Well, partly just papering over the issues. But without proper SCSI bus
infrastructure we can hardly do better. Changes:
* Avoid auto-attach by setting the bus number to -1.
* Ignore the unit value calculated by
A new register was added to the load/save list on commit
d283d5a65a2bdcc570065267be21848bd6fe3d78, but MSR_COUNT was not updated, leading
to potential stack corruption on kvm_arch_save_regs().
The following registers are saved by kvm_arch_save_regs():
1) MSR_IA32_SYSENTER_CS
2)
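A self-contained sketch of the bug pattern, with hypothetical names and values rather than the actual qemu-kvm code: the MSR list gained an entry, but the constant sizing the on-stack buffer did not.

#include <stdint.h>
#include <stddef.h>

#define MSR_COUNT 4                         /* stale: the list now has 5 */

static const uint32_t msr_index[] = {
    0x174, 0x175, 0x176,                    /* SYSENTER CS/ESP/EIP */
    0xc0000081,                             /* STAR */
    0xc0010117,                             /* a newly added MSR (illustrative) */
};

struct msr_entry { uint32_t index; uint64_t data; };

void save_regs_sketch(uint64_t (*read_msr)(uint32_t index))
{
    struct msr_entry msrs[MSR_COUNT];       /* 4 slots for 5 MSRs */
    size_t i;

    for (i = 0; i < sizeof(msr_index) / sizeof(msr_index[0]); i++) {
        msrs[i].index = msr_index[i];       /* i == 4 writes past the end,  */
        msrs[i].data  = read_msr(msr_index[i]); /* corrupting the stack     */
    }
}

Sizing the array from the list itself (ARRAY_SIZE-style) or bumping MSR_COUNT whenever the list grows removes the mismatch.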
On Wed, Oct 14, 2009 at 03:52:31PM +0200, Gleb Natapov wrote:
Otherwise some cpus may start executing code before others
are fully initialized.
Signed-off-by: Gleb Natapov g...@redhat.com
Applied both, thanks.
On Wed, Oct 14, 2009 at 10:47:46AM -0400, Glauber Costa wrote:
When we migrate a kvm guest that uses pvclock between two hosts, we may
suffer a large skew. This is because there can be significant differences
between the monotonic clock of the hosts involved. When a new host with
a much larger
On Tue, Oct 13, 2009 at 10:51:48PM -0700, Kurt Kiefer wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi all,
In short, I have a need for trapping RDTSC with a VM Exit and this
works, but I'm having trouble handling it in userspace. I have added the
hooks I need (I only care about
On Wed, Oct 14, 2009 at 03:53:27PM -0300, Marcelo Tosatti wrote:
On Wed, Oct 14, 2009 at 10:47:46AM -0400, Glauber Costa wrote:
When we migrate a kvm guest that uses pvclock between two hosts, we may
suffer a large skew. This is because there can be significant differences
between the
Thanks for the feedback; I'll post a new version shortly.
On Tue, Oct 13, 2009 at 11:45 PM, Jan Kiszka jan.kis...@web.de wrote:
Interesting stuff. How usable is your work at this point? I've no
immediate demand, but the question of whether one could integrate Xen guests
with KVM has already popped up more
On Wed, Oct 14, 2009 at 09:23:43PM +0530, Aneesh Kumar K.V wrote:
Hi,
I am trying qemu-system-x86_64 on an x86 host running a 2.6.30-2 (Debian testing)
kernel and trying to boot the latest Linus git kernel (x86). The kernel hangs
after printing the below:
[ 4.394392] ACPI: PCI Interrupt Link
On Wed, 2009-10-14 at 17:50 +0200, Michael S. Tsirkin wrote:
On Wed, Oct 14, 2009 at 04:19:17PM +0100, Jamie Lokier wrote:
Michael S. Tsirkin wrote:
On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote:
Michael S. Tsirkin wrote:
Looks like Or has abandoned it. I have an
On Mon, Oct 12, 2009 at 2:55 AM, Avi Kivity a...@redhat.com wrote:
On 10/12/2009 08:53 AM, Sivaram Kannan wrote:
Hi all,
I am a KVM newbie and I picked up the following task from the TODO of the
KVM wiki.
Add a qemu interface for sharing memory between guests. Using a pci device
to
Sridhar Samudrala wrote:
Can't we bind the raw socket to the tap interface instead of the
physical interface and allow the bridge config to work?
But why use the raw interface instead of tap directly?
Let me summarize the discussion so far:
Raw sockets
Pros:
o User specifies a network
Christoph Hellwig wrote:
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
Does virtio say it has a write cache or not (and how does one say it?)?
Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.
Which should be the default on
I'm seeing an oops condition when kvm-intel and kvm-amd are modprobe'd
during boot (say on an Intel system) and then rmmod'd:
# modprobe kvm-intel
kvm_init()
kvm_init_debug()
kvm_arch_init() -- stores debugfs dentries internally
(success, etc)
# modprobe kvm-amd
On 10/15/2009 07:54 AM, Anthony Liguori wrote:
Christoph Hellwig wrote:
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
Does virtio say it has a write cache or not (and how does one say it?)?
Historically it didn't and the only safe way to use virtio was in
cache=writethrough
On 10/13/2009 09:46 PM, Glauber Costa wrote:
On Tue, Oct 13, 2009 at 03:31:08PM +0300, Avi Kivity wrote:
On 10/13/2009 03:28 PM, Glauber Costa wrote:
Do we want an absolute or relative adjustment?
What exactly do you mean?
Absolute adjustment: clock = t
The Buildbot has detected a new failure of default_i386_out_of_tree on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_out_of_tree/builds/51
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
The Buildbot has detected a new failure of default_i386_debian_5_0 on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_debian_5_0/builds/114
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
The Buildbot has detected a new failure of default_x86_64_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/112
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_1
The Buildbot has detected a new failure of default_x86_64_out_of_tree on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_out_of_tree/builds/53
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_1
Hi all
I have a new installation of Windows 2003 SBS Server 32-bit, which I
installed using an IDE disk.
KVM version is QEMU PC emulator version 0.10.50 (qemu-kvm-devel-86)
compiled by myself on kernel 2.6.28-11-server.
I have already moved networking from e1000 to virtio (e1000 was
performing
On Wed, Oct 14, 2009 at 04:54:35PM -0300, Marcelo Tosatti wrote:
On Wed, Oct 14, 2009 at 09:23:43PM +0530, Aneesh Kumar K.V wrote:
Hi,
I am trying qemu-system-x86_64 on an x86 host running a 2.6.30-2 (Debian testing)
kernel and trying to boot the latest Linus git kernel (x86). The kernel
On 10/14/2009 07:52 PM, Asdo wrote:
Hi all
I have a new installation of Windows 2003 SBS Server 32-bit, which I
installed using an IDE disk.
KVM version is QEMU PC emulator version 0.10.50 (qemu-kvm-devel-86)
compiled by myself on kernel 2.6.28-11-server.
I have already moved networking from
Support for Xen PV-on-HVM guests can be implemented almost entirely in
userspace, except for handling one annoying MSR that maps a Xen
hypercall blob into guest address space.
A generic mechanism to delegate MSR writes to userspace seems overkill
and risks encouraging similar MSR abuse in the
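For readers unfamiliar with the MSR in question, a rough sketch of what handling it involves, under the assumption that the guest writes the guest-physical address of a page to a Xen-defined MSR and expects the hypercall trampoline blob to appear in that page; gpa_to_hva() is a hypothetical translation helper and this is not Ed's patch:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: translate a guest-physical address into a host
 * pointer into the guest's memory. */
void *gpa_to_hva(uint64_t gpa);

/* Called when the guest writes the Xen hypercall-page MSR. */
void handle_hypercall_page_msr(uint64_t msr_value,
                               const uint8_t *blob, size_t blob_len)
{
    uint64_t gpa = msr_value & ~0xfffULL;   /* assume a page-aligned GPA */

    memcpy(gpa_to_hva(gpa), blob, blob_len);
}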
On Fri, 2009-10-09 at 12:14 -0700, Hollis Blanchard wrote:
Rusty's version of BUILD_BUG_ON() does indeed fix the build break, and
also exposes the bug in kvmppc_account_exit_stat(). So to recap:
original: built but didn't work
Jan's: doesn't build
Rusty's: builds and works
Where do you
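For reference, one common form of the macro and a usage sketch with hypothetical names (not the kvmppc code under discussion):

/* Compilation fails (negative array size) whenever cond is true. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

enum exit_type { EXIT_MMIO, EXIT_HALT, EXIT_MAX };
static const char *exit_names[] = { "mmio", "halt" };

static inline void check_exit_tables(void)
{
    /* Breaks the build as soon as the table and the enum drift apart,
     * instead of misbehaving silently at run time. */
    BUILD_BUG_ON(sizeof(exit_names) / sizeof(exit_names[0]) != EXIT_MAX);
}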