Signed-off-by: Yolkfull Chow yz...@redhat.com
---
client/tests/kvm/kvm_tests.cfg.sample |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/client/tests/kvm/kvm_tests.cfg.sample
b/client/tests/kvm/kvm_tests.cfg.sample
index ac9ef66..7f37994 100644
---
Hello,
On 11/01/2009 08:31 PM, Avi Kivity wrote:
Here is the code in question:
3ae7:  75 05                 jne    3aee <vmx_vcpu_run+0x26a>
3ae9:  0f 01 c2              vmlaunch
3aec:  eb 03                 jmp    3af1 <vmx_vcpu_run+0x26d>
Hi folks,
If I migrate a virtual machine (2.6.31.6, amd64) from a host with an
AMD CPU to an Intel host, the guest is terminated on the old
host as expected, but it gets stuck on the new host. Every 60 seconds
it prints a message on the virtual console saying:
BUG: soft lockup - CPU#0
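A common source of trouble in cross-vendor migration like the one described above is a guest-visible CPU feature that exists on the source host but not on the destination. As a hedged illustration (not a diagnosis of this particular report), one can diff the advertised feature flags of the two hosts; the flag strings below are shortened examples, not full /proc/cpuinfo output:

```python
# Illustrative sketch: find CPU feature flags present on the migration
# source host but absent on the destination, i.e. features the guest
# may be relying on that vanish after an AMD -> Intel move.

def missing_on_destination(src_flags: str, dst_flags: str) -> set:
    """Return flags present on the source host but absent on the destination."""
    return set(src_flags.split()) - set(dst_flags.split())

# Shortened, hypothetical flag strings for the two hosts.
amd_host = "fpu tsc msr pae sse sse2 ht syscall nx lm 3dnowext 3dnow svm"
intel_host = "fpu tsc msr pae sse sse2 ht syscall nx lm vmx"

risky = missing_on_destination(amd_host, intel_host)
print(sorted(risky))
```

In practice the usual mitigation is to start the guest with a restricted CPU model common to both hosts rather than exposing the full host CPU.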
Avi Kivity wrote:
On 11/17/2009 06:50 PM, Jan Kiszka wrote:
I think we're not on the same page here. As I see it, no interface
change is needed at all.
It's true that existing kernels don't handle this properly, which is why
I said I'm willing to treat it as a bug (and thus the -stable
Ok, applied, thanks!
On Wed, Nov 18, 2009 at 6:55 AM, Yolkfull Chow yz...@redhat.com wrote:
Signed-off-by: Yolkfull Chow yz...@redhat.com
---
client/tests/kvm/kvm_tests.cfg.sample | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git
Hi folks, I am resuming sending patch queue reports for the KVM test
framework. I will heavily base it on the latest autotest's patchwork
status. At any time you can see the status of the incoming patches by
checking
http://patchwork.test.kernel.org/project/autotest/list/
Summary: 4 patches
2009/11/17 Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp:
Avi Kivity wrote:
On 11/16/2009 04:18 PM, Fernando Luis Vázquez Cao wrote:
Avi Kivity wrote:
On 11/09/2009 05:53 AM, Fernando Luis Vázquez Cao wrote:
Kemari runs paired virtual machines in an active-passive configuration
and
Hi All,
This Weekly KVM Testing Report against latest kvm.git
cb59a30c417e6cb45f6522e550307dfe4f6e6836 and qemu-kvm.git
c04b2aebf50c7d8cba883b86d1b872ccfc8f2249.
The build failure issue is fixed, and the "VF IRQ source exhausted" error is fixed.
Live migration does not work in the latest commit.
One New
On 11/18/2009 11:50 AM, Jan Kiszka wrote:
INIT, too.
INIT should be handled by queuing up the next mp_state.
And clearing the previous queue; otherwise our queue is unbounded.
BTW, as we do not inject mp_state changes from user space during
runtime, the issue I saw with the
On 11/17/2009 06:58 PM, Jan Kiszka wrote:
That wouldn't be required anymore with the always queue policy.
Hmm, unless we need mp_state manipulation analogously to KVM_NMI vs.
KVM_SET_VCPU_STATE: The former will queue, the latter write, but may be
overwritten by anything queued. If you
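The policy being discussed — always queue the next mp_state, clearing any previously queued one so the queue stays bounded, while a direct write may still be overwritten by whatever is queued — can be sketched as a single-slot queue. This is a hedged illustration of the discussion, not the actual KVM code; the state names and class are invented for the example:

```python
# Minimal sketch of the "always queue, clear the previous entry" policy:
# at most one pending mp_state is kept, so the queue is bounded.

class VcpuStateQueue:
    def __init__(self, current="RUNNABLE"):
        self.current = current
        self.pending = None  # single slot: at most one queued state

    def queue(self, state):
        """Queue a state change, replacing any previously queued one."""
        self.pending = state

    def set_state(self, state):
        """Direct write (KVM_SET_VCPU_STATE-style); note that a queued
        event, if present, still overwrites it when processed."""
        self.current = state

    def process(self):
        """Consume the pending state, if any, at the next entry point."""
        if self.pending is not None:
            self.current = self.pending
            self.pending = None
        return self.current

q = VcpuStateQueue()
q.queue("INIT_RECEIVED")
q.queue("SIPI_RECEIVED")   # replaces INIT_RECEIVED; queue stays bounded
print(q.process())
```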
On 11/18/2009 03:28 PM, Yoshiaki Tamura wrote:
I don't think lmbench is intensive but it's sensitive to memory latency.
We'll measure kernel build time with minimum config, and post it later.
Here are some quick numbers of parallel kernel compile time.
The number of vcpu is 1, just for
ok,
Below is the patch. This time resent with mutt and hopefully without whitespace
changes.
This patch reduces the amount of memory being cleared on every virtio-blk IO
operation.
Improves the number of IOPS when using
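The actual virtio-blk patch is not included in this digest; as a hedged illustration of the general idea only (zero just the bytes a request actually needs, rather than the whole reusable buffer), with hypothetical sizes:

```python
# Illustrative sketch (not the actual virtio-blk patch): when a request
# buffer is reused, zero only the small header region each IO needs
# instead of clearing the entire buffer.

HEADER_LEN = 16          # hypothetical per-request header size
BUF_LEN = 128 * 1024     # hypothetical reusable request buffer size

buf = bytearray(b"\xff" * BUF_LEN)    # stale data left by a previous IO

def clear_full(buf):
    buf[:] = bytes(len(buf))          # old behavior: wipe everything

def clear_header_only(buf):
    buf[:HEADER_LEN] = bytes(HEADER_LEN)  # new behavior: wipe the header only

clear_header_only(buf)
print(buf[:HEADER_LEN] == bytes(HEADER_LEN))
```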
Almost there. Just a couple of minor things.
On Tue, Nov 17, 2009 at 5:04 PM, Lucas Meneghel Rodrigues
l...@redhat.com wrote:
In order to make it possible for the autotest client to use the
global_config.ini configuration files:
* Modified global_config.py to support verifying whether the
In order to make it possible for the autotest client to use the
global_config.ini configuration files:
* Modified global_config.py to support verifying whether the
configuration file is under autotest's root directory
or the client directory
* Extended the autotest run method to copy over
the
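The directory check described in that patch could look roughly like the sketch below. This is a hedged illustration, not the actual global_config.py code; the function name and paths are invented for the example:

```python
# Illustrative sketch: decide whether a global_config.ini lives under
# autotest's root directory or under the client directory. The most
# specific directory is checked first, since the client dir usually
# nests inside the root.
import os

def config_dir_for(path, autotest_root, client_dir):
    """Return which known directory the config file sits under, if any."""
    path = os.path.abspath(path)
    for base in (client_dir, autotest_root):      # most specific first
        base = os.path.abspath(base)
        if os.path.commonpath([path, base]) == base:
            return base
    return None

root = "/usr/local/autotest"
client = "/usr/local/autotest/client"
print(config_dir_for("/usr/local/autotest/client/global_config.ini",
                     root, client))
```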
Looks good.
On Wed, Nov 18, 2009 at 8:09 AM, Lucas Meneghel Rodrigues
l...@redhat.com wrote:
In order to make it possible for the autotest client to use the
global_config.ini configuration files:
* Modified global_config.py to support verifying whether the
configuration file is under autotest's
Greg, would you mind giving a last review on the patchset before I
check this in?
On Wed, Nov 18, 2009 at 2:09 PM, Lucas Meneghel Rodrigues
l...@redhat.com wrote:
In order to make it possible for the autotest client to use the
global_config.ini configuration files:
* Modified global_config.py to
I recently updated to the latest qemu-kvm git tree (commit c04b2ae) and
I ran into the following problem. I want to do a direct Linux boot for
some of my testing work, using the -kernel option. Apparently the
gPXE boot code corrupts something in memory or other CPU state
Jim Paris wrote:
Erik Rull wrote:
Hi all,
I want to run an Epson inkjet printer within my Windows XP guest. My host
has USB 2.0 enabled; the USB flash drive works without any problems. When I
plug in the printer (it works with the same drivers on a native Windows
XP!), it is recognized and the status
Erik Rull wrote:
Jim Paris wrote:
Erik Rull wrote:
Hi all,
I want to run an epson inkjet within my windows xp guest. my host has
enabled usb 2.0, the USB flashdrive works without any problems. When
I plug in the printer (works with the same drivers on a native
windows xp!), it is
Jim Paris wrote:
Erik Rull wrote:
Hi Jim,
sorry, still a bluescreen - but another one :-)
BUGCODE_USBDRIVER is its name.
Any other ideas? With USB 1.1 on the host everything is fine, after
enabling USB 2.0 in BIOS on the host, USB is faster within the guest, but
I have the given Bluescreen
Erik Rull wrote:
Jim Paris wrote:
Erik Rull wrote:
Hi Jim,
sorry, still a bluescreen - but another one :-)
BUGCODE_USBDRIVER is its name.
Any other ideas? With USB 1.1 on the host everything is fine, after
enabling USB 2.0 in BIOS on the host, USB is faster within the guest,
but I
On Wed, Nov 18, 2009 at 12:19:20AM -0500, Kevin O'Connor wrote:
On Tue, Nov 17, 2009 at 03:21:31PM +0200, Avi Kivity wrote:
qemu-kvm's switch to seabios uncovered a regression with cdrom handling.
Vista x64 no longer recognizes the cdrom, while pc-bios still works.
Installing works, but
2009/11/18 Avi Kivity a...@redhat.com:
On 11/18/2009 03:28 PM, Yoshiaki Tamura wrote:
I don't think lmbench is intensive but it's sensitive to memory latency.
We'll measure kernel build time with minimum config, and post it later.
Here are some quick numbers of parallel kernel compile time.
repository: /home/vadimr/shares/kvm-guest-drivers-windows
branch: XP
commit 3b2926a281a769499944a23cc3c9b905593e6838
Author: Vadim Rozenfeld vroze...@redhat.com
Date: Thu Nov 19 09:14:38 2009 +0200
[PATCH] viostor driver. Xp driver performance.
Signed-off-by: Vadim
On 11/18/2009 02:13 AM, Alexander Graf wrote:
Paravirt ops is currently only capable of either replacing a lot of Linux
internal code or none at all. There are users that don't need all of the
possibilities pv-ops delivers though.
On KVM for example we're perfectly fine not using the PV MMU, thus
repository: /home/vadimr/shares/kvm-guest-drivers-windows
branch: XP
commit 7f637876e7f8ef9a18d3baac31a4648034dcedaf
Author: Vadim Rozenfeld vroze...@redhat.com
Date: Thu Nov 19 09:50:32 2009 +0200
[PATCH] viostor driver. small fix in startio routine (storport related
path).