On 07/28/2011 02:25 AM, Boris Dolgov wrote:
Hello!
I am using Fedora 14 with kernel and qemu-kvm from Fedora 15:
[root@serv ~]# qemu-kvm --help | head -1
QEMU emulator version 0.14.0 (qemu-kvm-0.14.0), Copyright (c)
2003-2008 Fabrice Bellard
[root@serv ~]# uname -a
Linux serv
On Thu, Jul 28, 2011 at 6:43 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
On Wed, Jul 27, 2011 at 8:58 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Wed, Jul 27, 2011 at 11:17 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
On Wed, Jul 27, 2011 at 3:26 AM, Marcelo Tosatti mtosa...@redhat.com
On Thu, Jul 28, 2011 at 9:20 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Jul 28, 2011 at 6:43 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
On Wed, Jul 27, 2011 at 8:58 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Wed, Jul 27, 2011 at 11:17 AM, Zhi Yong Wu zwu.ker...@gmail.com
Architecturally, PDPTEs are cached in the PDPTRs when CR3 is reloaded.
On SVM, it is not possible to implement this, but on VMX this is possible
and was indeed implemented until nested SVM changed this to unconditionally
read PDPTEs dynamically. This has a noticeable impact when running PAE guests.
On Thu, Jul 28, 2011 at 4:25 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Jul 28, 2011 at 9:20 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Jul 28, 2011 at 6:43 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
On Wed, Jul 27, 2011 at 8:58 PM, Stefan Hajnoczi stefa...@gmail.com
Map GSIs manually when starting the guest.
This will allow us to map new GSIs for MSI-X in the future.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/builtin-run.c |3 ++
tools/kvm/include/kvm/irq.h |3 ++
tools/kvm/irq.c | 75
PCI BAR probing is done in four steps:
1. Read the address (and flags).
2. Mask the BAR (write all ones).
3. Read the BAR again; the expected result is now the size of the BAR.
4. Mask the BAR with the address (restoring it).
So far, we have only taken care of the first step. This means that the kernel
was using address as the size, causing a
This makes the MMIO callback similar to its PIO counterpart by passing
a void* value, provided at registration, to the callback function.
This allows us to keep context within the MMIO callback function.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/include/kvm/kvm.h |2 +-
This patch implements basic MSI-X support for virtio-rng.
The device uses the virtio preferred method of working with MSI-X by
creating one vector for configuration and one vector for each vq in the
device.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/include/kvm/pci.h |
On 07/27/2011 04:00 PM, Sasha Levin wrote:
Currently the method of dealing with an IO operation on a bus (PIO/MMIO)
is to call the read or write callback for each device registered
on the bus until we find a device which handles it.
Since the number of devices on a bus can be significant due to
Hi Sasha,
On Thu, Jul 28, 2011 at 12:01 PM, Sasha Levin levinsasha...@gmail.com wrote:
Map GSIs manually when starting the guest.
This will allow us to map new GSIs for MSI-X in the future.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/builtin-run.c | 3 ++
On Thu, Jul 28, 2011 at 12:01 PM, Sasha Levin levinsasha...@gmail.com wrote:
PCI BAR probing is done in four steps:
1. Read address (and flags).
2. Mask BAR.
3. Read BAR again - Now the expected result is the size of the BAR.
4. Mask BAR with address.
So far, we have only taken care of
On Thu, Jul 28, 2011 at 12:01 PM, Sasha Levin levinsasha...@gmail.com wrote:
@@ -163,8 +167,21 @@ static bool virtio_rng_pci_io_out(struct ioport *ioport,
struct kvm *kvm, u16 po
rdev-status = ioport__read8(data);
break;
case
On Thu, Jul 28, 2011 at 12:31:51PM +0300, Pekka Enberg wrote:
On Thu, Jul 28, 2011 at 12:01 PM, Sasha Levin levinsasha...@gmail.com wrote:
PCI BAR probing is done in four steps:
1. Read address (and flags).
2. Mask BAR.
3. Read BAR again - Now the expected result is the size of the
On Thu, Jul 28, 2011 at 12:01:54PM +0300, Sasha Levin wrote:
...
struct mmio_mapping {
struct rb_int_node node;
- void(*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32
len, u8 is_write);
+ void(*kvm_mmio_callback_fn)(u64 addr, u8
On Thu, Jul 28, 2011 at 12:38 PM, Cyrill Gorcunov gorcu...@gmail.com wrote:
@@ -51,5 +51,6 @@ struct pci_device_header {
void pci__init(void);
void pci__register(struct pci_device_header *dev, u8 dev_num);
+u32 pci_get_io_space_block(void);
On Thu, Jul 28, 2011 at 12:01:52PM +0300, Sasha Levin wrote:
Map GSIs manually when starting the guest.
This will allow us to map new GSIs for MSI-X in the future.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
Other than a few nits the series looks good to me, thanks Sasha!
--
To
On Thu, Jul 28, 2011 at 12:42 PM, Cyrill Gorcunov gorcu...@gmail.com wrote:
On Thu, Jul 28, 2011 at 12:01:54PM +0300, Sasha Levin wrote:
...
struct mmio_mapping {
struct rb_int_node node;
- void (*kvm_mmio_callback_fn)(u64 addr, u8 *data,
u32 len, u8
On Mon, 25 Jul 2011, Kevin Wolf wrote:
Looks okay, except that in case of a crash you'll most likely corrupt
the image because the order in which refcounts and mapping are written
out is completely undefined.
For a reliable implementation you need to make sure that for cluster
allocation you
The main goal of the patch is to effectively cap the disk I/O speed or operation
count of a single VM. It is only a draft, so it unavoidably has some drawbacks; if
you catch them, please let me know.
The patch will mainly introduce one block I/O throttling algorithm, one timer,
and one block queue
Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
---
Makefile.objs |2 +-
blockdev.c | 22 ++
qemu-config.c | 24
qemu-option.c | 17 +
qemu-option.h |1 +
qemu-options.hx |1 +
6 files changed, 66
Note: 1.) When bps/iops limits are set to a small value such as 511
bytes/s, this VM will hang up. We are considering how to handle this scenario.
2.) When a dd command is issued in the guest, if its bs option is set to a
large value such as bs=1024K, the resulting speed will be slightly higher
On Wed, Jul 20, 2011, Zachary Amsden wrote about Re: Nested VMX - L1 hangs on
running L2:
No, both patches are wrong.
The correct fix is to make kvm_get_msr() return the L1 guest TSC at all
times.
We are serving the L1 guest in this hypervisor, not the L2 guest, and so
should
On 07/07/2011 07:26 AM, André Weidemann wrote:
Hi,
I am running Windows 7 x64 in a VM which crashes after starting a certain
game. Actually there are two games, both from the same company, that make
the VM crash after starting them.
Windows crashes right after starting the game. With the 1st game
On Thu, Jul 28, 2011 at 02:01:09PM +0200, Paolo Bonzini wrote:
On 07/07/2011 07:26 AM, André Weidemann wrote:
Hi,
I am running Windows7 x64 in a VM which crashes after starting a certain
game. Actually there are two games both from the same company, that make
the VM crash after starting them.
Hi Paolo,
On 28.07.2011 14:01, Paolo Bonzini wrote:
On 07/07/2011 07:26 AM, André Weidemann wrote:
Hi,
I am running Windows7 x64 in a VM which crashes after starting a certain
game. Actually there are two games both from the same company, that make
the VM crash after starting them.
Windows
On 07/28/2011 04:16 PM, André Weidemann wrote:
Can you open the produced dump in WinDbg and post a disassemble around
the failing instruction?
I haven't used debuggers very much, so I hope I grabbed the correct
lines from the disassembly:
http://pastebin.com/t3sfvmTg
That's the bug check
Valeri Kuchansky valeri.kuchan...@genband.com wrote on 07/27/2011
10:47:14 AM:
Folks,
I'm looking for some help on guest network configuration when enabling
VT-d.
I have fully completed steps given on http://www.linux-kvm.org/page/
How_to_assign_devices_with_VT-d_in_KVM on detaching and
On 07/28/2011 03:21 PM, Avi Kivity wrote:
I haven't used debuggers very much, so I hope I grabbed the correct
lines from the disassembly:
http://pastebin.com/t3sfvmTg
That's the bug check routine. Can you go up a frame?
Or just do what Gleb suggested. Open the dump, type !analyze -v and
From: Liu Yuan tailai...@taobao.com
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be a module over the latest kernel tree, but it
needs some symbols from fs/aio.c and
From: Liu Yuan tailai...@taobao.com
vhost-blk is an in-kernel accelerator for virtio-blk
device. This patch is the counterpart of the vhost-blk
module in the kernel. It basically does setup of the
vhost-blk, pass on the virtio buffer information via
/dev/vhost-blk.
Usage:
$ qemu -drive
[design idea]
The vhost-blk uses two kernel threads to handle the guests' requests.
One is to submit them via the Linux kernel's internal AIO structs, and the
other is to signal the guests on completion of the IO requests.
The current qemu-kvm's native AIO in user mode actually
On Thu, Jul 28, 2011 at 10:29:05PM +0800, Liu Yuan wrote:
From: Liu Yuan tailai...@taobao.com
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be a module over latest
On Thu, Jul 28, 2011 at 12:24:48PM +0800, Zhi Yong Wu wrote:
On Wed, Jul 27, 2011 at 11:49 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Wed, Jul 27, 2011 at 06:17:15PM +0800, Zhi Yong Wu wrote:
+ wait_time = 1;
+ }
+
+ wait_time = wait_time + (slice_time -
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan namei.u...@gmail.com wrote:
+static int blk_completion_worker(void *priv)
+{
Do you really need this? How about using the vhost poll helper to
observe the eventfd? Then you can drop your own worker thread code
and simply have a work function to handle
On Thu, Jul 28, 2011 at 10:29:05PM +0800, Liu Yuan wrote:
From: Liu Yuan tailai...@taobao.com
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be a module over latest
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan namei.u...@gmail.com wrote:
Did you investigate userspace virtio-blk performance? If so, what
issues did you find?
I have a hacked up world here that basically implements vhost-blk in userspace:
On 2011-07-27 18:35, Vasilis Liaskovitis wrote:
Hi,
On Mon, Jul 25, 2011 at 3:18 PM, Jan Kiszka jan.kis...@siemens.com wrote:
OK, hacks below plus the following three patches make CPU hotplug work
again - with some exceptions. Here are the patches:
1.
Hi,
On 28.07.2011 15:49, Paolo Bonzini wrote:
On 07/28/2011 03:21 PM, Avi Kivity wrote:
I haven't used debuggers very much, so I hope I grabbed the correct
lines from the disassembly:
http://pastebin.com/t3sfvmTg
That's the bug check routine. Can you go up a frame?
Or just do what Gleb
Map GSIs manually when starting the guest.
This will allow us to map new GSIs for MSI-X in the future.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/builtin-run.c |3 ++
tools/kvm/include/kvm/irq.h |5 +++
tools/kvm/irq.c | 80
PCI BAR probing is done in four steps:
1. Read address (and flags).
2. Mask BAR.
3. Read BAR again - Now the expected result is the size of the BAR.
4. Mask BAR with address.
So far, we have only taken care of the first step. This means that the kernel
was using address as the size, causing a
This makes the MMIO callback similar to its PIO counterpart by passing
a void* value, provided at registration, to the callback function.
This allows us to keep context within the MMIO callback function.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/include/kvm/kvm.h |2 +-
This patch implements basic MSI-X support for virtio-rng.
The device uses the virtio preferred method of working with MSI-X by
creating one vector for configuration and one vector for each vq in the
device.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/include/kvm/pci.h |
On Wed, Jul 27, 2011 at 4:48 AM, Avi Kivity a...@redhat.com wrote:
On 07/22/2011 03:47 AM, AP wrote:
I am trying to add 1366x768 resolution for standard VGA. I looked at
http://www.linux-kvm.org/page/FAQ#Can_I_have_higher_or_widescreen_resolutions_.28eg_1680_x_1050.29_in_KVM.3F
and
This test does some basic operations on the virtual floppy;
it supports both Linux and Windows guests.
Signed-off-by: Amos Kong ak...@redhat.com
---
client/tests/kvm/tests/floppy.py | 62
client/tests/kvm/tests_base.cfg.sample | 23
2 files
Hello all,
I've been poking at this bug for a while and following the discussion, so I
thought I'd bring all the information together in one place.
First, I've been able to reliably reproduce this bug. Here is (what I believe
to be) the relevant information:
* Host setup (8 CPUs):
Ubuntu
On Thu, Jul 28, 2011 at 10:42 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Thu, Jul 28, 2011 at 12:24:48PM +0800, Zhi Yong Wu wrote:
On Wed, Jul 27, 2011 at 11:49 PM, Marcelo Tosatti mtosa...@redhat.com
wrote:
On Wed, Jul 27, 2011 at 06:17:15PM +0800, Zhi Yong Wu wrote:
+
On Thu, Jul 28, 2011 at 4:44 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan namei.u...@gmail.com wrote:
Did you investigate userspace virtio-blk performance? If so, what
issues did you find?
I have a hacked up world here that basically implements
This test adds a USB storage device for the guest, and does some checks from the monitor and
inside the guest.
It's not very stable; could you help review whether something is wrong?
@ qemu-kvm -drive file='vm.qcow2',index=0,if=virtio,cache=none
-device usb-ehci,id=ehci
-drive
On Thu, Mar 4, 2010 at 3:20 PM, sshang ssh...@redhat.com wrote:
This test mainly tests whether all guest cpu flags are supported by host
machine.
Signed-off-by: sshang ssh...@redhat.com
Hi Lucas,
It seems that this patch[1] was lost by us. We have confirmed with shuang
that this subtest needs to