Hello Derek,
On Tuesday 23 August 2011 06:30:13 Derek wrote:
I found that 'virsh edit' is the recommended way to make changes to a VM
configuration. The XML file is not referenced at boot; it is only written to
by 'virsh dumpxml'. 'virsh edit' will update both areas, the hypervisor and
the XML file.
On Mon, 2011-08-22 at 17:52 -0700, aafabbri wrote:
I'm not following you.
You have to enforce group/iommu domain assignment whether you have the
existing uiommu API, or if you change it to your proposed
ioctl(inherit_iommu) API.
The only change needed to VFIO here should be to make
On Tue, Aug 16, 2011 at 02:46:47PM +0800, Xiao Guangrong wrote:
Detecting write-flooding does not work well: when we handle a page write, if
the last speculative spte is not accessed, we treat the page as
write-flooded. However, we can create speculative sptes on many paths, such as pte
prefetch, page
Hi, Avi,
Both Eddie and Marcelo once suggested a vEOI optimization that skips the
heavy-weight instruction decode, which reduces vEOI overhead greatly:
http://www.mail-archive.com/kvm@vger.kernel.org/msg18619.html
http://www.spinics.net/lists/kvm/msg36691.html
Though virtual x2apic serves similar
Hi,
From the trace messages, it seemed there were no interrupts for the guest.
I also tried sysrq, but it didn't work. I suspect that kvm-qemu entered
some infinite loop.
Thanks,
Paul
On Mon, Aug 22, 2011 at 8:10 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Mon, Aug 22, 2011 at 10:37 AM, Paul fly...@gmail.com
On Tue, Aug 23, 2011 at 9:10 AM, Paul fly...@gmail.com wrote:
From the trace messages, it seemed there were no interrupts for the guest.
I also tried sysrq, but it didn't work. I suspect that kvm-qemu entered
some infinite loop.
The fact that a fresh VNC connection to the guest works (but the mouse
doesn't move)
On Tue, Aug 16, 2011 at 11:56:37PM -0400, Umesh Deshpande wrote:
A ramlist mutex is implemented to protect RAMBlock list traversal in the
migration thread against block addition/removal from the iothread.
Signed-off-by: Umesh Deshpande udesh...@redhat.com
---
cpu-all.h | 2 ++
On Tue, Aug 23, 2011 at 06:15:33AM -0300, Marcelo Tosatti wrote:
On Tue, Aug 16, 2011 at 11:56:37PM -0400, Umesh Deshpande wrote:
A ramlist mutex is implemented to protect RAMBlock list traversal in the
migration thread against block addition/removal from the iothread.
Signed-off-by:
On Mon, Aug 22, 2011 at 12:18:49PM +0300, Avi Kivity wrote:
On 08/17/2011 07:19 AM, Liu, Jinsong wrote:
From a9670ddff84080c56183e2d678189e100f891174 Mon Sep 17 00:00:00 2001
From: Liu, Jinsong jinsong@intel.com
Date: Wed, 17 Aug 2011 11:36:28 +0800
Subject: [PATCH] KVM: emulate lapic tsc
Hi Marcelo,
On 08/23/2011 04:00 PM, Marcelo Tosatti wrote:
On Tue, Aug 16, 2011 at 02:46:47PM +0800, Xiao Guangrong wrote:
Detecting write-flooding does not work well: when we handle a page write, if
the last speculative spte is not accessed, we treat the page as
write-flooded; however, we
On Mon, Aug 22, 2011 at 6:29 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-08-14 06:04, Avi Kivity wrote:
In certain circumstances, posix-aio-compat can incur a lot of latency:
- threads are created by vcpu threads, so if vcpu affinity is set,
aio threads inherit vcpu affinity.
On Mon, Aug 22, 2011 at 08:52:18PM -0400, aafabbri wrote:
You have to enforce group/iommu domain assignment whether you have the
existing uiommu API, or if you change it to your proposed
ioctl(inherit_iommu) API.
The only change needed to VFIO here should be to make uiommu fd assignment
On Tue, Aug 23, 2011 at 02:54:43AM -0400, Benjamin Herrenschmidt wrote:
Possibly; the question that interests me the most is what interface
KVM will end up using. I'm also not terribly fond of the (perceived)
discrepancy between using uiommu to create groups but using the group fd
to actually
On 08/23/2011 11:17 AM, Marcelo Tosatti wrote:
typedef struct RAMList {
+    QemuMutex mutex;
uint8_t *phys_dirty;
QLIST_HEAD(ram, RAMBlock) blocks;
QLIST_HEAD(, RAMBlock) blocks_mru;
A comment on what the mutex protects would be good.
Indeed,
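A hedged sketch of what the commented struct Paolo asks for could look like, with pthread_mutex_t and the <sys/queue.h> LIST macros standing in for QEMU's QemuMutex and QLIST, and the fields abbreviated:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/queue.h>

typedef struct RAMBlock {
    LIST_ENTRY(RAMBlock) next;  /* linkage in the block list */
} RAMBlock;

typedef struct RAMList {
    /* Protects 'blocks' against concurrent traversal in the
     * migration thread and addition/removal from the iothread. */
    pthread_mutex_t mutex;
    uint8_t *phys_dirty;
    LIST_HEAD(ram, RAMBlock) blocks;
} RAMList;
```

The point of the comment is purely documentary: it records which fields the mutex covers and which threads contend for them.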
On Tue, Aug 23, 2011 at 01:41:48PM +0200, Paolo Bonzini wrote:
On 08/23/2011 11:17 AM, Marcelo Tosatti wrote:
typedef struct RAMList {
+    QemuMutex mutex;
uint8_t *phys_dirty;
QLIST_HEAD(ram, RAMBlock) blocks;
QLIST_HEAD(, RAMBlock) blocks_mru;
On Tue, Aug 23, 2011 at 06:55:39PM +0800, Xiao Guangrong wrote:
Hi Marcelo,
On 08/23/2011 04:00 PM, Marcelo Tosatti wrote:
On Tue, Aug 16, 2011 at 02:46:47PM +0800, Xiao Guangrong wrote:
Detecting write-flooding does not work well: when we handle a page write, if
the last speculative
On 08/23/2011 06:01 AM, Stefan Hajnoczi wrote:
On Mon, Aug 22, 2011 at 6:29 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-08-14 06:04, Avi Kivity wrote:
In certain circumstances, posix-aio-compat can incur a lot of latency:
- threads are created by vcpu threads, so if vcpu affinity is
On 19-08-11 0:02, Gardziejczyk, Kamil wrote:
Hi,
Have you resolved your problem with shared IRQ lines? I have checked out
qemu-kvm 0.15.0 and found that Jan Kiszka's patch was applied to the latest
version of kvm.
[PATCH 0/5] pci-assign: Host IRQ sharing support + some fixes and cleanups
On 2011-08-23 14:40, Anthony Liguori wrote:
On 08/23/2011 06:01 AM, Stefan Hajnoczi wrote:
On Mon, Aug 22, 2011 at 6:29 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-08-14 06:04, Avi Kivity wrote:
In certain circumstances, posix-aio-compat can incur a lot of latency:
- threads are
On 2011-08-23 14:52, Michael Sturm wrote:
On 19-08-11 0:02, Gardziejczyk, Kamil wrote:
Hi,
Have you resolved your problem with shared IRQ lines? I have checked
out qemu-kvm 0.15.0 and found that Jan Kiszka's patch was applied to
the latest version of kvm.
[PATCH 0/5] pci-assign: Host IRQ sharing
On Mon, Aug 22, 2011 at 05:03:53PM -0400, Benjamin Herrenschmidt wrote:
I am in favour of /dev/vfio/$GROUP. If multiple devices should be
assigned to a guest, there can also be an ioctl to bind a group to an
address-space of another group (certainly needs some care to not allow
that both
On Mon, Aug 22, 2011 at 03:17:00PM -0400, Alex Williamson wrote:
On Mon, 2011-08-22 at 19:25 +0200, Joerg Roedel wrote:
I am in favour of /dev/vfio/$GROUP. If multiple devices should be
assigned to a guest, there can also be an ioctl to bind a group to an
address-space of another group
Hi Alex,
just ran into some corner case with my reanimated IRQ sharing patches
that may affect vfio as well:
How are vfio_enable/disable_intx synchronized against all other possible
spots that call pci_block_user_cfg_access?
I hit the recursion bug check in pci_block_user_cfg_access with my
On 08/23/2011 08:02 AM, Jan Kiszka wrote:
On 2011-08-23 14:40, Anthony Liguori wrote:
You should be able to just use an eventfd or pipe.
Better yet, we should look at using GThreadPool to replace posix-aio-compat.
When interacting with the thread pool is part of some time-critical path
On 2011-08-23 16:02, Anthony Liguori wrote:
On 08/23/2011 08:02 AM, Jan Kiszka wrote:
On 2011-08-23 14:40, Anthony Liguori wrote:
You should be able to just use an eventfd or pipe.
Better yet, we should look at using GThreadPool to replace posix-aio-compat.
When interacting with the thread
On Tue, 2011-08-23 at 12:38 +1000, David Gibson wrote:
On Mon, Aug 22, 2011 at 09:45:48AM -0600, Alex Williamson wrote:
On Mon, 2011-08-22 at 15:55 +1000, David Gibson wrote:
On Sat, Aug 20, 2011 at 09:51:39AM -0700, Alex Williamson wrote:
We had an extremely productive VFIO BoF on
On 08/23/2011 08:38 PM, Marcelo Tosatti wrote:
And, I think there are no problems, since if the spte without the accessed bit is
written frequently, it means the guest page table is accessed infrequently or,
during the writing, the guest page table is not accessed; in this case, zapping
this
On 8/23/11 4:04 AM, Joerg Roedel joerg.roe...@amd.com wrote:
On Mon, Aug 22, 2011 at 08:52:18PM -0400, aafabbri wrote:
You have to enforce group/iommu domain assignment whether you have the
existing uiommu API, or if you change it to your proposed
ioctl(inherit_iommu) API.
The only
On Tue, 2011-08-23 at 16:54 +1000, Benjamin Herrenschmidt wrote:
On Mon, 2011-08-22 at 17:52 -0700, aafabbri wrote:
I'm not following you.
You have to enforce group/iommu domain assignment whether you have the
existing uiommu API, or if you change it to your proposed
On Tue, 2011-08-23 at 15:14 +0200, Roedel, Joerg wrote:
On Mon, Aug 22, 2011 at 03:17:00PM -0400, Alex Williamson wrote:
On Mon, 2011-08-22 at 19:25 +0200, Joerg Roedel wrote:
I am in favour of /dev/vfio/$GROUP. If multiple devices should be
assigned to a guest, there can also be an
Rebased version of the previous round.
Jan Kiszka (7):
pci-assign: Fix kvm_deassign_irq handling in assign_irq
pci-assign: Update legacy interrupts only if used
pci-assign: Drop libpci header dependency
pci-assign: Refactor calc_assigned_dev_id
pci-assign: Track MSI/MSI-X capability
Don't mess with assign_intx on devices that are in MSI or MSI-X mode, as
it would corrupt their interrupt routing.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
hw/device-assignment.c | 9 ++---
1 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/hw/device-assignment.c
Always clear AssignedDevice::irq_requested_type after calling
kvm_deassign_irq. Moreover, drop the obviously incorrect exclusion when
reporting related errors - if irq_requested_type is non-zero, deassign
must not fail.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
hw/device-assignment.c
Make calc_assigned_dev_id pick up all required bits from the device
passed to it.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
hw/device-assignment.c | 25 +
hw/device-assignment.h | 6 +++---
2 files changed, 12 insertions(+), 19 deletions(-)
diff --git
All constants are now available through QEMU. Also drop the upstream
diff of pci_regs.h; it cannot clash with libpci anymore.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
configure | 21 -
hw/device-assignment.c | 13 ++---
hw/pci_regs.h
This drastically simplifies config space access management: instead of
coding various range checks and merging bits, set up two access control
bitmaps. One defines which bits can be directly read from the device;
the other allows direct writes to the device, also with bit granularity.
The setup
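The bit-granularity merge described above can be sketched as follows. This is illustrative only, not the actual patch: the names merge_cfg_read and emu_mask are assumptions, with bits set in emu_mask taken from the emulated value and all other bits passed straight through from the device:

```c
#include <stdint.h>

/* Merge a config space read: emulated bits come from emu_val,
 * everything else is read directly from the device (dev_val).
 * Names are hypothetical, for illustration of the scheme only. */
static uint32_t merge_cfg_read(uint32_t dev_val, uint32_t emu_val,
                               uint32_t emu_mask)
{
    return (dev_val & ~emu_mask) | (emu_val & emu_mask);
}
```

A write path would use the second bitmap the same way, deciding per bit whether the value goes to the device or only to the emulated shadow copy.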
Store the MSI and MSI-X capability positions in the same fields the QEMU
core uses as well. Although we still open-code MSI support, this does
not cause conflicts. Instead, it allows us to drop config space searches
from assigned_device_pci_cap_write_config. Moreover, we no longer need
to pass the
Device assignment no longer peeks into config_map, so we can drop all
the related changes and sync the PCI core with upstream.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
hw/pci.c | 29 +++--
hw/pci.h | 7 +--
2 files changed, 20 insertions(+), 16
On 8/23/11 10:01 AM, Alex Williamson alex.william...@redhat.com wrote:
On Tue, 2011-08-23 at 16:54 +1000, Benjamin Herrenschmidt wrote:
On Mon, 2011-08-22 at 17:52 -0700, aafabbri wrote:
I'm not following you.
You have to enforce group/iommu domain assignment whether you have the
On Tue, 2011-08-23 at 10:33 -0700, Aaron Fabbri wrote:
On 8/23/11 10:01 AM, Alex Williamson alex.william...@redhat.com wrote:
On Tue, 2011-08-23 at 16:54 +1000, Benjamin Herrenschmidt wrote:
On Mon, 2011-08-22 at 17:52 -0700, aafabbri wrote:
I'm not following you.
You have to
Several fixes in this patch:
* Don't ignore function level and per-vector masking. We're not
supposed to signal when masked and not doing so will improve
performance a bit (in addition to behaving correctly).
* Implement the missing PBA array. lspci will now show the correct
output:
On Wed, Aug 24, 2011 at 12:32:32AM +0800, Xiao Guangrong wrote:
On 08/23/2011 08:38 PM, Marcelo Tosatti wrote:
And, I think there are no problems, since if the spte without the accessed bit is
written frequently, it means the guest page table is accessed infrequently or
during the
Several fixes in this patch:
* Don't ignore function level and per-vector masking. We're not
supposed to signal when masked and not doing so will improve
performance a bit (in addition to behaving correctly).
* Implement the missing PBA array. 'lspci -vv' will now show the correct
output:
On Tue, 2011-08-23 at 07:01 +1000, Benjamin Herrenschmidt wrote:
On Mon, 2011-08-22 at 09:45 -0600, Alex Williamson wrote:
Yes, that's the idea. An open question I have towards the configuration
side is whether we might add iommu driver specific options to the
groups. For instance on
On 08/24/2011 03:09 AM, Marcelo Tosatti wrote:
On Wed, Aug 24, 2011 at 12:32:32AM +0800, Xiao Guangrong wrote:
On 08/23/2011 08:38 PM, Marcelo Tosatti wrote:
And, I think there are no problems, since if the spte without the accessed bit is
written frequently, it means the guest page table is
Turns out we were using a hardcoded cleanup command in the
cleanup phase of the file_transfer test, rather than
picking up the one available in params. This patch
fixes that bug.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
client/virt/tests/file_transfer.py | 2 +-
1 files
On Tue, 2011-08-23 at 15:31 +0200, Jan Kiszka wrote:
Hi Alex,
just ran into some corner case with my reanimated IRQ sharing patches
that may affect vfio as well:
How are vfio_enable/disable_intx synchronized against all other possible
spots that call pci_block_user_cfg_access?
I hit
At LinuxCon I had a nice chat with Linus about what he thinks kvm-tool
should be doing and what he expects from it. Basically he wants a
small and simple tool he and other developers can run to try out and
see if the kernel they just built actually works.
Fortunately, Qemu can do that today already!
On Tue, 2011-08-23 at 19:32 +0200, Jan Kiszka wrote:
Rebased version of the previous round.
Jan Kiszka (7):
pci-assign: Fix kvm_deassign_irq handling in assign_irq
pci-assign: Update legacy interrupts only if used
pci-assign: Drop libpci header dependency
pci-assign: Refactor
* André Weidemann (andre.weidem...@web.de) wrote:
snip
git clone git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git
snip
./configure --audio-drv-list=alsa --target-list=x86_64-softmmu
--enable-kvm-device-assignment
ERROR: unknown option --enable-kvm-device-assignment
snip
How come so many
On Tue, 2011-08-23 at 15:18 +0200, Roedel, Joerg wrote:
On Mon, Aug 22, 2011 at 05:03:53PM -0400, Benjamin Herrenschmidt wrote:
I am in favour of /dev/vfio/$GROUP. If multiple devices should be
assigned to a guest, there can also be an ioctl to bind a group to an
address-space of
On Tue, 2011-08-23 at 10:23 -0600, Alex Williamson wrote:
Yeah. Joerg's idea of binding groups internally (pass the fd of one
group to another via ioctl) is one option. The tricky part will be
implementing it to support hot unplug of any group from the
supergroup.
I believe Ben had a
For us the most simple and logical approach (which is also what pHyp
uses and what Linux handles well) is really to expose a given PCI host
bridge per group to the guest. Believe it or not, it makes things
easier :-)
I'm all for easier. Why does exposing the bridge use less bus
The following patch series deals with VCPU and iothread starvation during the
migration of a guest. Currently the iothread is responsible for performing the
guest migration. It holds qemu_mutex during the migration, which prevents VCPUs
from entering qemu mode and delays their return to the guest. The
This patch creates a new list of RAM blocks in MRU order, so that separate
locking rules can be applied to the regular RAM block list and the MRU list.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
cpu-all.h | 2 ++
exec.c | 17 -
2 files changed, 14 insertions(+),
This patch creates a migration bitmap, which is periodically kept in sync with
the qemu bitmap. A separate copy of the dirty bitmap for the migration avoids
concurrent access to the qemu bitmap from iothread and migration thread.
Signed-off-by: Umesh Deshpande udesh...@redhat.com
---
arch_init.c
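A minimal sketch of the separate-copy idea described above, assuming a sync step that OR-merges the qemu bitmap into the migration copy under a lock; the names, the fixed bitmap size, and the use of pthreads are illustrative, not from the patch:

```c
#include <pthread.h>
#include <stdint.h>

enum { BITMAP_WORDS = 4 };

static pthread_mutex_t dirty_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t qemu_bitmap[BITMAP_WORDS];      /* written by the iothread   */
static uint64_t migration_bitmap[BITMAP_WORDS]; /* read by migration thread  */

/* Periodically pull dirty bits into the migration thread's private copy,
 * so RAM scanning never races with the iothread marking pages dirty. */
static void migration_bitmap_sync(void)
{
    pthread_mutex_lock(&dirty_lock);
    for (int i = 0; i < BITMAP_WORDS; i++) {
        migration_bitmap[i] |= qemu_bitmap[i]; /* accumulate dirty bits */
        qemu_bitmap[i] = 0;                    /* start a fresh round   */
    }
    pthread_mutex_unlock(&dirty_lock);
}
```

Only the short sync step takes the lock; the bulk of the migration work then runs against the private copy without contending with the iothread.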
A ramlist mutex is implemented to protect RAMBlock list traversal in the
migration thread against block addition/removal from the iothread.
Note: the combination of the iothread mutex and the migration thread mutex works
as a rw-lock. Both mutexes are acquired while modifying the ram_list members or RAM
block
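The note about two mutexes acting as a rw-lock can be sketched as follows, in illustrative pthread code rather than the patch itself: each of the two threads holds only its own mutex while reading, while a writer acquires both in a fixed order, thereby excluding both readers without deadlock.

```c
#include <pthread.h>

/* One mutex per reading thread; names are hypothetical. */
static pthread_mutex_t iothread_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t migration_lock = PTHREAD_MUTEX_INITIALIZER;

/* A writer modifying ram_list takes both locks, always in the
 * same order so two writers cannot deadlock against each other. */
static void ram_list_write_lock(void)
{
    pthread_mutex_lock(&iothread_lock);  /* excludes iothread reads   */
    pthread_mutex_lock(&migration_lock); /* excludes migration reads  */
}

static void ram_list_write_unlock(void)
{
    pthread_mutex_unlock(&migration_lock);
    pthread_mutex_unlock(&iothread_lock);
}
```

This gives rw-lock semantics for exactly two readers; with more reader threads a real rwlock (pthread_rwlock_t) would be the natural choice.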
This patch creates a separate thread for the guest migration on the source side.
All exits (on completion/error) from the migration thread are handled by a
bottom half, which is called from the iothread.
Signed-off-by: Umesh Deshpande udesh...@redhat.com
---
buffered_file.c | 75
Hello all,
I have just started working with KVM and
virtualization in general. I have set up a client using Suse 9.3 x64
as the base and Fedora 15 x64 as the host. The kernel in the client
has been upgraded to a custom 2.6.28.4 build.
Everything seems
to be working fine but I wanted to get
On 23.08.2011, at 18:41, Benjamin Herrenschmidt wrote:
On Tue, 2011-08-23 at 10:23 -0600, Alex Williamson wrote:
Yeah. Joerg's idea of binding groups internally (pass the fd of one
group to another via ioctl) is one option. The tricky part will be
implementing it to support hot unplug of
On 23.08.2011, at 18:51, Benjamin Herrenschmidt wrote:
For us the most simple and logical approach (which is also what pHyp
uses and what Linux handles well) is really to expose a given PCI host
bridge per group to the guest. Believe it or not, it makes things
easier :-)
I'm all for
Tests the ability to add virtual cpus on the fly to qemu using
the monitor command cpu_set; after everything is OK, runs the
cpu_hotplug testsuite on the guest through autotest.
Updates: As of the latest qemu-kvm (08-24-2011) HEAD, trying to
online more CPUs than the ones already available
On Wed, Aug 24, 2011 at 9:04 AM, arag...@dcsnow.com wrote:
Hello all,
I have just started working with KVM and
virtualization in general. I have set up a client using Suse 9.3 x64
as the base and Fedora 15 x64 as the host. The kernel in the client
has been upgraded to a custom 2.6.28.4
On Wed, 24 Aug 2011 01:05:13 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:
Tests the ability to add virtual cpus on the fly to qemu using
the monitor command cpu_set; after everything is OK, runs the
cpu_hotplug testsuite on the guest through autotest.
Updates: As of the
On Wed, Aug 24, 2011 at 1:25 AM, pradeep psuri...@linux.vnet.ibm.com wrote:
On Wed, 24 Aug 2011 01:05:13 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:
Tests the ability to add virtual cpus on the fly to qemu using
the monitor command cpu_set; after everything is OK, runs the
On Wed, Aug 24, 2011 at 1:16 AM, Alexander Graf ag...@suse.de wrote:
On LinuxCon I had a nice chat with Linus on what he thinks kvm-tool
would be doing and what he expects from it. Basically he wants a
small and simple tool he and other developers can run to try out and
see if the kernel they
On Wed, Aug 24, 2011 at 1:19 PM, Pekka Enberg penb...@kernel.org wrote:
It's nice to see such an honest attempt at improving QEMU usability,
Alexander!
One comment: in my experience, having shell scripts under
Documentation reduces the likelihood that people actually discover
them, so you
Hi all,
I am working on Intel iommu stuff, and I have two questions. I am just
sending to the kvm list as I am not sure which mailing list I should
send to; it would be much appreciated if you could help forward this
to the related mailing list. Thank you!
1) I see in the Intel iommu manual that caching behavior is