Move coalesced_mmio locking to its own device, instead of relying on
kvm->lock.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm/virt/kvm/coalesced_mmio.c
===
--- kvm.orig/virt/kvm/coalesced_mmio.c
+++
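For readers following the thread, a minimal sketch of what "locking in its own device" amounts to, assuming a spinlock field added to the coalesced device structure (the field name and layout below are illustrative, not the exact patch):

struct kvm_coalesced_mmio_dev {
	struct kvm_io_device dev;
	struct kvm *kvm;
	spinlock_t lock;	/* new: protects the coalesced ring, taken instead of kvm->lock */
	int nb_zones;
	struct kvm_coalesced_mmio_zone zone[KVM_COALESCED_MMIO_ZONE_MAX];
};

The in_range()/write() handlers then take dev->lock around their accesses to kvm->coalesced_mmio_ring rather than the VM-wide mutex.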
Marcelo Tosatti wrote:
On Sun, May 31, 2009 at 03:14:36PM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Move coalesced_mmio locking to its own device, instead of relying on
kvm->lock.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm-irqlock/virt/kvm/coalesced_mmio.c
Move coalesced_mmio locking to its own device, instead of relying on
kvm->lock.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm-irqlock/virt/kvm/coalesced_mmio.c
===
--- kvm-irqlock.orig/virt/kvm/coalesced_mmio.c
+++
Marcelo Tosatti wrote:
===
--- kvm-irqlock.orig/virt/kvm/coalesced_mmio.c
+++ kvm-irqlock/virt/kvm/coalesced_mmio.c
@@ -26,9 +26,12 @@ static int coalesced_mmio_in_range(struc
if (!is_write)
return 0;
-
On Tue, May 26, 2009 at 02:24:33PM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
===
--- kvm-irqlock.orig/virt/kvm/coalesced_mmio.c
+++ kvm-irqlock/virt/kvm/coalesced_mmio.c
@@ -26,9 +26,12 @@ static int
Marcelo Tosatti wrote:
Why not use slots_lock to protect the entire iodevice list (rcu one
day), and an internal spinlock for coalesced mmio?
Don't like using slots_lock to protect the entire iodevice list, it's
reverse progress in my opinion. The PIO/MMIO device lists are data
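For context, a rough sketch of the split being suggested here, assuming the kvm_io_bus API of that period (kvm_io_bus_find_dev() and the slots_lock rwsem; vcpu_mmio_write() is a hypothetical dispatch helper, treat the details as assumptions): the iodevice list is only walked under slots_lock (or RCU one day), and whatever serialization a device needs is internal to that device.

static void vcpu_mmio_write(struct kvm_vcpu *vcpu, gpa_t addr,
			    int len, const void *val)
{
	struct kvm_io_device *dev;

	down_read(&vcpu->kvm->slots_lock);	/* protects the iodevice list */
	dev = kvm_io_bus_find_dev(&vcpu->kvm->mmio_bus, addr, len, 1);
	if (dev)
		dev->write(dev, addr, len, val);	/* device takes its own lock */
	up_read(&vcpu->kvm->slots_lock);
	/* !dev means the access has to be completed in userspace instead */
}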
On Sun, May 24, 2009 at 05:04:52PM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Get rid of kvm->lock dependency on coalesced_mmio methods. Use an
atomic variable instead to guarantee only one vcpu is batching
data into the ring at a given time.
Signed-off-by: Marcelo Tosatti
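A sketch of the "atomic variable" approach described above, assuming an atomic_t busy field in the coalesced device and a coalesced_in_range_check() placeholder for the existing zone/room tests (both names are illustrative): in_range() claims the ring with a compare-and-swap and write() releases the claim, so only one vcpu batches into the ring at a time and kvm->lock is not needed.

static int coalesced_mmio_in_range(struct kvm_io_device *this,
				   gpa_t addr, int len, int is_write)
{
	struct kvm_coalesced_mmio_dev *dev = this->private;

	if (!is_write)
		return 0;
	if (!coalesced_in_range_check(dev, addr, len))
		return 0;
	/* claim the ring; a vcpu that loses the race takes a normal exit */
	if (atomic_cmpxchg(&dev->busy, 0, 1) != 0)
		return 0;
	return 1;
}

static void coalesced_mmio_write(struct kvm_io_device *this,
				 gpa_t addr, int len, const void *val)
{
	struct kvm_coalesced_mmio_dev *dev = this->private;

	/* ... append (addr, len, val) to kvm->coalesced_mmio_ring ... */
	smp_wmb();
	atomic_set(&dev->busy, 0);	/* drop the claim taken in in_range() */
}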
Marcelo Tosatti wrote:
Get rid of kvm->lock dependency on coalesced_mmio methods. Use an
atomic variable instead to guarantee only one vcpu is batching
data into the ring at a given time.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm-irqlock/virt/kvm/coalesced_mmio.c
Marcelo Tosatti wrote:
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm/virt/kvm/coalesced_mmio.c
===
--- kvm.orig/virt/kvm/coalesced_mmio.c
+++ kvm/virt/kvm/coalesced_mmio.c
@@ -26,9 +26,10 @@ static int
On Wed, May 20, 2009 at 03:06:26PM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm/virt/kvm/coalesced_mmio.c
===
--- kvm.orig/virt/kvm/coalesced_mmio.c
+++
Marcelo Tosatti wrote:
So we have a function that takes a lock and conditionally releases it?
Yes, but it is correct: it will only return with the lock held in case
it returns 1, in which case it's guaranteed ->write will be called (which
will unlock it).
It should check the range
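Spelled out, the pattern under discussion looks roughly like this (a sketch only; dev->lock is the per-device lock, and coalesced_has_room()/coalesced_in_zone() are placeholders for the real checks): in_range() returns 1 only with dev->lock held, and the ->write() call that is then guaranteed to follow releases it; every 0 return leaves the lock dropped.

static int coalesced_mmio_in_range(struct kvm_io_device *this,
				   gpa_t addr, int len, int is_write)
{
	struct kvm_coalesced_mmio_dev *dev = this->private;

	if (!is_write)
		return 0;

	spin_lock(&dev->lock);
	if (!coalesced_has_room(dev) || !coalesced_in_zone(dev, addr, len)) {
		spin_unlock(&dev->lock);	/* returning 0: lock not held */
		return 0;
	}
	return 1;	/* returning 1: lock stays held for ->write() */
}

static void coalesced_mmio_write(struct kvm_io_device *this,
				 gpa_t addr, int len, const void *val)
{
	struct kvm_coalesced_mmio_dev *dev = this->private;

	/* ... append (addr, len, val) to the ring ... */
	spin_unlock(&dev->lock);	/* pairs with spin_lock() in in_range() */
}

This is the "takes a lock and conditionally releases it" shape that draws the objection quoted below.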
On Wed, May 20, 2009 at 05:29:23PM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
So we have a function that takes a lock and conditionally releases it?
Yes, but it is correct: it will only return with the lock held in case
it returns 1, in which case it's guaranteed ->write will be
Marcelo Tosatti wrote:
Yes it's correct but we'll get an endless stream of patches to 'fix' it
because it is so unorthodox.
Does it have to guarantee any kind of ordering in case of parallel
writes by distinct vcpus? This is what it does now (so if a vcpu
arrives first, the second
On Wed, May 20, 2009 at 12:13:03PM -0300, Marcelo Tosatti wrote:
On Wed, May 20, 2009 at 05:29:23PM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
So we have a function that takes a lock and conditionally releases it?
Yes, but it is correct: it will only return with the
Get rid of kvm->lock dependency on coalesced_mmio methods. Use an
atomic variable instead to guarantee only one vcpu is batching
data into the ring at a given time.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm-irqlock/virt/kvm/coalesced_mmio.c
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm/virt/kvm/coalesced_mmio.c
===
--- kvm.orig/virt/kvm/coalesced_mmio.c
+++ kvm/virt/kvm/coalesced_mmio.c
@@ -26,9 +26,10 @@ static int coalesced_mmio_in_range(struc