From: Suresh Warrier
Replaces the ICS mutex lock with a spin lock since we will be porting
these routines to real mode. Note that we need to disable interrupts
before we take the lock in anticipation of the fact that on the guest
side, we are running in the context of a hard irq and interrupts are disabled.
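The pattern that implies, as a minimal kernel-C sketch (the struct and
function below are illustrative stand-ins, not the actual kvmppc ICS code):

struct ics_sketch {
    spinlock_t lock;    /* was: struct mutex lock; */
    u32 pending;
};

static void ics_deliver_irq_sketch(struct ics_sketch *ics, u32 irq_mask)
{
    unsigned long flags;

    /* spin_lock_irqsave() disables interrupts, so the same code is
     * safe from hard-irq (and, with care, real-mode) context where a
     * sleeping mutex_lock() would not be. */
    spin_lock_irqsave(&ics->lock, flags);
    ics->pending |= irq_mask;
    spin_unlock_irqrestore(&ics->lock, flags);
}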
From: Paul Mackerras
Currently, kvmppc_set_lpcr() has a spinlock around the whole function,
and inside that does mutex_lock(&kvm->lock). It is not permitted to
take a mutex while holding a spinlock, because the mutex_lock might
call schedule(). In addition, this causes lockdep to warn about a
lock ordering violation.
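The shape of the fix that description suggests, sketched below (kvm->lock
follows the text; vc->lock and vc->lpcr are assumed field names, and this
is not the actual patch): take the sleeping lock first, and keep only the
short update under the spinlock.

static void set_lpcr_sketch(struct kvm *kvm, struct kvmppc_vcore *vc,
                            unsigned long new_lpcr)
{
    mutex_lock(&kvm->lock);      /* may sleep: must come first */
    spin_lock(&vc->lock);        /* short, non-sleeping section */
    vc->lpcr = new_lpcr;
    spin_unlock(&vc->lock);
    mutex_unlock(&kvm->lock);
}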
The kvm mutex was (probably) used to protect against cpu hotplug.
The current code no longer needs to protect against that, as we only
rely on CPU data structures that are guaranteed to be available
if we can access the CPU (e.g. vcpu_create will put the cpu
in the array AFTER the cpu is ready).
From: Rusty Russell
There's currently a big lock around everything, and it means that we
can't query sysfs (eg /sys/devices/virtual/misc/hw_random/rng_current)
while the rng is reading. This is a real problem when the rng is slow,
or blocked (eg. virtio_rng with qemu's default /dev/random backend).
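A sketch of the lock-split idea behind the series (names are assumptions,
not necessarily Rusty's actual patch): the slow read path takes its own
mutex, so sysfs show() handlers contend only on a short-held lock.

static DEFINE_MUTEX(rng_mutex);     /* guards current_rng; held briefly */
static DEFINE_MUTEX(reading_mutex); /* serializes only the slow reads */

static int rng_get_data_sketch(struct hwrng *rng, void *buf, size_t size,
                               bool wait)
{
    int ret;

    mutex_lock(&reading_mutex);  /* may block for a long time */
    ret = rng->read(rng, buf, size, wait);
    mutex_unlock(&reading_mutex);
    return ret;                  /* sysfs readers never wait on this */
}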
];
wait_queue_head_t ipte_wq;
+ int ipte_lock_count;
+ struct mutex ipte_mutex;
spinlock_t start_stop_lock;
struct kvm_s390_crypto crypto;
};
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 0f961a1..c1424e8 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
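The two new fields suggest a counted (shared) lock. A simplified sketch of
how such a scheme usually works, inferred from the fields above rather than
copied from gaccess.c:

static void ipte_lock_sketch(struct kvm *kvm)
{
    mutex_lock(&kvm->arch.ipte_mutex);
    if (kvm->arch.ipte_lock_count++ == 0) {
        /* first holder: acquire the architectural IPTE lock here */
    }
    mutex_unlock(&kvm->arch.ipte_mutex);
}

static void ipte_unlock_sketch(struct kvm *kvm)
{
    mutex_lock(&kvm->arch.ipte_mutex);
    if (--kvm->arch.ipte_lock_count == 0) {
        /* last holder: drop the lock and wake anyone waiting */
        wake_up(&kvm->arch.ipte_wq);
    }
    mutex_unlock(&kvm->arch.ipte_mutex);
}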
On 23/09/2014 10:06, Christian Borntraeger wrote:
> Yes. David's explanation also makes sense as a commit message. Paolo,
> if you use David patch with a better description of the "why" I am
> fine with this patch.
Done, thanks everybody!
Paolo
On 09/23/2014 08:49 AM, Gleb Natapov wrote:
> On Mon, Sep 22, 2014 at 09:29:19PM +0200, Paolo Bonzini wrote:
>> On 22/09/2014 21:20, Christian Borntraeger wrote:
>>> "while using trinity to fuzz KVM, we noticed long stalls on invalid ioctls.
>>> Lets bail out early on invalid ioctls". or similar?
On Mon, Sep 22, 2014 at 09:29:19PM +0200, Paolo Bonzini wrote:
> On 22/09/2014 21:20, Christian Borntraeger wrote:
> > "while using trinity to fuzz KVM, we noticed long stalls on invalid ioctls.
> > Lets bail out early on invalid ioctls". or similar?
>
> Okay. David, can you explain how you found it so that I can make up my
> mind?
> ... with some generic
> process inspection code that was probing all open file descriptors.
> There's no reason non-kvm ioctls should have to wait for the vcpu
> mutex to become available just to fail.
OK then, please add the use case to the changelog.
s/Should not be executing vcpu ioctls/Should not be executing vcpu
ioctls which take vcpu mutex/
On 09/22, Marcelo Tosatti wrote:
> On Fri, Sep 19, 2014 at 04:03:25PM -0700, David Matlack wrote:
> > vcpu ioctls can hang the calling thread if issued while a vcpu is
> > running.
>
> There is a mutex per-vcpu, so that's expected, OK...
>
> > If we know ioctl is going to be rejected as invalid anyway,
On 22/09/2014 22:08, Marcelo Tosatti wrote:
> > This patch does not change functionality, it just makes invalid ioctls
> > fail faster.
>
> Should not be executing vcpu ioctls without interrupting KVM_RUN in the
> first place.
This is not entirely true, there are a couple of asynchronous ioctls
On Fri, Sep 19, 2014 at 04:03:25PM -0700, David Matlack wrote:
> vcpu ioctls can hang the calling thread if issued while a vcpu is
> running.
There is a mutex per-vcpu, so that's expected, OK...
> If we know ioctl is going to be rejected as invalid anyway,
> we can fail before trying to take the vcpu mutex.
> ... 2 more cycles for something that exited to
> userspace - nobody would even notice. I am just disturbed by the fact that we
> care about something that is not slow-path but broken beyond repair (why does
> userspace call a non-KVM ioctl on a fd of a vcpu from a different thread
> (otherwise the mutex would be free)?
On 22/09/2014 21:20, Christian Borntraeger wrote:
> "while using trinity to fuzz KVM, we noticed long stalls on invalid ioctls.
> Lets bail out early on invalid ioctls". or similar?
Okay. David, can you explain how you found it so that I can make up my
mind?
Gleb and Marcelo, a fourth and
I am just disturbed by the fact that we
care about something that is not slow-path but broken beyond repair (why does
userspace call a non-KVM ioctl on a fd of a vcpu from a different thread
(otherwise the mutex would be free)?
Please, can we have an explanation, e.g. something like
"while using trinity to fuzz KVM, we noticed long stalls on invalid ioctls."
@@ -117,12 +117,10 @@ bool kvm_is_mmio_pfn(pfn_t pfn)
 /*
  * Switches to specified vcpu, until a matching vcpu_put()
  */
-int vcpu_load(struct kvm_vcpu *vcpu)
+static void __vcpu_load(struct kvm_vcpu *vcpu)
 {
 	int cpu;

-	if (mutex_lock_killable(&vcpu->mutex))
-		return -EINTR;
On 09/22/2014 12:50 PM, Paolo Bonzini wrote:
> On 20/09/2014 01:03, David Matlack wrote:
>> vcpu ioctls can hang the calling thread if issued while a vcpu is
>> running. If we know ioctl is going to be rejected as invalid anyway,
>> we can fail before trying to take the vcpu mutex.
On 20/09/2014 01:03, David Matlack wrote:
> vcpu ioctls can hang the calling thread if issued while a vcpu is
> running. If we know ioctl is going to be rejected as invalid anyway,
> we can fail before trying to take the vcpu mutex.
>
> This patch does not change functionality, it just makes invalid ioctls
> fail faster.
vcpu ioctls can hang the calling thread if issued while a vcpu is
running. If we know ioctl is going to be rejected as invalid anyway,
we can fail before trying to take the vcpu mutex.
This patch does not change functionality, it just makes invalid ioctls
fail faster.
Signed-off-by: David Matlack
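A sketch of the fail-fast shape (the _IOC_TYPE(ioctl) != KVMIO test is my
reading of "obviously invalid"; the real change is the kvm_main.c hunk
quoted above):

static long kvm_vcpu_ioctl_sketch(struct file *filp, unsigned int ioctl,
                                  unsigned long arg)
{
    struct kvm_vcpu *vcpu = filp->private_data;

    if (_IOC_TYPE(ioctl) != KVMIO) /* cannot be a valid KVM ioctl */
        return -EINVAL;            /* fail fast: vcpu mutex untouched */

    vcpu_load(vcpu);               /* may block behind a running vcpu */
    /* ... dispatch the ioctl ... */
    vcpu_put(vcpu);
    return 0;
}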
There's currently a big lock around everything, and it means that we
can't query sysfs (eg /sys/devices/virtual/misc/hw_random/rng_current)
while the rng is reading. This is a real problem when the rng is slow,
or blocked (eg. virtio_rng with qemu's default /dev/random backend)
This doesn't help
On Mon, Sep 15, 2014 at 06:13:20PM +0200, Michael Büsch wrote:
> On Tue, 16 Sep 2014 00:02:27 +0800
> Amos Kong wrote:
>
> > It doesn't save too much cpu time as expected, just a cleanup.
> >
> > Signed-off-by: Amos Kong
> > ---
> > drivers/char/hw_random/core.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
On Tue, 16 Sep 2014 00:02:27 +0800
Amos Kong wrote:
> It doesn't save too much cpu time as expected, just a cleanup.
>
> Signed-off-by: Amos Kong
> ---
> drivers/char/hw_random/core.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
It doesn't save too much cpu time as expected, just a cleanup.
Signed-off-by: Amos Kong
---
drivers/char/hw_random/core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index aa30a25..c591d7e 100644
--- a/drivers/char/hw_random/core.c
On (Wed) 10 Sep 2014 [17:07:06], Amos Kong wrote:
> It doesn't save too much cpu time as expected, just a cleanup.
Frankly I won't bother with this. It doesn't completely remove all
copying from the mutex, so it's not worthwhile.
> Signed-off-by: Amos Kong
>
... kill dd process.
We have some static variables (eg, current_rng, data_avail, etc) in
hw_random/core.c; they are protected by rng_mutex. I tried to work around
this issue with a udelay(100) after mutex_unlock() in rng_dev_read(). This
gives hwrng_attr_*_show() a chance to get the mutex.
This patch also c
Serializing open/release allows us to fix a refcnt error if we fail
to enable the device, and lets us prevent devices from being unbound
or opened, giving us an opportunity to do bus resets on release. No
restriction is added to serialize binding devices to vfio-pci while
the mutex is held, though.
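Sketched below, under assumed names: a per-device mutex makes open and
release atomic with respect to each other, so a failed enable can unwind
without leaking a reference.

struct vdev_sketch {
    struct mutex lock;  /* serializes open and release */
    int refcnt;
};

static int vdev_enable(struct vdev_sketch *vdev); /* assumed helper */

static int vdev_open_sketch(struct vdev_sketch *vdev)
{
    int ret = 0;

    mutex_lock(&vdev->lock);
    if (!vdev->refcnt) {
        ret = vdev_enable(vdev);
        if (ret)
            goto out;   /* enable failed: refcnt stays at 0 */
    }
    vdev->refcnt++;
out:
    mutex_unlock(&vdev->lock);
    return ret;
}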
On Thu, Jun 12, 2014 at 04:40:29PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:47PM +0900, Joonsoo Kim wrote:
> > Currently, we take the mutex for manipulating the bitmap.
> > This job may be really simple and short so we don't need to sleep
> > if contended. So I change it to a spinlock.
On Thu, Jun 12, 2014 at 12:21:47PM +0900, Joonsoo Kim wrote:
> Currently, we take the mutex for manipulating the bitmap.
> This job may be really simple and short so we don't need to sleep
> if contended. So I change it to a spinlock.
I'm not sure it would be good always.
Ma
Currently, we take the mutex for manipulating the bitmap.
This job may be really simple and short so we don't need to sleep
if contended. So I change it to a spinlock.
Signed-off-by: Joonsoo Kim
diff --git a/mm/cma.c b/mm/cma.c
index 22a5b23..3085e8c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
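The conversion itself is mechanical; a minimal sketch (struct shape
assumed, simplified from mm/cma.c):

struct cma_sketch {
    unsigned long *bitmap;
    spinlock_t lock;    /* was: struct mutex lock; */
};

static void cma_clear_bitmap_sketch(struct cma_sketch *cma,
                                    unsigned long start, unsigned long count)
{
    spin_lock(&cma->lock);  /* short, non-sleeping bitmap update */
    bitmap_clear(cma->bitmap, start, count);
    spin_unlock(&cma->lock);
}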
> > > > This looks dubious
> > > >
> > > > What about using kfree_rcu() instead ?
> > >
> > > It would lead to unbound allocation from userspace.
> >
> > Look at how we did this in commit
> > c3059477fce2d956a0bb3e04357324780c5d8eeb
> > It would lead to unbound allocation from userspace.
>
> Look at how we did this in commit
> c3059477fce2d956a0bb3e04357324780c5d8eeb
>
> > > translate_desc() still uses rcu_read_lock(), it's not clear if the mutex
> > > is really held.
On Mon, Jun 02, 2014 at 02:58:00PM -0700, Eric Dumazet wrote:
> On Tue, 2014-06-03 at 00:30 +0300, Michael S. Tsirkin wrote:
> > All memory accesses are done under some VQ mutex.
> > So lock/unlock all VQs is a faster equivalent of synchronize_rcu()
> > for memory access changes.
... Look at how we did this in commit
c3059477fce2d956a0bb3e04357324780c5d8eeb
That would make VHOST_SET_MEMORY as slow as before (even though once
every few times).
translate_desc() still uses rcu_read_lock(), it's not clear if the mutex
is really held.
Yes, vhost_get_vq_desc must be called with the vq mutex held.
The rcu_read_lock/unlock in translate_desc is unnecessary.
... Look at how we did this in commit
c3059477fce2d956a0bb3e04357324780c5d8eeb
>
> > translate_desc() still uses rcu_read_lock(), it's not clear if the mutex
> > is really held.
>
> Yes, vhost_get_vq_desc must be called with the vq mutex held.
>
> The rcu_read_lock/unlock in translate_desc is unnecessary.
Yep,
On 03/06/2014 15:35, Vlad Yasevich wrote:
> Yes, vhost_get_vq_desc must be called with the vq mutex held.
>
> The rcu_read_lock/unlock in translate_desc is unnecessary.
If that's true, then does dev->memory really need to be rcu protected?
It appears to always be read un
On 06/03/2014 08:48 AM, Paolo Bonzini wrote:
> On 02/06/2014 23:58, Eric Dumazet wrote:
>> This looks dubious
>>
>> What about using kfree_rcu() instead ?
>
> It would lead to unbound allocation from userspace.
>
>> translate_desc() still uses rcu_read_lock(), it's not clear if the mutex
>> is really held.
On 02/06/2014 23:58, Eric Dumazet wrote:
This looks dubious
What about using kfree_rcu() instead ?
It would lead to unbound allocation from userspace.
translate_desc() still uses rcu_read_lock(), it's not clear if the mutex
is really held.
Yes, vhost_get_vq_desc must be called with the vq mutex held.
On Tue, 2014-06-03 at 00:30 +0300, Michael S. Tsirkin wrote:
> All memory accesses are done under some VQ mutex.
> So lock/unlock all VQs is a faster equivalent of synchronize_rcu()
> for memory access changes.
> Some guests cause a lot of these changes, so it's helpful
> to make them faster.
All memory accesses are done under some VQ mutex.
So lock/unlock all VQs is a faster equivalent of synchronize_rcu()
for memory access changes.
Some guests cause a lot of these changes, so it's helpful
to make them faster.
Reported-by: "Gonglei (Arei)"
Signed-off-by: Michael S. Tsirkin
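The trick itself, as a sketch (assuming the vqs array of pointers; not the
literal patch): once every VQ mutex has been taken and dropped, no worker
can still be using the old memory table.

static void vhost_lock_unlock_all_sketch(struct vhost_dev *d)
{
    int i;

    for (i = 0; i < d->nvqs; ++i) {
        mutex_lock(&d->vqs[i]->mutex);   /* waits out the current user */
        mutex_unlock(&d->vqs[i]->mutex);
    }
    /* acts as a barrier, like synchronize_rcu() but cheaper here */
}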
On Thu, 9 May 2013 18:03:09 -0300
Rafael Aquini wrote:
> On Thu, May 09, 2013 at 10:53:48AM -0400, Luiz Capitulino wrote:
> > This commit moves the balloon_lock mutex out of the fill_balloon()
> > and leak_balloon() functions to their callers.
> >
> > The reason for this change is that the next commit will introduce
> > a shrinker callback for the balloon driver, which will also call
> > leak_balloon() but will require different locking.
On Thu, May 09, 2013 at 10:53:48AM -0400, Luiz Capitulino wrote:
> This commit moves the balloon_lock mutex out of the fill_balloon()
> and leak_balloon() functions to their callers.
>
> The reason for this change is that the next commit will introduce
> a shrinker callback for the balloon driver, which will also call
> leak_balloon() but will require different locking.
This commit moves the balloon_lock mutex out of the fill_balloon()
and leak_balloon() functions to their callers.
The reason for this change is that the next commit will introduce
a shrinker callback for the balloon driver, which will also call
leak_balloon() but will require different locking.
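After the move, a caller looks roughly like this (a sketch under assumed
signatures, not the actual commit): the lock now brackets the call site,
so a future shrinker can pick its own locking.

static void balloon_resize_sketch(struct virtio_balloon *vb, s64 diff)
{
    mutex_lock(&vb->balloon_lock);  /* taken by the caller now */
    if (diff > 0)
        fill_balloon(vb, diff);
    else if (diff < 0)
        leak_balloon(vb, -diff);
    mutex_unlock(&vb->balloon_lock);
}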
@@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct
vhost_virtqueue *vq)
int head, ret;
u8 target;
+ mutex_lock(&vq->mutex);
/*
* We can handle the vq only after the endpoint is setup by calling the
* VHOST_SCSI_SET_ENDPOINT ioctl.
-*
-* TODO
(struct vhost_test *n)
size_t len, total_len = 0;
void *private;
- private = rcu_dereference_check(vq->private_data, 1);
- if (!private)
- return;
mutex_lock(&vq->mutex);
+ private = vq->private_data;
+
void handle_tx(struct vhost_net *net)
struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
bool zcopy, zcopy_used;
- /* TODO: check that we are running from vhost_worker? */
- sock = rcu_dereference_check(vq->private_data, 1);
+ mutex_lock(&vq->mutex);
+
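The common pattern in all three hunks, condensed into one sketch: take the
vq mutex first, then read private_data as a plain pointer instead of via
rcu_dereference.

static void handle_vq_sketch(struct vhost_virtqueue *vq)
{
    void *private;

    mutex_lock(&vq->mutex);
    private = vq->private_data; /* protected by vq->mutex, not RCU */
    if (private) {
        /* ... process the ring ... */
    }
    mutex_unlock(&vq->mutex);
}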
From: Umesh Deshpande
Add the new mutex that protects shared state between ram_save_live
and the iothread. If the iothread mutex has to be taken together
with the ramlist mutex, the iothread shall always be _outside_.
Signed-off-by: Paolo Bonzini
Signed-off-by: Umesh Deshpande
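The ordering rule, stated as code (a QEMU-style sketch; the helper names
assume the lock wrappers discussed in this thread):

static void ram_list_change_sketch(void)
{
    qemu_mutex_lock_iothread();  /* outer lock: always taken first */
    qemu_mutex_lock_ramlist();   /* inner lock */
    /* ... add or remove a RAMBlock ... */
    qemu_mutex_unlock_ramlist();
    qemu_mutex_unlock_iothread();
}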
On Wed, 24 Oct 2012, Sasha Levin wrote:
> We already have something to wrap pthread with mutex_[init,lock,unlock]
> calls. This patch creates a new struct mutex abstraction and moves
> everything to work with it.
>
> Signed-off-by: Sasha Levin
I applied this patch from the RFC
We already have something to wrap pthread with mutex_[init,lock,unlock]
calls. This patch creates a new struct mutex abstraction and moves
everything to work with it.
Signed-off-by: Sasha Levin
---
tools/kvm/hw/serial.c | 10 +-
tools/kvm/include/kvm/mutex.h | 22
On Sun, Sep 16, 2012 at 11:50:30AM +0300, Michael S. Tsirkin wrote:
> vcpu mutex can be held for unlimited time so
> taking it with mutex_lock on an ioctl is wrong:
> one process could be passed a vcpu fd and
> call this ioctl on the vcpu used by another process,
> it will then be unkillable until the owner exits.
vcpu mutex can be held for unlimited time so
taking it with mutex_lock on an ioctl is wrong:
one process could be passed a vcpu fd and
call this ioctl on the vcpu used by another process,
it will then be unkillable until the owner exits.
Call mutex_lock_killable instead and return status.
Note
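The change described amounts to this (sketch; compare the __vcpu_load hunk
quoted earlier, which later reworked the same function):

int vcpu_load_sketch(struct kvm_vcpu *vcpu)
{
    int cpu;

    if (mutex_lock_killable(&vcpu->mutex))
        return -EINTR;  /* a fatal signal ends the wait */
    cpu = get_cpu();
    /* ... architecture vcpu load ... */
    put_cpu();
    return 0;
}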
This patch fixes the initialization of the following variables:
ndev->io_tx_lock
ndev->io_rx_lock
ndev->io_tx_cond
ndev->io_rx_cond
Signed-off-by: Asias He
---
tools/kvm/virtio/net.c |6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/kvm/virtio/net.c b/tools/kvm/virtio/net.c
On Sat, Aug 27, 2011 at 02:09:46PM -0400, Umesh Deshpande wrote:
> This patch implements migrate_ram mutex, which protects the RAMBlock list
> traversal in the migration thread during the transfer of a ram from their
> addition/removal from the iothread.
>
> Note: Combination of iothread mutex and migration thread mutex works as a
> rw-lock.
On 08/29/2011 05:04 AM, Stefan Hajnoczi wrote:
On Sat, Aug 27, 2011 at 7:09 PM, Umesh Deshpande wrote:
This patch implements migrate_ram mutex, which protects the RAMBlock list
traversal in the migration thread during the transfer of a ram from their
addition/removal from the iothread.
Note
On Sat, Aug 27, 2011 at 7:09 PM, Umesh Deshpande wrote:
> This patch implements migrate_ram mutex, which protects the RAMBlock list
> traversal in the migration thread during the transfer of a ram from their
> addition/removal from the iothread.
>
> Note: Combination of iothread mutex and migration thread mutex works as a
> rw-lock.
This patch implements the migrate_ram mutex, which protects the RAMBlock list
traversal in the migration thread from concurrent addition/removal of blocks
by the iothread.
Note: The combination of the iothread mutex and the migration thread mutex
works as a rw-lock. Both mutexes are acquired while modifying the ram_list
members or RAM blocks.
On Tue, Aug 23, 2011 at 11:12:48PM -0400, Umesh Deshpande wrote:
> ramlist mutex is implemented to protect the RAMBlock list traversal in the
> migration thread from their addition/removal from the iothread.
>
> Note: Combination of iothread mutex and migration thread mutex works as a
> rw-lock.
The ramlist mutex is implemented to protect the RAMBlock list traversal in the
migration thread from concurrent addition/removal of blocks by the iothread.
Note: The combination of the iothread mutex and the migration thread mutex
works as a rw-lock. Both mutexes are acquired while modifying the ram_list
members or RAM blocks.
On Tue, Aug 23, 2011 at 01:41:48PM +0200, Paolo Bonzini wrote:
> On 08/23/2011 11:17 AM, Marcelo Tosatti wrote:
> > > typedef struct RAMList {
> > > +    QemuMutex mutex;
> > >      uint8_t *phys_dirty;
On 08/23/2011 11:17 AM, Marcelo Tosatti wrote:
> > typedef struct RAMList {
> > +    QemuMutex mutex;
> >      uint8_t *phys_dirty;
> >      QLIST_HEAD(ram, RAMBlock) blocks;
> >      QLIST_HEAD(, RAMBlock) blocks_mru;
>
> A comment on w
On Tue, Aug 23, 2011 at 06:15:33AM -0300, Marcelo Tosatti wrote:
> On Tue, Aug 16, 2011 at 11:56:37PM -0400, Umesh Deshpande wrote:
> > ramlist mutex is implemented to protect the RAMBlock list traversal in the
> > migration thread from their addition/removal from the iothread.
>
On Tue, Aug 16, 2011 at 11:56:37PM -0400, Umesh Deshpande wrote:
> ramlist mutex is implemented to protect the RAMBlock list traversal in the
> migration thread from their addition/removal from the iothread.
>
> Signed-off-by: Umesh Deshpande
> ---
> cpu-all.h | 2 ++
... whereas the sequence of blocks doesn't matter for the migration code.
This way we don't have to acquire the mutex for block list traversals.
I'm not sure... as I said, the MRU list is on a fast path and
restricting it to that fast path keeps us honest. Also, the non-MRU
list is almost never accessed outside the migration thread.
... mru order, whereas the sequence
of blocks doesn't matter for the migration code. This way we don't have
to acquire the mutex for block list traversals.
- Umesh
On 08/16/2011 08:56 PM, Umesh Deshpande wrote:
@@ -3001,8 +3016,10 @@ void qemu_ram_free_from_ptr(ram_addr_t addr)
QLIST_FOREACH(block,&ram_list.blocks, next) {
if (addr == block->offset) {
+qemu_mutex_lock_ramlist();
QLIST_REMOVE(block, next);
The ramlist mutex is implemented to protect the RAMBlock list traversal in the
migration thread from concurrent addition/removal of blocks by the iothread.
Signed-off-by: Umesh Deshpande
---
 cpu-all.h     |  2 ++
 exec.c        | 19 +++
 qemu-common.h |  2 ++
 3 files changed, 23 insertions(+)
This temporarily requires our own initialization service as we are still
using the !IOTHREAD version of qemu_init_main_loop.
Signed-off-by: Jan Kiszka
---
cpus.c | 57 +++--
1 files changed, 31 insertions(+), 26 deletions(-)
diff --git a/cpus.c b/cpus.c
Makes IRQ allocation for new devices thread-safe.
Signed-off-by: Sasha Levin
---
tools/kvm/irq.c | 20 +---
1 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/tools/kvm/irq.c b/tools/kvm/irq.c
index 15f4702..f92123d 100644
--- a/tools/kvm/irq.c
+++ b/tools/kvm/irq.c
* Pekka Enberg wrote:
> The pthread_mutex_{lock|unlock} functions return non-zero, not negative number
> upon error. Fix that wrong assumption in the code.
glibc/pthreads mutex API semantics are pretty silly IMO.
I *think* it would be better to try to match the kernel API here, and p
struct serial8250_device *dev = &devices[0];
- if (pthread_mutex_lock(&dev->mutex) < 0)
+ if (pthread_mutex_lock(&dev->mutex) != 0)
die("pthread_mutex_lock");
serial8250__receive(self, dev);
@@ -133,7 +133,7 @@ void serial8250__in
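The reviewer's suggestion amounts to wrapping pthreads in kernel-style
helpers; a sketch of such a wrapper (tools/kvm grew one later; names
assumed):

#include <pthread.h>
#include <stdlib.h>

/* pthread_mutex_lock() returns 0 or a positive errno value, never a
 * negative number, hence the != 0 test */
static inline void mutex_lock(pthread_mutex_t *mutex)
{
    if (pthread_mutex_lock(mutex) != 0)
        exit(EXIT_FAILURE); /* stands in for die() */
}

static inline void mutex_unlock(pthread_mutex_t *mutex)
{
    if (pthread_mutex_unlock(mutex) != 0)
        exit(EXIT_FAILURE);
}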
);
// Set size limit
SendMessage(hTextBox, EM_LIMITTEXT, TEXTBOX_LIMIT, 0);
-// Create mutex for text buffer access
-hTextBufferMutex = CreateMutex(NULL, FALSE, NULL);
+// Initialize critical section object for text buffer access
+InitializeCriticalSection(&crit
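The replacement pattern from the hunk above, as a compilable sketch (Win32
C; the buffer function is illustrative):

#include <windows.h>

static CRITICAL_SECTION text_buffer_lock; /* was: HANDLE + CreateMutex() */

static void text_buffer_update_sketch(void)
{
    /* InitializeCriticalSection(&text_buffer_lock) must run once first */
    EnterCriticalSection(&text_buffer_lock);  /* process-local, cheap */
    /* ... modify the shared text buffer ... */
    LeaveCriticalSection(&text_buffer_lock);
}

A CRITICAL_SECTION is process-local and mostly user-mode, so it is a better
fit than a kernel mutex handle for a buffer only one process touches.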
When the guest acknowledges an interrupt, it sends an EOI message to the local
apic, which broadcasts it to the ioapic. To handle the EOI, we need to take
the ioapic mutex.
On large guests, this causes a lot of contention on this mutex. Since large
guests usually don't route interrupts via the ioapic (they use msi instead),
this is completely unnecessary.
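A sketch of the usual fix for this kind of contention (the handled_vectors
bitmap is an assumption about the approach, not a quote of the patch): test
a lockless bitmap of vectors the ioapic actually routes, and take the mutex
only when it hits.

void ioapic_eoi_sketch(struct kvm_ioapic *ioapic, int vector)
{
    if (!test_bit(vector, ioapic->handled_vectors))
        return;                  /* msi-only guests: no lock at all */

    mutex_lock(&ioapic->lock);   /* slow path: vector is routed here */
    /* ... clear remote IRR, possibly retrigger ... */
    mutex_unlock(&ioapic->lock);
}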
struct file *filp,
if (kvm->arch.vpit)
r = 0;
create_pit_unlock:
- up_write(&kvm->slots_lock);
+ mutex_unlock(&kvm->slots_lock);
break;
case KVM_IRQ_LINE_STATUS:
case KVM_I
On 12/28/2009 10:37 PM, Marcelo Tosatti wrote:
On Mon, Dec 28, 2009 at 02:08:30PM +0200, Avi Kivity wrote:
When the guest acknowledges an interrupt, it sends an EOI message to the local
apic, which broadcasts it to the ioapic. To handle the EOI, we need to take
the ioapic mutex.
On large
On Mon, Dec 28, 2009 at 02:08:30PM +0200, Avi Kivity wrote:
> When the guest acknowledges an interrupt, it sends an EOI message to the local
> apic, which broadcasts it to the ioapic. To handle the EOI, we need to take
> the ioapic mutex.
>
> On large guests, this causes a lot of contention on this mutex.
==
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -161,7 +161,7 @@ struct kvm_memslots {
struct kvm {
spinlock_t mmu_lock;
spinlock_t requests_lock;
- struct rw_semaphore slots_lock;
+ struct mutex slots_lock;
struct mm_struct *mm;
On 11/12/2009 01:49 AM, Jan Kiszka wrote:
Needed to avoid some missing symbols when KVM is disabled.
Applied, thanks.
--
error compiling committee.c: too many arguments to function
Needed to avoid some missing symbols when KVM is disabled.
Signed-off-by: Jan Kiszka
---
vl.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/vl.c b/vl.c
index 594ca34..6556075 100644
--- a/vl.c
+++ b/vl.c
@@ -3443,7 +3443,7 @@ void qemu_notify_event(void)
}
}
On Wed, Aug 12, 2009 at 12:22:34PM +0300, Avi Kivity wrote:
> On 08/12/2009 12:11 PM, Gleb Natapov wrote:
>> On Wed, Aug 12, 2009 at 11:29:00AM +0300, Avi Kivity wrote:
>>
>>> On 08/11/2009 03:31 PM, Gleb Natapov wrote:
>>>
>>>> Change irq_lock from mutex to spinlock. We do not sleep while holding it.
On 08/12/2009 12:11 PM, Gleb Natapov wrote:
On Wed, Aug 12, 2009 at 11:29:00AM +0300, Avi Kivity wrote:
On 08/11/2009 03:31 PM, Gleb Natapov wrote:
Change irq_lock from mutex to spinlock. We do not sleep while holding
it.
But why change?
Isn't it more lightweight?
On Wed, Aug 12, 2009 at 11:29:00AM +0300, Avi Kivity wrote:
> On 08/11/2009 03:31 PM, Gleb Natapov wrote:
>> Change irq_lock from mutex to spinlock. We do not sleep while holding
>> it.
>>
>
> But why change?
>
Isn't it more lightweight? For the remainin
On 08/11/2009 03:31 PM, Gleb Natapov wrote:
Change irq_lock from mutex to spinlock. We do not sleep while holding
it.
But why change?
The only motivation I can see is to allow injection from irqfd and
interrupt contexts without requiring a tasklet/work. But that needs
spin_lock_irqsave
Change irq_lock from mutex to spinlock. We do not sleep while holding
it.
Signed-off-by: Gleb Natapov
---
include/linux/kvm_host.h |2 +-
virt/kvm/irq_comm.c | 28 ++--
virt/kvm/kvm_main.c |2 +-
3 files changed, 16 insertions(+), 16 deletions(-)
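The conversion in miniature (a sketch, not the patch itself); per Avi's
point above, callers in interrupt context would need the _irqsave variant
instead:

static void set_irq_sketch(struct kvm *kvm)
{
    spin_lock(&kvm->irq_lock);   /* was: mutex_lock(&kvm->irq_lock) */
    /* routing and delivery: nothing under this lock may sleep */
    spin_unlock(&kvm->irq_lock);
}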
Mark McLoughlin wrote:
This allows the guest vcpu thread to exit while the I/O thread is
churning away.
+ kvm_sleep_begin();
What if the nic is hot-unplugged here?
len += qemu_sendv_packet(n->vc, out_sg, out_num);
n is freed, no?
+ kvm_sleep_end();
This allows the guest vcpu thread to exit while the I/O thread is
churning away.
Signed-off-by: Mark McLoughlin <[EMAIL PROTECTED]>
---
qemu/hw/virtio-net.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/qemu/hw/virtio-net.c b/qemu/hw/virtio-net.c
index 0612f5f..aa1c107
The idea here is that with GSO, packets are much larger
and we can allow the vcpu threads to e.g. process irq
acks during the window where we're reading these
packets from the tapfd.
One known issue with this is that it triggers a subtle
SMP race in the kernel's posix-timers and signalfd code.
See
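The window being described looks like this (a sketch using the
kvm_sleep_begin/end wrappers from the hunk quoted earlier, which drop and
retake the global mutex; types follow old qemu):

static void net_tx_one_sketch(VirtIONet *n, const struct iovec *sg, int num)
{
    kvm_sleep_begin();                 /* release the global mutex */
    qemu_sendv_packet(n->vc, sg, num); /* slow: large GSO packets */
    kvm_sleep_end();                   /* reacquire it */
}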
s->fd, NULL, &sbuf, &f) >=0 ? sbuf.len : -1;
> > #else
> >
> Maybe do it only when GSO is actually used by the guest/tap.
> Otherwise it can cause some ctx thrashing, right?
(Strange habit you have of "top commenting" on patches :-)
I've been meaning t
Mark McLoughlin wrote:
The idea here is that with GSO, packets are much larger
and we can allow the vcpu threads to e.g. process irq
acks during the window where we're reading these
packets from the tapfd.
Signed-off-by: Mark McLoughlin <[EMAIL PROTECTED]>
---
qemu/vl.c |2 ++
1 files chang