On Sat, Feb 26, 2011 at 01:29:01PM +0100, Jan Kiszka wrote:
> >     at 
> > /var/tmp/portage/app-emulation/qemu-kvm-0.14.0/work/qemu-kvm-0.14.0/qemu-kvm.c:1466
> > #12 0x00007ffff77bb944 in start_thread () from /lib/libpthread.so.0
> > #13 0x00007ffff5e491dd in clone () from /lib/libc.so.6
> > (gdb)
> 
> That's a spice bug. In fact, there are a lot of
> qemu_mutex_lock/unlock_iothread calls in that subsystem. I bet at least
> a few of them can cause even more subtle problems.
> 
> Two general issues with dropping the global mutex like this:
>  - The caller of mutex_unlock is responsible for maintaining
>    cpu_single_env across the unlocked phase (that's related to the
>    abort above).
>  - Dropping the lock in the middle of a callback is risky. That may
>    allow re-entry into code sections that weren't designed for it
>    (I'm skeptical about the side effects of
>    qemu_spice_vm_change_state_handler - why drop the lock here?).
> 
> Spice requires a careful review regarding such issues. Or it could
> pioneer the introduction of its own lock so that we can handle at least
> the related I/O activity for the VCPUs without holding the global mutex
> (but I bet it's not the simplest candidate for such a new scheme).
> 
> Jan
> 

I agree with the concern regarding spice.
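
The cpu_single_env rule Jan points out deserves an illustration: any path
that drops the global mutex must save and restore it by hand. A minimal
sketch, assuming qemu-kvm 0.14-era globals (the callback name is made up):

    static void some_spice_callback(void)      /* hypothetical caller */
    {
        CPUState *saved_env = cpu_single_env;  /* save before unlocking */

        qemu_mutex_unlock_iothread();
        /* ... work that must not run under the global mutex ... */
        qemu_mutex_lock_iothread();

        cpu_single_env = saved_env;            /* lock_iothread does not
                                                  restore this for us */
    }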

Regarding the global mutex: TCG and KVM execution behaviour can be made
more similar with respect to locking by dropping qemu_global_mutex during
the generation and execution of TBs.

Of course, for memory or PIO accesses from vcpu context, qemu_global_mutex
must still be acquired.
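
In code, that would look roughly like the following. This is a sketch
only: the locking placement follows the proposal above, while
tcg_exec_unlocked, io_readl_locked and the_device_readl are made-up names:

    /* vcpu thread: TBs are generated and executed without the mutex */
    static void tcg_exec_unlocked(CPUState *env)   /* illustrative */
    {
        qemu_mutex_unlock_iothread();
        cpu_exec(env);              /* tb_find_fast/tb_gen_code + exec */
        qemu_mutex_lock_iothread();
    }

    /* the MMIO/PIO slow path reached from translated code reacquires it */
    static uint32_t io_readl_locked(void *opaque, target_phys_addr_t addr)
    {
        uint32_t val;

        qemu_mutex_lock_iothread();            /* device models need it */
        val = the_device_readl(opaque, addr);  /* hypothetical handler */
        qemu_mutex_unlock_iothread();
        return val;
    }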

With that in place, it becomes easier to justify further improvements
regarding parallelization, such as using a read-write lock for
l1_phys_map / phys_page_find_alloc (see the profile and sketch below).


perf report excerpt (note phys_page_find_alloc among the top symbols):

 21.62%  sh  3d38920b3f          [.] 0x00003d38920b3f
  6.38%  sh  qemu-system-x86_64  [.] phys_page_find_alloc
  4.90%  sh  qemu-system-x86_64  [.] tb_find_fast
  4.34%  sh  qemu-system-x86_64  [.] tlb_flush
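
A minimal sketch of the rwlock idea, assuming the existing
phys_page_find()/phys_page_find_alloc() split in exec.c (phys_map_lock
itself is new and illustrative):

    #include <pthread.h>

    static pthread_rwlock_t phys_map_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Hot lookup path (TLB fills): readers may now run in parallel. */
    PhysPageDesc *phys_page_find(target_phys_addr_t index)
    {
        PhysPageDesc *pd;

        pthread_rwlock_rdlock(&phys_map_lock);
        pd = phys_page_find_alloc(index, 0);   /* alloc=0: lookup only */
        pthread_rwlock_unlock(&phys_map_lock);
        return pd;
    }

    /* The rare registration path, phys_page_find_alloc(index, 1), would
     * take pthread_rwlock_wrlock() instead while growing l1_phys_map. */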
