* Avi Kivity <[EMAIL PROTECTED]> [2008-04-26 02:23]:
> 
> Please reuse qemu_mutex for this, no need for a new one.

I'm having a little trouble wrapping my head around all of the locking
here.  If I avoid qemu_mutex and use a new one, I've got everything
working.  However, attempting to use qemu_mutex is stepping into a pile
of locking crap.  I'm sure there is a good reason... 

The current code looks like this:

Thread1:
main()
  kvm_qemu_init() // mutex_lock()
  machine->init()
   pc_init1()
    pc_new_cpu()
     cpu_init()
      cpu_x86_init()
       kvm_init_new_ap() // create vcpu Thread2
       <--      
      <--      
     <--      
    <--      
   <--      
  <--      
  kvm_main_loop() // mutex_unlock()
                        
Thread2:
ap_main_loop()
 /* vcpu init */
 kvm_main_loop_cpu()
  kvm_main_loop_wait() // mutex_unlock() on enter, lock on exit
   kvm_eat_signals()   // mutex_lock() on enter, unlock on exit
   <--
  <--
 <--


It wedges up in kvm_init_new_ap() if I attempt to acquire qemu_mutex.
Quite obvious after I looked at the call trace and discovered
kvm_qemu_init() locking on exit.  I see various other functions that
unlock and then lock; I really don't want to wade into this mess...
rather whoever cooked it up should do some cleanup.  I tried the
unlock, then re-lock on exit in kvm_init_new_ap(), but that also wedged.

Here is a rework with a new flag in vcpu_info indicating vcpu
creation.  Tested with 64 1-VCPU guests booting with a 1-second delay
between launches, and a single 16-way SMP guest boot.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
[EMAIL PROTECTED]


diffstat output:
 qemu-kvm.c |   14 ++++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)

Signed-off-by: Ryan Harper <[EMAIL PROTECTED]>
---
diff --git a/qemu/qemu-kvm.c b/qemu/qemu-kvm.c
index 78127de..2768ea5 100644
--- a/qemu/qemu-kvm.c
+++ b/qemu/qemu-kvm.c
@@ -31,7 +31,9 @@ extern int smp_cpus;
 static int qemu_kvm_reset_requested;
 
 pthread_mutex_t qemu_mutex = PTHREAD_MUTEX_INITIALIZER;
+pthread_mutex_t vcpu_mutex = PTHREAD_MUTEX_INITIALIZER;
 pthread_cond_t qemu_aio_cond = PTHREAD_COND_INITIALIZER;
+pthread_cond_t qemu_vcpuup_cond = PTHREAD_COND_INITIALIZER;
 __thread struct vcpu_info *vcpu;
 
 struct qemu_kvm_signal_table {
@@ -53,6 +55,7 @@ struct vcpu_info {
     int stop;
     int stopped;
     int reload_regs;
+    int created;
 } vcpu_info[256];
 
 pthread_t io_thread;
@@ -369,6 +372,10 @@ static void *ap_main_loop(void *_env)
     sigfillset(&signals);
     sigprocmask(SIG_BLOCK, &signals, NULL);
     kvm_create_vcpu(kvm_context, env->cpu_index);
+    pthread_mutex_lock(&vcpu_mutex);
+    vcpu->created = 1;
+    pthread_cond_signal(&qemu_vcpuup_cond);
+    pthread_mutex_unlock(&vcpu_mutex);
     kvm_qemu_init_env(env);
     kvm_main_loop_cpu(env);
     return NULL;
@@ -388,9 +395,12 @@ static void kvm_add_signal(struct qemu_kvm_signal_table *sigtab, int signum)
 
 void kvm_init_new_ap(int cpu, CPUState *env)
 {
+    pthread_mutex_lock(&vcpu_mutex);
     pthread_create(&vcpu_info[cpu].thread, NULL, ap_main_loop, env);
-    /* FIXME: wait for thread to spin up */
-    usleep(200);
+    while (vcpu_info[cpu].created == 0) {
+        pthread_cond_wait(&qemu_vcpuup_cond, &vcpu_mutex);
+    }
+    pthread_mutex_unlock(&vcpu_mutex);
 }
 
 static void qemu_kvm_init_signal_tables(void)

_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel