On 19/01/2022 11:31, Paolo Bonzini wrote:

diff --git a/job.c b/job.c
index f16a4ef542..8a5b710d9b 100644
--- a/job.c
+++ b/job.c
@@ -566,7 +566,7 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
      job->busy = true;
      real_job_unlock();
      job_unlock();
-    aio_co_enter(job->aio_context, job->co);
+    aio_co_enter(job_get_aio_context(job), job->co);
      job_lock();
  }

If you replace aio_co_enter with aio_co_schedule, you can call it without dropping the lock. The difference is that aio_co_schedule always goes through a bottom half.
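
Concretely, something along these lines (only a sketch, reusing the job_get_aio_context() accessor and real_job_unlock() from this series):

    /* Sketch of job_enter_cond_locked() using aio_co_schedule():
     * scheduling only queues a bottom half in the job's AioContext,
     * so the job lock never has to be dropped and the
     * job_unlock()/job_lock() pair around the call goes away. */
    job->busy = true;
    real_job_unlock();
    aio_co_schedule(job_get_aio_context(job), job->co);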

@@ -1138,7 +1138,6 @@ static void coroutine_fn job_co_entry(void *opaque)
      Job *job = opaque;
      int ret;
-    assert(job->aio_context == qemu_get_current_aio_context());
      assert(job && job->driver && job->driver->run);
      job_pause_point(job);
      ret = job->driver->run(job, &job->err);
@@ -1177,7 +1176,7 @@ void job_start(Job *job)
          job->paused = false;
          job_state_transition_locked(job, JOB_STATUS_RUNNING);
      }
-    aio_co_enter(job->aio_context, job->co);
+    aio_co_enter(job_get_aio_context(job), job->co);

Better to use aio_co_schedule here, too, and move it under the previous WITH_JOB_LOCK_GUARD.
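
For example (rough sketch from memory of job_start(), so the surrounding lines may not be exact):

    WITH_JOB_LOCK_GUARD() {
        job->co = qemu_coroutine_create(job_co_entry, job);
        job->pause_count--;
        job->busy = true;
        job->paused = false;
        job_state_transition_locked(job, JOB_STATUS_RUNNING);
        /* Safe with the job lock held: this only schedules a bottom
         * half in the job's AioContext. */
        aio_co_schedule(job_get_aio_context(job), job->co);
    }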

Unfortunately this is not straightforward: aio_co_enter invokes aio_co_schedule only if the target context is different from the current one (the main loop, in the unit tests); otherwise it enters the coroutine directly with qemu_aio_coroutine_enter. So unconditionally replacing it with aio_co_schedule breaks the unit tests' assumptions, as they expect the job to have already executed by the time control returns.
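
For reference, aio_co_enter() in util/async.c does roughly this (quoted from memory, so please double-check against the tree):

    void aio_co_enter(AioContext *ctx, Coroutine *co)
    {
        if (ctx != qemu_get_current_aio_context()) {
            /* other context: always defer to a bottom half */
            aio_co_schedule(ctx, co);
            return;
        }

        if (qemu_in_coroutine()) {
            /* already in a coroutine: queue for wakeup */
            Coroutine *self = qemu_coroutine_self();
            assert(self != co);
            QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next);
        } else {
            /* same context, outside coroutine context: enter directly,
             * so the job has already run when this returns */
            qemu_aio_coroutine_enter(ctx, co);
        }
    }

The unit tests rely on the last branch: running from the main loop with a main-loop job, they see the job's effects as soon as the call returns.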

A possible solution is to aio_poll() on the condition we want to assert, waiting for the scheduled BH to run. But I don't know whether the test still verifies anything useful at that point.
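
Something like this in the tests (poll_until() is just a hypothetical helper, not something in the tree; AIO_WAIT_WHILE() on the negated condition would be roughly equivalent):

    /* Hypothetical test helper: spin the main loop until the scheduled
     * bottom half has run far enough that the condition we want to
     * assert becomes true. */
    static void poll_until(bool (*cond)(void *), void *opaque)
    {
        while (!cond(opaque)) {
            aio_poll(qemu_get_aio_context(), true);
        }
    }

The downside is that the test then only checks that the condition eventually becomes true, not that it already holds when job_start() returns.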

Thank you,
Emanuele

