On 1/5/22 15:01, Emanuele Giuseppe Esposito wrote:
> job mutex will be used to protect the job struct elements and list,
> replacing AioContext locks.
> 
> Right now we use a shared lock for all jobs, in order to keep things
> simple. Once the AioContext lock is gone, we can introduce per-job
> locks.
Not even needed, in my opinion; this is not a fast path. But we'll see.
> To simplify the switch from AioContext to job lock, introduce
> *nop* lock/unlock functions and macros. Once everything is protected
> by the job lock, we can add the mutex and remove the AioContext.
> 
> Since job_mutex is already being used, add static
> real_job_{lock/unlock}.
Out of curiosity, what breaks if the real job lock is used from the
start? (It should probably be mentioned in the commit message.)
> -static void job_lock(void)
> +static void real_job_lock(void)
>  {
>      qemu_mutex_lock(&job_mutex);
>  }
>  
> -static void job_unlock(void)
> +static void real_job_unlock(void)
>  {
>      qemu_mutex_unlock(&job_mutex);
>  }
Would it work to

    #define job_lock real_job_lock
    #define job_unlock real_job_unlock

instead of having to do the changes below?
Paolo
> @@ -449,21 +460,21 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
>          return;
>      }
>  
> -    job_lock();
> +    real_job_lock();
>      if (job->busy) {
> -        job_unlock();
> +        real_job_unlock();
>          return;
>      }
>  
>      if (fn && !fn(job)) {
> -        job_unlock();
> +        real_job_unlock();
>          return;
>      }
>  
>      assert(!job->deferred_to_main_loop);
>      timer_del(&job->sleep_timer);
>      job->busy = true;
> -    job_unlock();
> +    real_job_unlock();
>      aio_co_enter(job->aio_context, job->co);
>  }
> @@ -480,13 +491,13 @@ void job_enter(Job *job)
>   * called explicitly. */
>  static void coroutine_fn job_do_yield(Job *job, uint64_t ns)
>  {
> -    job_lock();
> +    real_job_lock();
>      if (ns != -1) {
>          timer_mod(&job->sleep_timer, ns);
>      }
>      job->busy = false;
>      job_event_idle(job);
> -    job_unlock();
> +    real_job_unlock();
>      qemu_coroutine_yield();
>  
>      /* Set by job_enter_cond() before re-entering the coroutine. */