Thanks for the comments Bharata.

On Fri, Nov 27, 2020 at 11:12 AM Bharata B Rao <bhar...@linux.ibm.com> wrote:
> ...
> > +    qemu_mutex_lock(&drc->async_hcall_states_lock);
> > +retry:
> > +    if (qemu_guest_getrandom(&token, sizeof(token), &err) < 0) {
> > +        error_report_err(err);
> > +        g_free(state);
> > +        return 0;
> > +    }
>
> Returning w/o releasing the lock.

Ah, right. Will fix it.
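
Something like this (untested, just to confirm I read your comment
right, reusing the variables from the snippet above):

    qemu_mutex_lock(&drc->async_hcall_states_lock);
retry:
    if (qemu_guest_getrandom(&token, sizeof(token), &err) < 0) {
        error_report_err(err);
        g_free(state);
        /* don't leak the lock on the error path */
        qemu_mutex_unlock(&drc->async_hcall_states_lock);
        return 0;
    }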

>
> > +
> > +    if (!token) /* Token should be non-zero */
> > +        goto retry;
...
> > +    SpaprDrcDeviceAsyncHCallState *state, *next;
> > +
> > +    qemu_mutex_lock(&drc->async_hcall_states_lock);
> > +    QLIST_FOREACH_SAFE(state, &drc->async_hcall_states, node, next) {
> > +        if (state->continue_token == token) {
> > +            state->func = func;
> > +            state->data = data;
> > +            qemu_thread_create(&state->thread, "sPAPR Async HCALL",
> > +                               spapr_drc_async_hcall_runner, state,
> > +                               QEMU_THREAD_JOINABLE);
> > +            break;
> > +        }
> > +    }
>
> Looks like QLIST_FOREACH should be enough here as you don't
> seem to be removing any list entry in this path.

Okay, will switch to QLIST_FOREACH.

>
> > +    qemu_mutex_unlock(&drc->async_hcall_states_lock);
> > +}
> > +
> > +/*
> > + * spapr_drc_finish_async_hcalls
> > + * Waits for all pending async requests to complete
> > + * thier execution and free the states
> > + */
> > +static void spapr_drc_finish_async_hcalls(SpaprDrc *drc)
> > +{
> > +    SpaprDrcDeviceAsyncHCallState *state, *next;
> > +
> > +    if (QLIST_EMPTY(&drc->async_hcall_states)) {
> > +        return;
> > +    }
> > +
> > +    QLIST_FOREACH_SAFE(state, &drc->async_hcall_states, node, next) {
> > +        qemu_thread_join(&state->thread);
> > +        QLIST_REMOVE(state, node);
> > +        g_free(state);
> > +    }
> > +}
>
> Why is it safe to iterate the list here w/o the lock?

This is called in the reset path; the chance that a previous hcall
checking the pending status is still running at this stage is
extremely low. I can still hold the lock to be safe though, something
like the sketch at the bottom of this mail.

>
> Regards,
> Bharata.
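
Roughly like this (untested sketch, same names as in the patch):

static void spapr_drc_finish_async_hcalls(SpaprDrc *drc)
{
    SpaprDrcDeviceAsyncHCallState *state, *next;

    qemu_mutex_lock(&drc->async_hcall_states_lock);
    QLIST_FOREACH_SAFE(state, &drc->async_hcall_states, node, next) {
        /*
         * Assumes spapr_drc_async_hcall_runner() doesn't take
         * async_hcall_states_lock on its exit path; if it does, the
         * join has to move outside the critical section.
         */
        qemu_thread_join(&state->thread);
        QLIST_REMOVE(state, node);
        g_free(state);
    }
    qemu_mutex_unlock(&drc->async_hcall_states_lock);
}

The QLIST_EMPTY() early return isn't needed anymore since iterating an
empty list under the lock is a no-op anyway.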