On Fri, Jun 01, 2018 at 07:26:03AM -0500, Eric Blake wrote:
> On 05/31/2018 04:17 PM, Ari Sundholm wrote:
> > +static void blk_log_writes_co_do_file(void *opaque)
> > +{
> > + BlkLogWritesFileReq *fr = opaque;
> > +
> > + fr->file_ret = fr->func(fr);
> > +
> > + fr->r->done++;
>
> Two non-atomic increments...
>
> > + qemu_coroutine_enter_if_inactive(fr->r->co);
> > +}
> > +
> > +static int coroutine_fn
> > +blk_log_writes_co_log(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
> > + QEMUIOVector *qiov, int flags,
> > + int (*file_func)(BlkLogWritesFileReq *r),
> > + uint64_t entry_flags)
> > +{
>
> > + qemu_coroutine_enter(co_file);
> > + qemu_coroutine_enter(co_log);
> > +
> > + while (r.done < 2) {
> > + qemu_coroutine_yield();
> > + }
>
> ...used as the condition for waiting. Since the point of coroutines is to
> allow (restricted) parallel operation, there's a chance that the coroutine
> implementation can be utilizing parallel threads; if that's the case, then
> on the rare race when both threads try to increment at near the same time,
> they can both read 0 and write 1, at which point this wait loop would be an
> infinite loop. You're probably better off using atomics (even if I'm wrong
> about coroutines being able to race each other on the increment, as the
> other point of coroutines is that they provide restricted parallelism where
> you can also implement them in only a single thread because of well-defined
> yield points).

In this case the coroutines run from a single event loop (the
BlockDriverState's AioContext), so they cannot race.

As QEMU transitions to a multi-queue block layer we will need to think
about parallelism more.  But the multi-queue block layer isn't
implemented yet, so I prefer writing straightforward code now without
trying to anticipate what parallelism issues might arise in the future.

Stefan