On Tue, Nov 11, 2025 at 04:48:04PM +0100, Kevin Wolf wrote:
> Am 11.11.2025 um 15:43 hat Daniel P. Berrangé geschrieben:
> > On Wed, Nov 05, 2025 at 03:57:29PM -0600, Eric Blake wrote:
> > > On Tue, Nov 04, 2025 at 11:13:48AM +0000, Daniel P. Berrangé wrote:
> > > > On Mon, Nov 03, 2025 at 02:10:57PM -0600, Eric Blake wrote:
> > > > > The point of QIONetListener is to allow a server to listen to more
> > > > > than one socket address at a time, and respond to clients in a
> > > > > first-come-first-serve order across any of those addresses.  While
> > > > > some servers (like NBD) really do want to serve multiple simultaneous
> > > > > clients, many other servers only care about the first client to
> > > > > connect, and will immediately deregister the callback, possibly by
> > > > > dropping their reference to the QIONetListener.  The existing code
> > > > > ensures that all other pending callbacks remain safe once the first
> > > > > callback drops the listener, by adding an extra reference to the
> > > > > listener for each GSource created, where those references pair to the
> > > > > eventual teardown of each GSource after a given callback has been
> > > > > serviced or aborted.  But it is equally acceptable to hoist the
> > > > > reference to the listener outside the loop - as long as there is a
> > > > > callback function registered, it is sufficient to have a single
> > > > > reference live for the entire array of sioc, rather than one reference
> > > > > per sioc in the array.
> > > > > 
> > > > > Hoisting the reference like this will make it easier for an upcoming
> > > > > patch to still ensure the listener cannot be prematurely garbage
> > > > > collected during the user's callback, even when the callback no longer
> > > > > uses a per-sioc GSource.
> > > > 
> > > > It isn't quite this simple. GLib reference-counts the callback
> > > > func / data, holding a reference while dispatching the callback.
> > > > 
> > > > IOW, even if the GSource is unrefed, the callback 'notify'
> > > > function won't be called if the main loop is in the process
> > > > of dispatching.
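
To make that concrete, here's a minimal plain-GLib sketch (illustrative
only, not the actual QIONetListener code) showing how the GDestroyNotify
is deferred past any in-flight dispatch:

  #include <glib.h>

  static gboolean on_timeout(gpointer data)
  {
      /* GLib holds its own ref on the (callback, data, notify) triple
       * for the whole duration of this dispatch, so 'data' stays
       * valid here even if the source is unrefed concurrently. */
      return G_SOURCE_REMOVE;
  }

  static void on_destroy(gpointer data)
  {
      /* Only reached once no dispatch is in progress anywhere. */
      g_free(data);
  }

  void attach_source(GMainContext *ctx)
  {
      GSource *src = g_timeout_source_new(1000);

      g_source_set_callback(src, on_timeout, g_strdup("state"),
                            on_destroy);
      g_source_attach(src, ctx);
      /* Dropping our ref (or even g_source_destroy()) from another
       * thread does not free "state" mid-dispatch; on_destroy() runs
       * only after any in-flight callback has returned. */
      g_source_unref(src);
  }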

snip

> > That appears to work ok, however, there's still a race window that is
> > not solved. Between the time thread 2 sees POLLIN, and when it calls
> > the dispatch(sock) function, it is possible that thread 1 will drop
> > the last reference:
> > 
> >   Thread 1:
> >        qio_net_listener_set_client_func(lstnr, f, ...);
> >            => object_ref(listener)
> >            => foreach sock: socket
> >                => sock_src = qio_channel_socket_add_watch_source(sock,
> >                                 ...., lstnr, NULL);
> > 
> >   Thread 2:
> >        poll()
> >           => event POLLIN on socket
> > 
> >   Thread 1:
> >        qio_net_listener_set_client_func(lstnr, NULL, ...);
> >           => foreach sock: socket
> >                => g_source_unref(sock_src)
> >           => object_unref(listener)
> >        unref(lstnr)  (still 1 reference left)
> 
> Is what you're worried about that there is still a reference left in
> the opaque pointer of an fd handler, but it isn't reflected in the
> refcount, so the listener is already freed here while thread 2 will
> still access it?

Yes, exactly.

> 
> > 
> >   Thread 2:
> >                => call dispatch(sock)
> >                     => ref(lstnr)
> >                     ...do stuff..
> >                     => unref(lstnr)    (the final reference)
> >                         => finalize(lstnr)
> >                => return from dispatch(sock)
> >                => unref(GSourceCallback)
> > 
> > 
> > I don't see a way to solve this without synchronization with the event
> > loop for releasing the reference on the opaque data for the dispatcher
> > callback.  That's what the current code does, but I'm seeing no way for
> > the AioContext event loop callbacks to have anything equivalent. This
> > feels like a gap in the AioContext design.
> 
> I think the way you would normally do this is schedule a BH in thread 2
> to do the critical work. If you delete the fd handler and unref the
> listener in thread 2, then there is no race.

Yes, using a BH would be safe, provided you put the BH in the right
loop, given that we have a choice of the main event loop, a non-default
GMainContext, or the special AioContext that NBD is relying on.
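
For illustration, a minimal sketch of the BH approach, assuming an
AioContext path (aio_bh_schedule_oneshot() is the real helper; the
surrounding function names are made up and the fd-handler teardown is
elided):

  #include "qemu/osdep.h"
  #include "block/aio.h"
  #include "io/net-listener.h"

  /* Runs in the event loop thread, strictly after any dispatch that
   * was already in flight, so it cannot interleave with a callback
   * that is still using the listener. */
  static void drop_listener_bh(void *opaque)
  {
      QIONetListener *listener = opaque;

      object_unref(OBJECT(listener));
  }

  /* Hypothetical teardown helper: delete the fd handlers, then defer
   * the final unref into the context's own thread via a oneshot BH. */
  static void stop_listening(QIONetListener *listener, AioContext *ctx)
  {
      /* ... remove the fd handler for each listener socket here ... */
      aio_bh_schedule_oneshot(ctx, drop_listener_bh, listener);
  }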

> But maybe adding a callback for node deletion in AioHandler wouldn't
> hurt because the opaque pointer pretty much always references something
> and doing an unref when deleting the AioHandler should be a pretty
> common pattern.

That would likely make this scenario less error-prone than having to
remember to use a BH to synchronize.
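
Something like this is what I'd imagine (purely hypothetical, mirroring
the GDestroyNotify idiom that GSource already has; neither this function
nor the extra parameter exists today):

  /* Hypothetical variant of aio_set_fd_handler() taking a notify that
   * the AioContext invokes once the AioHandler node is really gone,
   * i.e. after any in-flight dispatch of io_read/io_write has
   * returned, making "unref the opaque on deletion" safe without
   * needing a BH. */
  void aio_set_fd_handler_notify(AioContext *ctx, int fd,
                                 IOHandler *io_read,
                                 IOHandler *io_write,
                                 void *opaque,
                                 GDestroyNotify opaque_notify);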

> > This is admittedly an incredibly hard-to-trigger race condition. It would
> > need a client to be calling a QMP command that tears down the NBD server,
> > at the exact same time as a new NBD client was incoming. Or the same kind
> > of scenario for other pieces of QEMU code using QIONetListener. This still
> > makes me worried though, as rare races have a habit of hitting QEMU
> > eventually.
> 
> Aren't both QMP and incoming NBD connections always handled in the main
> thread? I'm not sure if I know a case where we would actually get this
> pattern with two different threads today. Of course, that doesn't mean
> that we couldn't get it in the future.

Yeah, I believe we're probably safe in today's usage. My concern was
primarily about any surprises in the conceptual design that might
impact us in the future.

I guess if NBD is the only thing using AioContext for QIONetListener
today, we could hoist the ref/unref only when using an AioContext,
and keep the GDestroyNotify usage for the GMainContext code path,
to significantly limit the exposure. That would avoid needing to do
anything extra for AioHandlers right before the release.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

