Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On 23/08/2017 16:51, Eric Blake wrote:
> On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
>> The following scenario leads to an assertion failure in
>> qio_channel_yield():
>>
>> 1. Request coroutine calls qio_channel_yield() successfully when sending
>>    would block on the socket.  It is now yielded.
>> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>>    nbd_receive_reply() failed.
>> 3. Request coroutine is entered and returns from qio_channel_yield().
>>    Note that the socket fd handler has not fired yet so
>>    ioc->write_coroutine is still set.
>> 4. Request coroutine attempts to send the request body with nbd_rwv()
>>    but the socket would still block.  qio_channel_yield() is called
>>    again and assert(!ioc->write_coroutine) is hit.
>>
>> The problem is that nbd_read_reply_entry() does not distinguish between
>> request coroutines that are waiting to receive a reply and those that
>> are not.
>>
>> This patch adds a per-request bool receiving flag so
>> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>>
>> Reported-by: Dr. David Alan Gilbert
>> Signed-off-by: Stefan Hajnoczi
>> ---
>> This should fix the issue that Dave is seeing but I'm concerned that
>> there are more problems in nbd-client.c.  We don't have good
>> abstractions for writing coroutine socket I/O code.  Something like Go's
>> channels would avoid manual low-level coroutine calls.  There is
>> currently no way to cancel qio_channel_yield() so requests doing I/O may
>> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> Vladimir has some cleanups that rewrite the NBD coroutines to be more
> legible, but it is invasive enough to be 2.11 material.  I think that
> for a stop-gap of getting 2.10 out the door, we may be better off
> including this patch - but I would still like some positive review from
> more than just me.  There's not much time left before I need to send the
> -rc4 NBD pull request, though.

Reviewed-by: Paolo Bonzini
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On 23/08/2017 16:45, Stefan Hajnoczi wrote:
> That depends on the BDRV_POLL_WHILE() allowing all request coroutines to
> terminate before we call nbd_client_detach_aio_context():
>
>     qio_channel_shutdown(client->ioc,
>                          QIO_CHANNEL_SHUTDOWN_BOTH,
>                          NULL);
>     BDRV_POLL_WHILE(bs, client->read_reply_co);
>
>     nbd_client_detach_aio_context(bs);
>
> I'm not sure we have any guarantee that request coroutines will have
> terminated.

Ok, I see my confusion: it's only because of the "receiving" flag, which
actually means "waiting for reply".  Your patch is okay.

Paolo

> Once nbd_client_detach_aio_context() is called
> ioc->read_coroutine/write_coroutine are set to NULL.  At that point any
> remaining coroutine doing I/O on ioc will be in trouble.
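The hunk that actually sets and clears this flag falls past the end of the
diff as archived, but the behaviour Paolo describes amounts to the request
coroutine bracketing its reply wait.  A minimal sketch, assuming the field
names from the patch; the helper name nbd_wait_for_reply() is invented for
illustration and is not in the posted code:

    /* Only while parked between these two assignments is the request a
     * legitimate wake-up target for nbd_read_reply_entry() or
     * nbd_recv_coroutines_enter_all(). */
    static coroutine_fn void nbd_wait_for_reply(NBDClientSession *s,
                                                uint64_t handle)
    {
        int i = HANDLE_TO_INDEX(s, handle);

        s->requests[i].receiving = true;
        qemu_coroutine_yield();            /* woken by read_reply_co */
        s->requests[i].receiving = false;  /* later wakes would be spurious */
    }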
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert
> Signed-off-by: Stefan Hajnoczi

Using the steps in
https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg03853.html,
I've verified that this avoids the hang that is otherwise present, so
I'm adding:

Tested-by: Eric Blake

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert
> Signed-off-by: Stefan Hajnoczi
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...

Vladimir has some cleanups that rewrite the NBD coroutines to be more
legible, but it is invasive enough to be 2.11 material.  I think that
for a stop-gap of getting 2.10 out the door, we may be better off
including this patch - but I would still like some positive review from
more than just me.  There's not much time left before I need to send the
-rc4 NBD pull request, though.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On Tue, Aug 22, 2017 at 03:23:32PM +0200, Paolo Bonzini wrote:
> On 22/08/2017 14:51, Stefan Hajnoczi wrote:
> > This should fix the issue that Dave is seeing but I'm concerned that
> > there are more problems in nbd-client.c.  We don't have good
> > abstractions for writing coroutine socket I/O code.  Something like Go's
> > channels would avoid manual low-level coroutine calls.  There is
> > currently no way to cancel qio_channel_yield() so requests doing I/O may
> > remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> The idea was that shutdown(2) would force them to reenter...

That depends on the BDRV_POLL_WHILE() allowing all request coroutines to
terminate before we call nbd_client_detach_aio_context():

    qio_channel_shutdown(client->ioc,
                         QIO_CHANNEL_SHUTDOWN_BOTH,
                         NULL);
    BDRV_POLL_WHILE(bs, client->read_reply_co);

    nbd_client_detach_aio_context(bs);

I'm not sure we have any guarantee that request coroutines will have
terminated.  Once nbd_client_detach_aio_context() is called
ioc->read_coroutine/write_coroutine are set to NULL.  At that point any
remaining coroutine doing I/O on ioc will be in trouble.

Stefan
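A hypothetical strengthening of that teardown, not part of any posted
patch: if NBDClientSession's existing in_flight counter covers every
coroutine still inside an NBD request, the close path could join them the
same way it already joins read_reply_co.  A sketch under that assumption:

    /* Poll until the reply coroutine AND all request coroutines are gone
     * before detaching the AioContext. */
    qio_channel_shutdown(client->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
    BDRV_POLL_WHILE(bs, client->read_reply_co || client->in_flight > 0);
    nbd_client_detach_aio_context(bs);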
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On Wed, Aug 23, 2017 at 3:20 PM, Eric Blake wrote:
> On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
>> The following scenario leads to an assertion failure in
>> qio_channel_yield():
>>
>> 1. Request coroutine calls qio_channel_yield() successfully when sending
>>    would block on the socket.  It is now yielded.
>> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>>    nbd_receive_reply() failed.
>> 3. Request coroutine is entered and returns from qio_channel_yield().
>>    Note that the socket fd handler has not fired yet so
>>    ioc->write_coroutine is still set.
>> 4. Request coroutine attempts to send the request body with nbd_rwv()
>>    but the socket would still block.  qio_channel_yield() is called
>>    again and assert(!ioc->write_coroutine) is hit.
>>
>> The problem is that nbd_read_reply_entry() does not distinguish between
>> request coroutines that are waiting to receive a reply and those that
>> are not.
>>
>> This patch adds a per-request bool receiving flag so
>> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>>
>> Reported-by: Dr. David Alan Gilbert
>> Signed-off-by: Stefan Hajnoczi
>> ---
>> This should fix the issue that Dave is seeing but I'm concerned that
>> there are more problems in nbd-client.c.  We don't have good
>> abstractions for writing coroutine socket I/O code.  Something like Go's
>> channels would avoid manual low-level coroutine calls.  There is
>> currently no way to cancel qio_channel_yield() so requests doing I/O may
>> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> Is this patch needed for 2.10-rc4, or does Fam's series cover the issue?

Fam's series fixes non-shared storage migration.  This patch addresses
the failure case when the server closes the connection prematurely.

Stefan
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert
> Signed-off-by: Stefan Hajnoczi
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...

Is this patch needed for 2.10-rc4, or does Fam's series cover the issue?

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
* Stefan Hajnoczi (stefa...@redhat.com) wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
>    would block on the socket.  It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>    nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
>    Note that the socket fd handler has not fired yet so
>    ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
>    but the socket would still block.  qio_channel_yield() is called
>    again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert
> Signed-off-by: Stefan Hajnoczi

With that patch, that assert does seem to go away, just leaving the other
failure we're seeing.

Dave

> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
>  block/nbd-client.h |  7 ++++++-
>  block/nbd-client.c | 35 ++++++++++++++++++++---------------
>  2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 1935ffbcaa..b435754b82 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -17,6 +17,11 @@
>  
>  #define MAX_NBD_REQUESTS    16
>  
> +typedef struct {
> +    Coroutine *coroutine;
> +    bool receiving;         /* waiting for read_reply_co? */
> +} NBDClientRequest;
> +
>  typedef struct NBDClientSession {
>      QIOChannelSocket *sioc; /* The master data channel */
>      QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -27,7 +32,7 @@ typedef struct NBDClientSession {
>      Coroutine *read_reply_co;
>      int in_flight;
>  
> -    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
> +    NBDClientRequest requests[MAX_NBD_REQUESTS];
>      NBDReply reply;
>      bool quit;
>  } NBDClientSession;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 422ecb4307..c2834f6b47 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
>      int i;
>  
>      for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i]) {
> -            aio_co_wake(s->recv_coroutine[i]);
> +        NBDClientRequest *req = &s->requests[i];
> +
> +        if (req->coroutine && req->receiving) {
> +            aio_co_wake(req->coroutine);
>          }
>      }
>  }
> @@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>           * one coroutine is called until the reply finishes.
>           */
>          i = HANDLE_TO_INDEX(s, s->reply.handle);
> -        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
> +        if (i >= MAX_NBD_REQUESTS ||
> +            !s->requests[i].coroutine ||
> +            !s->requests[i].receiving) {
>              break;
>          }
>  
> -        /* We're woken up by the recv_coroutine itself.  Note that there
> +        /* We're woken up again by the request itself.  Note that there
>           * is no race between yielding and reentering read_reply_co.  This
>           * is because:
>           *
> -         * - if recv_coroutine[i] runs on the same AioContext, it is only
> +         * - if the request runs on the same AioContext, it is only
>           *   entered after we yield
>           *
> -         * - if recv_coroutine[i] runs on a different AioContext, reentering
> +         * - if the request runs on a different AioContext, reentering
>           *   read_reply_co happens through a bottom half, which can only
>           *   run after we yield.
>           */
> -        aio_co_wake(s->recv_coroutine[i]);
> +        aio_co_wake(s->requests[i].coroutine);
>          qemu_coroutine_yield();
>      }
>  
> -    if (ret < 0) {
> -        s->quit = true;
> -    }
> +    s->quit = true;
>      nbd_recv_coroutines_enter_all(s);
>      s->read_reply_co = NULL;
>  }
> @@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
>      s->in_flight++;
>  
>      for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> -        if (s->recv_coroutine[i] == NULL) {
> -            s->recv_coroutine[i] = qemu_coroutine_self();
> +        if (s->requests[i].coroutine == NULL) {
>              break;
>          }
>      }
>  
>      g_assert(qemu_in_coroutine());
>      assert(i < MAX_NBD_REQUESTS);
> +
> +    s->requests[i].coroutine = qemu_coroutine_self();
> +    s->requests[i].receiving = false;
> +
>      request->handle = INDEX_TO_HANDLE(s, i);
>  
>      if (s->quit) {
> @@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
Re: [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
On 22/08/2017 14:51, Stefan Hajnoczi wrote:
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c.  We don't have good
> abstractions for writing coroutine socket I/O code.  Something like Go's
> channels would avoid manual low-level coroutine calls.  There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...

The idea was that shutdown(2) would force them to reenter...

Paolo
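The mechanism behind that idea, in plain POSIX terms: shutting down a
socket makes it poll as ready, so the aio fd handlers that re-enter
yielded coroutines fire, and subsequent reads see EOF while writes fail
instead of blocking.  An illustrative fragment using a bare socket fd
rather than a QIOChannel:

    #include <sys/socket.h>

    /* After this, a blocked reader is woken with read() == 0 (EOF) and a
     * writer gets EPIPE, so coroutines waiting on the fd make progress
     * and can unwind with an error. */
    static void force_reenter(int sockfd)
    {
        shutdown(sockfd, SHUT_RDWR);
    }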
[Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
The following scenario leads to an assertion failure in
qio_channel_yield():

1. Request coroutine calls qio_channel_yield() successfully when sending
   would block on the socket.  It is now yielded.
2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
   nbd_receive_reply() failed.
3. Request coroutine is entered and returns from qio_channel_yield().
   Note that the socket fd handler has not fired yet so
   ioc->write_coroutine is still set.
4. Request coroutine attempts to send the request body with nbd_rwv()
   but the socket would still block.  qio_channel_yield() is called
   again and assert(!ioc->write_coroutine) is hit.

The problem is that nbd_read_reply_entry() does not distinguish between
request coroutines that are waiting to receive a reply and those that
are not.

This patch adds a per-request bool receiving flag so
nbd_read_reply_entry() can avoid spurious aio_wake() calls.

Reported-by: Dr. David Alan Gilbert
Signed-off-by: Stefan Hajnoczi
---
This should fix the issue that Dave is seeing but I'm concerned that
there are more problems in nbd-client.c.  We don't have good
abstractions for writing coroutine socket I/O code.  Something like Go's
channels would avoid manual low-level coroutine calls.  There is
currently no way to cancel qio_channel_yield() so requests doing I/O may
remain in-flight indefinitely and nbd-client.c doesn't join them...

 block/nbd-client.h |  7 ++++++-
 block/nbd-client.c | 35 ++++++++++++++++++++---------------
 2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 1935ffbcaa..b435754b82 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -17,6 +17,11 @@
 
 #define MAX_NBD_REQUESTS    16
 
+typedef struct {
+    Coroutine *coroutine;
+    bool receiving;         /* waiting for read_reply_co? */
+} NBDClientRequest;
+
 typedef struct NBDClientSession {
     QIOChannelSocket *sioc; /* The master data channel */
     QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
@@ -27,7 +32,7 @@ typedef struct NBDClientSession {
     Coroutine *read_reply_co;
     int in_flight;
 
-    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
+    NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
     bool quit;
 } NBDClientSession;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 422ecb4307..c2834f6b47 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
     int i;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        if (s->recv_coroutine[i]) {
-            aio_co_wake(s->recv_coroutine[i]);
+        NBDClientRequest *req = &s->requests[i];
+
+        if (req->coroutine && req->receiving) {
+            aio_co_wake(req->coroutine);
         }
     }
 }
@@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * one coroutine is called until the reply finishes.
          */
         i = HANDLE_TO_INDEX(s, s->reply.handle);
-        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
+        if (i >= MAX_NBD_REQUESTS ||
+            !s->requests[i].coroutine ||
+            !s->requests[i].receiving) {
             break;
         }
 
-        /* We're woken up by the recv_coroutine itself.  Note that there
+        /* We're woken up again by the request itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
          *
-         * - if recv_coroutine[i] runs on the same AioContext, it is only
+         * - if the request runs on the same AioContext, it is only
          *   entered after we yield
          *
-         * - if recv_coroutine[i] runs on a different AioContext, reentering
+         * - if the request runs on a different AioContext, reentering
          *   read_reply_co happens through a bottom half, which can only
          *   run after we yield.
          */
-        aio_co_wake(s->recv_coroutine[i]);
+        aio_co_wake(s->requests[i].coroutine);
         qemu_coroutine_yield();
     }
 
-    if (ret < 0) {
-        s->quit = true;
-    }
+    s->quit = true;
     nbd_recv_coroutines_enter_all(s);
     s->read_reply_co = NULL;
 }
@@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
     s->in_flight++;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        if (s->recv_coroutine[i] == NULL) {
-            s->recv_coroutine[i] = qemu_coroutine_self();
+        if (s->requests[i].coroutine == NULL) {
             break;
         }
     }
 
     g_assert(qemu_in_coroutine());
     assert(i < MAX_NBD_REQUESTS);
+
+    s->requests[i].coroutine = qemu_coroutine_self();
+    s->requests[i].receiving = false;
+
     request->handle = INDEX_TO_HANDLE(s, i);
 
     if (s->quit) {
@@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
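For reference, the assertion named in step 4 of the commit message lives
in qio_channel_yield() (io/channel.c).  A simplified model of that
function — close to, but not verbatim, the real implementation — shows
why a second yield without an intervening fd-handler run aborts:

    /* Normally the fd handler clears ioc->write_coroutine before
     * re-entering the coroutine.  A spurious aio_co_wake() skips that
     * step, so the next yield finds the field still set. */
    void coroutine_fn qio_channel_yield(QIOChannel *ioc,
                                        GIOCondition condition)
    {
        assert(qemu_in_coroutine());
        if (condition == G_IO_IN) {
            assert(!ioc->read_coroutine);
            ioc->read_coroutine = qemu_coroutine_self();
        } else if (condition == G_IO_OUT) {
            assert(!ioc->write_coroutine);   /* the assert hit in step 4 */
            ioc->write_coroutine = qemu_coroutine_self();
        } else {
            abort();
        }
        qio_channel_set_aio_fd_handlers(ioc);  /* arm the fd handler */
        qemu_coroutine_yield();
    }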