This case can be easily reproduced with the following steps:

*) launch nbd-client with a timeout value specified:
	nbd-client 127.0.0.1 /dev/nbd0 -block-size 4096 -name vol0 -timeout 10

*) block nbd traffic:

	iptables -I OUTPUT -p tcp --dport 10809 -j DROP
	iptables -I INPUT -p tcp --sport 10809 -j DROP

*) issue a large write request to make the nbd sender thread block in
   kernel_sendmsg() and cause the timeout to kick in:

	dd if=/dev/zero of=/dev/nbd0 bs=8M oflag=direct

*) before the timeout occurs, run nbd-client -d /dev/nbd0 in another
   terminal and watch the effects (see dmesg)

What happens here is that the nbd-client -d thread waits to acquire
tx_lock, which is held by the nbd sender thread (nbd_thread). Once the
request fails, the socket is shut down via sock_shutdown() (called from
sock_xmit()), leaving nbd->sock == NULL. Later, back in nbd_handle_req(),
the mutex is released; nbd-client -d then enters the
__nbd_ioctl(NBD_CLEAR_QUE) code, and the assertion is triggered because
there are still requests waiting to be sent.

Signed-off-by: Michal Belczyk <[email protected]>
---
 drivers/block/nbd.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 037288e..c09640a 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -746,11 +746,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 	}
 
 	case NBD_CLEAR_QUE:
-		/*
-		 * This is for compatibility only. The queue is always cleared
-		 * by NBD_DO_IT or NBD_CLEAR_SOCK.
-		 */
-		BUG_ON(!nbd->sock && !list_empty(&nbd->queue_head));
+		/* This is for compatibility only */
 		return 0;
 
 	case NBD_PRINT_DEBUG:
-- 
1.8.1.4

-- 
Michal Belczyk Sr.
