Re: [PATCH v3] block/nbd: use non-blocking connect: fix vm hang on connect()

2020-08-20 Thread Vladimir Sementsov-Ogievskiy

19.08.2020 20:52, Eric Blake wrote:

On 8/12/20 9:52 AM, Vladimir Sementsov-Ogievskiy wrote:

This make nbd connection_co to yield during reconnects, so that
reconnect doesn't hang up the main thread. This is very important in
case of unavailable nbd server host: connect() call may take a long
time, blocking the main thread (and due to reconnect, it will hang
again and again with small gaps of working time during pauses between
connection attempts).

How to reproduce the bug, fixed with this commit:

1. Create an image on node1:
    qemu-img create -f qcow2 xx 100M

2. Start NBD server on node1:
    qemu-nbd xx

3. Start vm with second nbd disk on node2, like this:

   ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -drive \
 file=/work/images/cent7.qcow2 -drive file=nbd+tcp://192.168.100.2 \
 -vnc :0 -qmp stdio -m 2G -enable-kvm -vga std


Where is the configuration to set up retry on the nbd connection?  I wonder if 
you have a non-upstream patch that turns it on by default in your builds; for 
upstream, I would have expected something more along the lines of -blockdev 
driver=nbd,reconnect-delay=20,server.type=inet,server.data.hostname=192.168.100.2,server.data.port=10809
 (typing off the top of my head, rather than actually tested).


No, that's not necessary: reconnect is always enabled. reconnect-delay only
controls what happens to guest requests while the connection is down. By
default, they fail immediately. But even with reconnect-delay=0 the reconnect
code still runs and tries to reestablish the connection.
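
For example, something along these lines (an untested sketch, typed like your
command above; the server.* keys follow the flat SocketAddress layout of the
QAPI schema, and drive=nbd0 referencing a node-name assumes a reasonably
recent QEMU):

  qemu-system-x86_64 ... \
    -blockdev driver=nbd,node-name=nbd0,reconnect-delay=20,server.type=inet,server.host=192.168.100.2,server.port=10809 \
    -device virtio-blk-pci,drive=nbd0

Here requests issued while the connection is down would be queued for up to 20
seconds before failing, instead of failing immediately.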

4. Access the vm through vnc (or some other way?), and check that NBD
    drive works:

    dd if=/dev/sdb of=/dev/null bs=1M count=10

    - the command should succeed.

5. Now, let's trigger nbd-reconnect loop in Qemu process. For this:

5.1 Kill NBD server on node1

5.2 run "dd if=/dev/sdb of=/dev/null bs=1M count=10" in the guest
 again. The command should fail and a lot of error messages about
 failing disk may appear as well.


Why does the guest access fail when the server goes away?  Shouldn't the 
pending guest requests merely be queued for retry (where the guest has not seen 
a failure yet, but may do so if timeouts are reached), rather than being 
instant errors?


And that's exactly how it is expected to work when reconnect-delay is 0. If
you set reconnect-delay to a value greater than 0, then for that period of
time after a connection failure is detected, all requests are queued.
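
A quick way to observe the difference in isolation might be qemu-io with
--image-opts (again an untested sketch; option names as in the QAPI schema):

  qemu-io --image-opts \
    driver=nbd,server.type=inet,server.host=192.168.100.2,server.port=10809,reconnect-delay=10 \
    -c 'read 0 1M'

If the server is stopped just before the read, the request should hang for up
to 10 seconds waiting for a reconnect instead of failing at once.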

 Now NBD client driver in Qemu tries to reconnect.
 Still, VM works well.

6. Make node1 unavailable on NBD port, so connect() from node2 will
    last for a long time:

    On node1 (Note, that 10809 is just a default NBD port):

    sudo iptables -A INPUT -p tcp --dport 10809 -j DROP

    After some time the guest hangs, and you may check in gdb that Qemu
    hangs in connect() call, issued from the main thread. This is the
    BUG.

7. Don't forget to drop iptables rule from your node1:

    sudo iptables -D INPUT -p tcp --dport 10809 -j DROP
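
(One way to confirm the hang from step 6, assuming gdb is available on the
host; the exact frame names vary by build:

    gdb -p $(pidof qemu-system-x86_64) -batch -ex bt

With the bug present, the main thread's backtrace ends in a connect() frame.)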

--
Best regards,
Vladimir

Re: [PATCH v3] block/nbd: use non-blocking connect: fix vm hang on connect()

2020-08-20 Thread Vladimir Sementsov-Ogievskiy

19.08.2020 17:46, Eric Blake wrote:

On 8/12/20 9:52 AM, Vladimir Sementsov-Ogievskiy wrote:

This make nbd connection_co to yield during reconnects, so that


s/make nbd connection_co to/makes nbd's connection_co/


reconnect doesn't hang up the main thread. This is very important in


s/hang up/block/


case of unavailable nbd server host: connect() call may take a long


s/of/of an/


time, blocking the main thread (and due to reconnect, it will hang
again and again with small gaps of working time during pauses between
connection attempts).

Realization notes:

  - We don't want to implement non-blocking connect() over non-blocking
  socket, because getaddrinfo() doesn't have portable non-blocking
  realization anyway, so let's just use a thread for both getaddrinfo()
  and connect().

  - We can't use qio_channel_socket_connect_async (which behave


behaves


  similarly and start a thread to execute connect() call), as it's rely


starts
relying


  on someone iterating main loop (g_main_loop_run() or something like
  this), which is not always the case.

  - We can't use thread_pool_submit_co API, as thread pool waits for all
  threads to finish (but we don't want to wait for blocking reconnect
  attempt on shutdown.

  So, we just create the thread by hand. Some additional difficulties
  are:

  - We want our connect don't block drained sections and aio context


s/don't block/to avoid blocking/


  switches. To achieve this, we make it possible to "cancel" synchronous
  wait for the connect (which is an coroutine yield actually), still,


s/an/a/


  the thread continues in background, and it successful result may be


s/it successful/if successful, its/


  reused on next reconnect attempt.

  - We don't want to wait for reconnect on shutdown, so there is
  CONNECT_THREAD_RUNNING_DETACHED thread state, which means that block
  layer not more interested in a result, and thread should close new


which means that the block layer is no longer interested


  connected socket on finish and free the state.

How to reproduce the bug, fixed with this commit:

1. Create an image on node1:
    qemu-img create -f qcow2 xx 100M

2. Start NBD server on node1:
    qemu-nbd xx

3. Start vm with second nbd disk on node2, like this:

   ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -drive \
 file=/work/images/cent7.qcow2 -drive file=nbd+tcp://192.168.100.2 \
 -vnc :0 -qmp stdio -m 2G -enable-kvm -vga std

4. Access the vm through vnc (or some other way?), and check that NBD
    drive works:

    dd if=/dev/sdb of=/dev/null bs=1M count=10

    - the command should succeed.

5. Now, let's trigger nbd-reconnect loop in Qemu process. For this:

5.1 Kill NBD server on node1

5.2 run "dd if=/dev/sdb of=/dev/null bs=1M count=10" in the guest
 again. The command should fail and a lot of error messages about
 failing disk may appear as well.

 Now NBD client driver in Qemu tries to reconnect.
 Still, VM works well.

6. Make node1 unavailable on NBD port, so connect() from node2 will
    last for a long time:

    On node1 (Note, that 10809 is just a default NBD port):

    sudo iptables -A INPUT -p tcp --dport 10809 -j DROP

    After some time the guest hangs, and you may check in gdb that Qemu
    hangs in connect() call, issued from the main thread. This is the
    BUG.

7. Don't forget to drop iptables rule from your node1:

    sudo iptables -D INPUT -p tcp --dport 10809 -j DROP

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---

Hi!

This is a continuation of "[PATCH v2 for-5.1? 0/5] Fix nbd reconnect dead-locks",
which was mostly merged into 5.1. Only the last patch was not merged; here is a
no-change resend of it for convenience.


  block/nbd.c | 266 +++-
  1 file changed, 265 insertions(+), 1 deletion(-)


Looks big, but the commit message goes into good detail about what the problem 
is, why the solution takes the approach it does, and a good formula for 
reproduction.



diff --git a/block/nbd.c b/block/nbd.c
index 7bb881fef4..919ec5e573 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -38,6 +38,7 @@
 
 #include "qapi/qapi-visit-sockets.h"
 #include "qapi/qmp/qstring.h"
+#include "qapi/clone-visitor.h"
 
 #include "block/qdict.h"
 #include "block/nbd.h"
@@ -62,6 +63,47 @@ typedef enum NBDClientState {
     NBD_CLIENT_QUIT
 } NBDClientState;
 
+typedef enum NBDConnectThreadState {
+/* No thread, no pending results */
+    CONNECT_THREAD_NONE,


I'd indent the comments by four spaces, to line up with the enumeration values 
they describe.


+
+/* Thread is running, no results for now */
+    CONNECT_THREAD_RUNNING,
+
+/*
+ * Thread is running, but requestor exited. Thread should close the new socket
+ * and free the connect state on exit.
+ */
+    CONNECT_THREAD_RUNNING_DETACHED,
+
+/* Thread finished, results are stored in a state */
+    CONNECT_THREAD_FAIL,
+    CONNECT_THREAD_SUCCESS
+} NBDConnectThreadState;
+
+typedef struct NBDConnectThread 

Re: [PATCH v3] block/nbd: use non-blocking connect: fix vm hang on connect()

2020-08-19 Thread Eric Blake

On 8/12/20 9:52 AM, Vladimir Sementsov-Ogievskiy wrote:

This make nbd connection_co to yield during reconnects, so that
reconnect doesn't hang up the main thread. This is very important in
case of unavailable nbd server host: connect() call may take a long
time, blocking the main thread (and due to reconnect, it will hang
again and again with small gaps of working time during pauses between
connection attempts).

How to reproduce the bug, fixed with this commit:

1. Create an image on node1:
qemu-img create -f qcow2 xx 100M

2. Start NBD server on node1:
qemu-nbd xx

3. Start vm with second nbd disk on node2, like this:

   ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -drive \
 file=/work/images/cent7.qcow2 -drive file=nbd+tcp://192.168.100.2 \
 -vnc :0 -qmp stdio -m 2G -enable-kvm -vga std


Where is the configuration to set up retry on the nbd connection?  I 
wonder if you have a non-upstream patch that turns it on by default in 
your builds; for upstream, I would have expected something more along 
the lines of -blockdev 
driver=nbd,reconnect-delay=20,server.type=inet,server.data.hostname=192.168.100.2,server.data.port=10809 
(typing off the top of my head, rather than actually tested).

4. Access the vm through vnc (or some other way?), and check that NBD
drive works:

dd if=/dev/sdb of=/dev/null bs=1M count=10

- the command should succeed.

5. Now, let's trigger nbd-reconnect loop in Qemu process. For this:

5.1 Kill NBD server on node1

5.2 run "dd if=/dev/sdb of=/dev/null bs=1M count=10" in the guest
 again. The command should fail and a lot of error messages about
 failing disk may appear as well.


Why does the guest access fail when the server goes away?  Shouldn't the 
pending guest requests merely be queued for retry (where the guest has 
not seen a failure yet, but may do so if timeouts are reached), rather 
than being instant errors?

 Now NBD client driver in Qemu tries to reconnect.
 Still, VM works well.

6. Make node1 unavailable on NBD port, so connect() from node2 will
last for a long time:

On node1 (Note, that 10809 is just a default NBD port):

sudo iptables -A INPUT -p tcp --dport 10809 -j DROP

After some time the guest hangs, and you may check in gdb that Qemu
hangs in connect() call, issued from the main thread. This is the
BUG.

7. Don't forget to drop iptables rule from your node1:

sudo iptables -D INPUT -p tcp --dport 10809 -j DROP

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

Re: [PATCH v3] block/nbd: use non-blocking connect: fix vm hang on connect()

2020-08-19 Thread Eric Blake

On 8/12/20 9:52 AM, Vladimir Sementsov-Ogievskiy wrote:

This make nbd connection_co to yield during reconnects, so that


s/make nbd connection_co to/makes nbd's connection_co/


reconnect doesn't hang up the main thread. This is very important in


s/hang up/block/


case of unavailable nbd server host: connect() call may take a long


s/of/of an/


time, blocking the main thread (and due to reconnect, it will hang
again and again with small gaps of working time during pauses between
connection attempts).

Realization notes:

  - We don't want to implement non-blocking connect() over non-blocking
  socket, because getaddrinfo() doesn't have portable non-blocking
  realization anyway, so let's just use a thread for both getaddrinfo()
  and connect().

  - We can't use qio_channel_socket_connect_async (which behave


behaves


  similarly and start a thread to execute connect() call), as it's rely


starts
relying


  on someone iterating main loop (g_main_loop_run() or something like
  this), which is not always the case.

  - We can't use thread_pool_submit_co API, as thread pool waits for all
  threads to finish (but we don't want to wait for blocking reconnect
  attempt on shutdown.

  So, we just create the thread by hand. Some additional difficulties
  are:

  - We want our connect don't block drained sections and aio context


s/don't block/to avoid blocking/


  switches. To achieve this, we make it possible to "cancel" synchronous
  wait for the connect (which is an coroutine yield actually), still,


s/an/a/


  the thread continues in background, and it successful result may be


s/it successful/if successful, its/


  reused on next reconnect attempt.

  - We don't want to wait for reconnect on shutdown, so there is
  CONNECT_THREAD_RUNNING_DETACHED thread state, which means that block
  layer not more interested in a result, and thread should close new


which means that the block layer is no longer interested


  connected socket on finish and free the state.

How to reproduce the bug, fixed with this commit:

1. Create an image on node1:
qemu-img create -f qcow2 xx 100M

2. Start NBD server on node1:
qemu-nbd xx

3. Start vm with second nbd disk on node2, like this:

   ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -drive \
 file=/work/images/cent7.qcow2 -drive file=nbd+tcp://192.168.100.2 \
 -vnc :0 -qmp stdio -m 2G -enable-kvm -vga std

4. Access the vm through vnc (or some other way?), and check that NBD
drive works:

dd if=/dev/sdb of=/dev/null bs=1M count=10

- the command should succeed.

5. Now, let's trigger nbd-reconnect loop in Qemu process. For this:

5.1 Kill NBD server on node1

5.2 run "dd if=/dev/sdb of=/dev/null bs=1M count=10" in the guest
 again. The command should fail and a lot of error messages about
 failing disk may appear as well.

 Now NBD client driver in Qemu tries to reconnect.
 Still, VM works well.

6. Make node1 unavailable on NBD port, so connect() from node2 will
last for a long time:

On node1 (Note, that 10809 is just a default NBD port):

sudo iptables -A INPUT -p tcp --dport 10809 -j DROP

After some time the guest hangs, and you may check in gdb that Qemu
hangs in connect() call, issued from the main thread. This is the
BUG.

7. Don't forget to drop iptables rule from your node1:

sudo iptables -D INPUT -p tcp --dport 10809 -j DROP

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---

Hi!

This is a continuation of "[PATCH v2 for-5.1? 0/5] Fix nbd reconnect dead-locks",
which was mostly merged into 5.1. Only the last patch was not merged; here is a
no-change resend of it for convenience.


  block/nbd.c | 266 +++-
  1 file changed, 265 insertions(+), 1 deletion(-)


Looks big, but the commit message goes into good detail about what the 
problem is, why the solution takes the approach it does, and a good 
formula for reproduction.

diff --git a/block/nbd.c b/block/nbd.c
index 7bb881fef4..919ec5e573 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -38,6 +38,7 @@
 
 #include "qapi/qapi-visit-sockets.h"
 #include "qapi/qmp/qstring.h"
+#include "qapi/clone-visitor.h"
 
 #include "block/qdict.h"
 #include "block/nbd.h"
@@ -62,6 +63,47 @@ typedef enum NBDClientState {
     NBD_CLIENT_QUIT
 } NBDClientState;
 
+typedef enum NBDConnectThreadState {
+/* No thread, no pending results */
+    CONNECT_THREAD_NONE,


I'd indent the comments by four spaces, to line up with the enumeration 
values they describe.
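
For illustration, the layout being suggested (not part of the patch) would
look like:

+typedef enum NBDConnectThreadState {
+    /* No thread, no pending results */
+    CONNECT_THREAD_NONE,
+    ...
+} NBDConnectThreadState;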

+
+/* Thread is running, no results for now */
+    CONNECT_THREAD_RUNNING,
+
+/*
+ * Thread is running, but requestor exited. Thread should close the new socket
+ * and free the connect state on exit.
+ */
+    CONNECT_THREAD_RUNNING_DETACHED,
+
+/* Thread finished, results are stored in a state */
+    CONNECT_THREAD_FAIL,
+    CONNECT_THREAD_SUCCESS
+} NBDConnectThreadState;
+
+typedef struct NBDConnectThread {
+/* 

[PATCH v3] block/nbd: use non-blocking connect: fix vm hang on connect()

2020-08-12 Thread Vladimir Sementsov-Ogievskiy
This make nbd connection_co to yield during reconnects, so that
reconnect doesn't hang up the main thread. This is very important in
case of unavailable nbd server host: connect() call may take a long
time, blocking the main thread (and due to reconnect, it will hang
again and again with small gaps of working time during pauses between
connection attempts).

Realization notes:

 - We don't want to implement non-blocking connect() over non-blocking
 socket, because getaddrinfo() doesn't have portable non-blocking
 realization anyway, so let's just use a thread for both getaddrinfo()
 and connect().

 - We can't use qio_channel_socket_connect_async (which behave
 similarly and start a thread to execute connect() call), as it's rely
 on someone iterating main loop (g_main_loop_run() or something like
 this), which is not always the case.

 - We can't use thread_pool_submit_co API, as thread pool waits for all
 threads to finish (but we don't want to wait for blocking reconnect
 attempt on shutdown.

 So, we just create the thread by hand. Some additional difficulties
 are:

 - We want our connect don't block drained sections and aio context
 switches. To achieve this, we make it possible to "cancel" synchronous
 wait for the connect (which is an coroutine yield actually), still,
 the thread continues in background, and it successful result may be
 reused on next reconnect attempt.

 - We don't want to wait for reconnect on shutdown, so there is
 CONNECT_THREAD_RUNNING_DETACHED thread state, which means that block
 layer not more interested in a result, and thread should close new
 connected socket on finish and free the state.

How to reproduce the bug, fixed with this commit:

1. Create an image on node1:
   qemu-img create -f qcow2 xx 100M

2. Start NBD server on node1:
   qemu-nbd xx

3. Start vm with second nbd disk on node2, like this:

  ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -drive \
file=/work/images/cent7.qcow2 -drive file=nbd+tcp://192.168.100.2 \
-vnc :0 -qmp stdio -m 2G -enable-kvm -vga std

4. Access the vm through vnc (or some other way?), and check that NBD
   drive works:

   dd if=/dev/sdb of=/dev/null bs=1M count=10

   - the command should succeed.

5. Now, let's trigger nbd-reconnect loop in Qemu process. For this:

5.1 Kill NBD server on node1

5.2 run "dd if=/dev/sdb of=/dev/null bs=1M count=10" in the guest
again. The command should fail and a lot of error messages about
failing disk may appear as well.

Now NBD client driver in Qemu tries to reconnect.
Still, VM works well.

6. Make node1 unavailable on NBD port, so connect() from node2 will
   last for a long time:

   On node1 (Note, that 10809 is just a default NBD port):

   sudo iptables -A INPUT -p tcp --dport 10809 -j DROP

   After some time the guest hangs, and you may check in gdb that Qemu
   hangs in connect() call, issued from the main thread. This is the
   BUG.

7. Don't forget to drop iptables rule from your node1:

   sudo iptables -D INPUT -p tcp --dport 10809 -j DROP

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---

Hi!

This is a continuation of "[PATCH v2 for-5.1? 0/5] Fix nbd reconnect dead-locks",
which was mostly merged into 5.1. Only the last patch was not merged; here is a
no-change resend of it for convenience.


 block/nbd.c | 266 +++-
 1 file changed, 265 insertions(+), 1 deletion(-)

diff --git a/block/nbd.c b/block/nbd.c
index 7bb881fef4..919ec5e573 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -38,6 +38,7 @@
 
 #include "qapi/qapi-visit-sockets.h"
 #include "qapi/qmp/qstring.h"
+#include "qapi/clone-visitor.h"
 
 #include "block/qdict.h"
 #include "block/nbd.h"
@@ -62,6 +63,47 @@ typedef enum NBDClientState {
     NBD_CLIENT_QUIT
 } NBDClientState;
 
+typedef enum NBDConnectThreadState {
+/* No thread, no pending results */
+    CONNECT_THREAD_NONE,
+
+/* Thread is running, no results for now */
+    CONNECT_THREAD_RUNNING,
+
+/*
+ * Thread is running, but requestor exited. Thread should close the new socket
+ * and free the connect state on exit.
+ */
+    CONNECT_THREAD_RUNNING_DETACHED,
+
+/* Thread finished, results are stored in a state */
+    CONNECT_THREAD_FAIL,
+    CONNECT_THREAD_SUCCESS
+} NBDConnectThreadState;
+
+typedef struct NBDConnectThread {
+/* Initialization constants */
+    SocketAddress *saddr; /* address to connect to */
+/*
+ * Bottom half to schedule on completion. Scheduled only if bh_ctx is not
+ * NULL
+ */
+    QEMUBHFunc *bh_func;
+    void *bh_opaque;
+
+/*
+ * Result of last attempt. Valid in FAIL and SUCCESS states.
+ * If you want to steal error, don't forget to set pointer to NULL.
+ */
+    QIOChannelSocket *sioc;
+    Error *err;
+
+/* state and bh_ctx are protected by mutex */
+    QemuMutex mutex;
+    NBDConnectThreadState state; /* current state of the thread */
+    AioContext *bh_ctx; /* where to schedule bh (NULL means don't schedule) */
+} NBDConnectThread;