On 17/09/2021 00:58, Michael S. Tsirkin wrote:
On Thu, Sep 16, 2021 at 10:52:59AM +0200, Laurent Vivier wrote:
On 13/09/2021 10:25, Alexander Potapenko wrote:
Hi Laurent,

I took the latest kernel (5.15-rc1,
6880fa6c56601bb8ed59df6c30fd390cc5f6dd8f) and a slightly modified
config from syzbot (see attached)
The latter has a lot of unnecessary debug checks, but those should not
affect the RNG.

You then need to apply the following patch to the kernel:

====================================================
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index a3db27916256d..a4cba9f0ff8cb 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -433,8 +433,11 @@ static int hwrng_fillfn(void *unused)
                  if (IS_ERR(rng) || !rng)
                          break;
                  mutex_lock(&reading_mutex);
+               memset(rng_fillbuf, 'A', rng_buffer_size());
+               rng_fillbuf[rng_buffer_size()-1] = 0;
                  rc = rng_get_data(rng, rng_fillbuf,
                                    rng_buffer_size(), 1);
+               pr_err("rng_fillbuf: %s\n", rng_fillbuf);
                  mutex_unlock(&reading_mutex);
                  put_rng(rng);
                  if (rc <= 0) {
====================================================

and run the kernel under QEMU.

On my machine I'm seeing the following output:

$ cat log | strings | grep rng_fillbuf
[    4.901931][  T897] rng_fillbuf: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
[    4.903104][  T897] rng_fillbuf: >
[    4.903641][  T897] rng_fillbuf:
[    4.904846][  T897] rng_fillbuf: ?
[    4.913442][  T897] rng_fillbuf: [

, which shows that the first call to rng_get_data() leaves
rng_fillbuf uninitialized.


Thank you for the detailed steps.

The problem happens because we mix two different buffers:
- in add_early_randomness() we provide rng_buffer but don't wait for it to be
filled (see [1])
- in hwrng_fillfn() we provide rng_fillbuf and we do wait for data here, but
the completion we get from QEMU is for the earlier request, so the data end up
in rng_buffer while we expect them in rng_fillbuf (a small user-space model of
this sequence is sketched below).
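
To make the sequence concrete, here is a small user-space model of the
interaction (not kernel code; virtio_read_model(), the 16-byte buffers and the
"HOSTDATA" payload are invented for the illustration):

/* buffer_mixup.c - user-space model of the rng_buffer/rng_fillbuf mix-up */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static bool busy;
static char *registered;        /* buffer currently queued to the "device" */

static char rng_buffer[16];     /* buffer used by add_early_randomness()   */
static char rng_fillbuf[16];    /* buffer used by hwrng_fillfn()           */

/* models virtio_read(): a buffer is only registered when none is pending */
static int virtio_read_model(char *buf, bool wait)
{
	if (!busy) {
		busy = true;
		registered = buf;   /* this is the buffer the host will fill */
	}

	if (!wait)
		return 0;

	/* "wait_for_completion": pretend the host answers right now */
	memcpy(registered, "HOSTDATA", 8);
	busy = false;

	return 8;                   /* length credited to *buf* by the caller */
}

int main(void)
{
	memset(rng_fillbuf, 'A', sizeof(rng_fillbuf) - 1);

	virtio_read_model(rng_buffer, false);   /* add_early_randomness() */
	virtio_read_model(rng_fillbuf, true);   /* hwrng_fillfn()         */

	/* the data went to rng_buffer; rng_fillbuf still holds the 'A's */
	printf("rng_buffer : %.8s\n", rng_buffer);
	printf("rng_fillbuf: %s\n", rng_fillbuf);
	return 0;
}

The second printf prints the 'A' pattern, which matches the dmesg output above.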

There are several ways to fix/workaround that:

1- ignore the read when wait=0 :

diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
index a90001e02bf7..8466d76566fd 100644
--- a/drivers/char/hw_random/virtio-rng.c
+++ b/drivers/char/hw_random/virtio-rng.c
@@ -59,15 +59,15 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
         if (vi->hwrng_removed)
                 return -ENODEV;

+       if (!wait)
+               return 0;
+
         if (!vi->busy) {
                 vi->busy = true;
                 reinit_completion(&vi->have_data);
                 register_buffer(vi, buf, size);
         }

-       if (!wait)
-               return 0;
-
         ret = wait_for_completion_killable(&vi->have_data);
         if (ret < 0)
                 return ret;


2- Use an internal intermediate buffer in virtio-rng, at the cost of a copy;
    I have some patches (somewhere) I can refresh to do that (a sketch of this
    approach follows the list below).

3- modify hw_random/core.c to use only one buffer
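
As an illustration, solution 2 could look something like this in virtio_read()
(only a sketch, not the actual patches mentioned above; vi->data and
vi->data_avail are assumed to be driver-owned fields holding the internal
buffer and the number of bytes the host returned):

static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
{
	struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
	int ret;

	if (vi->hwrng_removed)
		return -ENODEV;

	if (!vi->busy) {
		vi->busy = true;
		reinit_completion(&vi->have_data);
		/* always queue the driver-owned buffer, never the caller's */
		register_buffer(vi, vi->data, sizeof(vi->data));
	}

	if (!wait)
		return 0;

	ret = wait_for_completion_killable(&vi->have_data);
	if (ret < 0)
		return ret;

	vi->busy = false;

	/* copy from the internal buffer into whatever the caller passed in */
	size = min_t(size_t, size, vi->data_avail);
	memcpy(buf, vi->data, size);

	return size;
}

The extra memcpy() is the cost mentioned above, but the caller's buffer is
never handed to the device, so the two core buffers can no longer be mixed.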

Thanks,
Laurent

[1] 78887832e765 ("hwrng: core - don't wait on add_early_randomness()")

4. actually differentiate between the two
using the pointer returned by get_buf.

Even if it can help, I think we should avoid continuing to mix buffers.

For instance, if we submit a buffer with wait=0, the caller can reuse or release the memory while it is still queued in the device's virtqueue.

Moreover, what should we do if the buffers differ?

Wait and use the data in the previous buffer (which may have been modified by its submitter in the meantime)?

Or wait, drop the data, and wait again with the new buffer?
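
To make that concrete, the approach in 4 above could look something like this
in the receive callback (a sketch only, modelled on the driver's existing
random_recv_done(); vi->queued_buf, remembering the last buffer handed to
register_buffer(), is an invented field):

static void random_recv_done(struct virtqueue *vq)
{
	struct virtrng_info *vi = vq->vdev->priv;
	void *buf;

	/* the token returned here is the pointer that was passed to
	 * virtqueue_add_inbuf() when the request was queued */
	buf = virtqueue_get_buf(vi->vq, &vi->data_avail);
	if (!buf)
		return;

	if (buf != vi->queued_buf) {
		/* completion is for an earlier request (e.g. the early
		 * randomness read into rng_buffer), not for the buffer
		 * the current caller is waiting on */
		return;
	}

	complete(&vi->have_data);
}

The questions above are exactly about what that "buf != vi->queued_buf" branch
should do instead of simply returning.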

BTW, I found my patches that introduce an internal buffer in virtio-rng 
(solution 2):

https://github.com/vivier/linux/commits/virtio-rng

Thanks,
Laurent
