On Sat, 28 Nov 2009 05:20:36 pm Amit Shah wrote:
> The old way of sending data to the host was to populate one buffer and
> then wait till the host consumed it before sending the next chunk of
> data.
> 
> Also, there was no support to send large chunks of data.
> 
> We now maintain a per-device list of buffers that are ready to be
> passed on to the host.
> 
> This patch adds support to send big chunks in multiple buffers of
> PAGE_SIZE each to the host.
> 
> When the host consumes the data and signals to us via an interrupt, we
> add the consumed buffer back to our unconsumed list.
> 
> Signed-off-by: Amit Shah <[email protected]>
> ---
>  drivers/char/virtio_console.c |  159 +++++++++++++++++++++++++++++++++-------
>  1 files changed, 131 insertions(+), 28 deletions(-)
> 
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index e8dabae..3111e4c 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c
> @@ -67,9 +67,13 @@ struct ports_device {
>       struct work_struct rx_work;
>  
>       struct list_head unused_read_head;
> +     struct list_head unused_write_head;
>  
>       /* To protect the list of unused read buffers and the in_vq */
>       spinlock_t read_list_lock;
> +
> +     /* To protect the list of unused write buffers and the out_vq */
> +     spinlock_t write_list_lock;
>  };

Let's simplify this a little with a single "buffer_lock" or similar in the
previous patch.

> +     if (!in_count)
>               return 0;

Not necessary: if it happens, all we'll do is gratuitously kick the host.

> +     in_offset = 0; /* offset in the user buffer */
> +     spin_lock_irqsave(&port->portdev->write_list_lock, irqf);
> +     while (in_count - in_offset) {

while (in_offset < in_count) seems clearer to me here.

> +             copy_size = min(in_count - in_offset, PAGE_SIZE);

Shouldn't you be using buf->size here?

> +             spin_unlock_irqrestore(&port->portdev->write_list_lock, irqf);
> +
> +             /*
> +              * Since we're not sure when the host will actually
> +              * consume the data and tell us about it, we have
> +              * to copy the data here in case the caller
> +              * frees the in_buf
> +              */
> +             memcpy(buf->buf, in_buf + in_offset, copy_size);
> +
> +             buf->len = copy_size;
> +             sg_init_one(sg, buf->buf, buf->len);
> +
> +             spin_lock_irqsave(&port->portdev->write_list_lock, irqf);

Dropping the lock here seems gratuitous.

> +/*
> + * This function is only called from the init routine so the spinlock
> + * for the unused_write_head list isn't taken
> + */
> +static void alloc_write_bufs(struct ports_device *portdev)
> +{
> +     struct port_buffer *buf;
> +     int i;
> +
> +     for (i = 0; i < 30; i++) {

Why 30?  And why aren't we allocating these on demand as they're
consumed?

>       fill_receive_queue(portdev);
> +     alloc_write_bufs(portdev);

What happens if this fails?

Rusty.