Re: [PATCH v2] virtio_console: allocate inbufs in add_port() only if it is needed

2019-11-13 Thread Michael S. Tsirkin
On Wed, Nov 13, 2019 at 05:37:34PM +0100, Laurent Vivier wrote:
> On 13/11/2019 16:22, Michael S. Tsirkin wrote:
> > On Wed, Nov 13, 2019 at 10:21:11AM -0500, Michael S. Tsirkin wrote:
> >> On Wed, Nov 13, 2019 at 04:00:56PM +0100, Laurent Vivier wrote:
> >>> When we hot unplug a virtserialport and then try to hot plug again,
> >>> it fails:
> >>>
> >>> (qemu) chardev-add socket,id=serial0,path=/tmp/serial0,server,nowait
> >>> (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
> >>>   chardev=serial0,id=serial0,name=serial0
> >>> (qemu) device_del serial0
> >>> (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
> >>>   chardev=serial0,id=serial0,name=serial0
> >>> kernel error:
> >>>   virtio-ports vport2p2: Error allocating inbufs
> >>> qemu error:
> >>>   virtio-serial-bus: Guest failure in adding port 2 for device \
> >>>  virtio-serial0.0
> >>>
> >>> This happens because buffers for the in_vq are allocated when the port is
> >>> added but are not released when the port is unplugged.
> >>>
> >>> They are only released when virtconsole is removed (see a7a69ec0d8e4)
> >>>
> >>> To avoid the problem and to be symmetric, we could allocate all the
> >>> buffers in init_vqs() as they are released in remove_vqs(), but it
> >>> sounds like a waste of memory.
> >>>
> >>> Rather than that, this patch changes the add_port() logic to ignore an
> >>> ENOSPC error from fill_queue(), which means the queue has already been
> >>> filled.
> >>>
> >>> Fixes: a7a69ec0d8e4 ("virtio_console: free buffers after reset")
> >>> Cc: m...@redhat.com
> >>> Cc: sta...@vger.kernel.org
> >>> Signed-off-by: Laurent Vivier 
> >>> ---
> >>>
> >>> Notes:
> >>> v2: making fill_queue return int and testing return code for -ENOSPC
> >>>
> >>>  drivers/char/virtio_console.c | 24 +---
> >>>  1 file changed, 9 insertions(+), 15 deletions(-)
> >>>
> >>> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> >>> index 7270e7b69262..9e6534fd1aa4 100644
> >>> --- a/drivers/char/virtio_console.c
> >>> +++ b/drivers/char/virtio_console.c
> >>> @@ -1325,24 +1325,24 @@ static void set_console_size(struct port *port, u16 rows, u16 cols)
> >>>   port->cons.ws.ws_col = cols;
> >>>  }
> >>>  
> >>> -static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock)
> >>> +static int fill_queue(struct virtqueue *vq, spinlock_t *lock)
> >>>  {
> >>>   struct port_buffer *buf;
> >>> - unsigned int nr_added_bufs;
> >>> + int nr_added_bufs;
> >>>   int ret;
> >>>  
> >>>   nr_added_bufs = 0;
> >>>   do {
> >>>   buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
> >>>   if (!buf)
> >>> - break;
> >>> + return -ENOMEM;
> >>>  
> >>>   spin_lock_irq(lock);
> >>>   ret = add_inbuf(vq, buf);
> >>>   if (ret < 0) {
> >>>   spin_unlock_irq(lock);
> >>>   free_buf(buf, true);
> >>> - break;
> >>> + return ret;
> >>>   }
> >>>   nr_added_bufs++;
> >>>   spin_unlock_irq(lock);
> > 
> > So actually, how about handling ENOSPC specially here, and
> > returning success? After all, the queue is full as requested ...
> 
> I think it's interesting to return -ENOSPC so it can be treated as a real
> error in virtcons_probe(), as in that function the queue should not already
> be full (is this right?), and so the real error code is returned.
> 
> Thanks,
> Laurent

OK then. Pls add comments.
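
For reference, the commented call site being asked for could look roughly
like this (a sketch of the direction agreed above, not the actual v3 patch):

	/*
	 * Fill the in_vq with buffers so the host can send us data. -ENOSPC
	 * is not an error here: it means the queue already holds buffers,
	 * e.g. ones left behind by a port that was unplugged earlier, since
	 * inbufs are only freed when the device is removed or reset.
	 */
	err = fill_queue(port->in_vq, &port->inbuf_lock);
	if (err < 0 && err != -ENOSPC) {
		dev_err(port->dev, "Error allocating inbufs\n");
		goto free_device;
	}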



Re: [PATCH net-next 11/14] vsock: add multi-transports support

2019-11-13 Thread Stefano Garzarella
On Wed, Nov 13, 2019 at 02:30:24PM +, Jorgen Hansen wrote:
> > From: Stefano Garzarella [mailto:sgarz...@redhat.com]
> > Sent: Tuesday, November 12, 2019 11:37 AM
> 
> > > > > You already mentioned that you are working on a fix for loopback
> > > > > here for the guest, but presumably a host could also do loopback.
> > > >
> > > > IIUC we don't support loopback in the host, because in this case the
> > > > application will use the CID_HOST as address, but if we are in a nested
> > > > VM environment we are in trouble.
> > >
> > > If both src and dst CID are CID_HOST, we should be fairly sure that this
> > > is host loopback, no? If src is anything else, we would do G2H.
> > >
> > 
> > The problem is that we don't know the src until we assign a transport
> > looking at the dst. (or if the user bound the socket to CID_HOST before
> > the connect(), but it is not very common)
> > 
> > So if we are in L1 and the user uses the local guest CID, it works, but if
> > it uses the HOST_CID, the packet will go to L0.
> > 
> > If we are in L0, it could be simple: we can check whether G2H is loaded,
> > and if it is not, any packet to CID_HOST is host loopback.
> > 
> > I think that if the user uses IOCTL_VM_SOCKETS_GET_LOCAL_CID to set
> > the dest CID for the loopback, it works in both cases, because we return
> > the HOST_CID in L0 and always the guest CID in L1, even if an H2G is
> > loaded to handle the L2.
> > 
> > Maybe we should document this in the man page.
> 
> Yeah, it seems like a good idea to flesh out the routing behavior for nested
> VMs in the man page.

I'll do it.
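
For illustration, a minimal user-space sketch of the
IOCTL_VM_SOCKETS_GET_LOCAL_CID approach described above (a hypothetical
example, with error handling omitted for brevity):

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <linux/vm_sockets.h>

	int main(void)
	{
		unsigned int cid;
		struct sockaddr_vm addr = { .svm_family = AF_VSOCK,
					    .svm_port = 1234 };
		int dev = open("/dev/vsock", O_RDONLY);
		int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

		/* Returns the HOST_CID in L0 and the guest CID in L1. */
		ioctl(dev, IOCTL_VM_SOCKETS_GET_LOCAL_CID, &cid);
		addr.svm_cid = cid;

		/* Connecting to our own CID exercises the loopback path. */
		connect(fd, (struct sockaddr *)&addr, sizeof(addr));
		close(fd);
		close(dev);
		return 0;
	}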

> 
> > 
> > But I have a question: Does vmci support the host loopback?
> > I've tried, and it seems not.
> 
> Only for datagrams - not for stream sockets.
>  

Ok, I'll leave the datagram loopback as before.

> > Also vhost-vsock doesn't support it, but virtio-vsock does.
> > 
> > > >
> > > > Since several people asked about this feature at the KVM Forum, I
> > > > would like to add a new VMADDR_CID_LOCAL (i.e. using the reserved 1)
> > > > and implement loopback in the core.
> > > >
> > > > What do you think?
> > >
> > > What kind of use cases are mentioned in the KVM forum for loopback?
> > > One concern is that we have to maintain yet another interprocess
> > > communication mechanism, even though other choices exist already (and
> > > those are likely to be more efficient given the development time and
> > > specific focus that went into those). To me, the local connections are
> > > mainly useful as a way to sanity test the protocol and transports.
> > > However, if loopback is compelling, it would make sense to have it in
> > > the core, since it shouldn't need a specific transport.
> > 
> > The common use case is from the developer's point of view: to test the
> > protocol and transports, as you said.
> > 
> > People who are introducing VSOCK support in their projects would like to
> > test it on their own PC without starting a VM.
> > 
> > The idea is to move the code that handles loopback from virtio-vsock
> > into the core, but in another series :-)
> 
> OK, that makes sense.

Thanks,
Stefano



Re: [PATCH v2] virtio_console: allocate inbufs in add_port() only if it is needed

2019-11-13 Thread Laurent Vivier
On 13/11/2019 16:22, Michael S. Tsirkin wrote:
> On Wed, Nov 13, 2019 at 10:21:11AM -0500, Michael S. Tsirkin wrote:
>> On Wed, Nov 13, 2019 at 04:00:56PM +0100, Laurent Vivier wrote:
>>> When we hot unplug a virtserialport and then try to hot plug again,
>>> it fails:
>>>
>>> (qemu) chardev-add socket,id=serial0,path=/tmp/serial0,server,nowait
>>> (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
>>>   chardev=serial0,id=serial0,name=serial0
>>> (qemu) device_del serial0
>>> (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
>>>   chardev=serial0,id=serial0,name=serial0
>>> kernel error:
>>>   virtio-ports vport2p2: Error allocating inbufs
>>> qemu error:
>>>   virtio-serial-bus: Guest failure in adding port 2 for device \
>>>  virtio-serial0.0
>>>
>>> This happens because buffers for the in_vq are allocated when the port is
>>> added but are not released when the port is unplugged.
>>>
>>> They are only released when virtconsole is removed (see a7a69ec0d8e4)
>>>
>>> To avoid the problem and to be symmetric, we could allocate all the buffers
>>> in init_vqs() as they are released in remove_vqs(), but it sounds like
>>> a waste of memory.
>>>
>>> Rather than that, this patch changes the add_port() logic to ignore an
>>> ENOSPC error from fill_queue(), which means the queue has already been
>>> filled.
>>>
>>> Fixes: a7a69ec0d8e4 ("virtio_console: free buffers after reset")
>>> Cc: m...@redhat.com
>>> Cc: sta...@vger.kernel.org
>>> Signed-off-by: Laurent Vivier 
>>> ---
>>>
>>> Notes:
>>> v2: making fill_queue return int and testing return code for -ENOSPC
>>>
>>>  drivers/char/virtio_console.c | 24 +---
>>>  1 file changed, 9 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
>>> index 7270e7b69262..9e6534fd1aa4 100644
>>> --- a/drivers/char/virtio_console.c
>>> +++ b/drivers/char/virtio_console.c
>>> @@ -1325,24 +1325,24 @@ static void set_console_size(struct port *port, u16 rows, u16 cols)
>>> port->cons.ws.ws_col = cols;
>>>  }
>>>  
>>> -static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock)
>>> +static int fill_queue(struct virtqueue *vq, spinlock_t *lock)
>>>  {
>>> struct port_buffer *buf;
>>> -   unsigned int nr_added_bufs;
>>> +   int nr_added_bufs;
>>> int ret;
>>>  
>>> nr_added_bufs = 0;
>>> do {
>>> buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
>>> if (!buf)
>>> -   break;
>>> +   return -ENOMEM;
>>>  
>>> spin_lock_irq(lock);
>>> ret = add_inbuf(vq, buf);
>>> if (ret < 0) {
>>> spin_unlock_irq(lock);
>>> free_buf(buf, true);
>>> -   break;
>>> +   return ret;
>>> }
>>> nr_added_bufs++;
>>> spin_unlock_irq(lock);
> 
> So actually, how about handling ENOSPC specially here, and
> returning success? After all, the queue is full as requested ...

I think it's interesting to return -ENOSPC so it can be treated as a real
error in virtcons_probe(), as in that function the queue should not already
be full (is this right?), and so the real error code is returned.

Thanks,
Laurent



Re: [PATCH v2] virtio_console: allocate inbufs in add_port() only if it is needed

2019-11-13 Thread Michael S. Tsirkin
On Wed, Nov 13, 2019 at 10:21:11AM -0500, Michael S. Tsirkin wrote:
> On Wed, Nov 13, 2019 at 04:00:56PM +0100, Laurent Vivier wrote:
> > When we hot unplug a virtserialport and then try to hot plug again,
> > it fails:
> > 
> > (qemu) chardev-add socket,id=serial0,path=/tmp/serial0,server,nowait
> > (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
> >   chardev=serial0,id=serial0,name=serial0
> > (qemu) device_del serial0
> > (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
> >   chardev=serial0,id=serial0,name=serial0
> > kernel error:
> >   virtio-ports vport2p2: Error allocating inbufs
> > qemu error:
> >   virtio-serial-bus: Guest failure in adding port 2 for device \
> >  virtio-serial0.0
> > 
> > This happens because buffers for the in_vq are allocated when the port is
> > added but are not released when the port is unplugged.
> > 
> > They are only released when virtconsole is removed (see a7a69ec0d8e4)
> > 
> > To avoid the problem and to be symmetric, we could allocate all the buffers
> > in init_vqs() as they are released in remove_vqs(), but it sounds like
> > a waste of memory.
> > 
> > Rather than that, this patch changes the add_port() logic to ignore an
> > ENOSPC error from fill_queue(), which means the queue has already been
> > filled.
> > 
> > Fixes: a7a69ec0d8e4 ("virtio_console: free buffers after reset")
> > Cc: m...@redhat.com
> > Cc: sta...@vger.kernel.org
> > Signed-off-by: Laurent Vivier 
> > ---
> > 
> > Notes:
> > v2: making fill_queue return int and testing return code for -ENOSPC
> > 
> >  drivers/char/virtio_console.c | 24 +---
> >  1 file changed, 9 insertions(+), 15 deletions(-)
> > 
> > diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> > index 7270e7b69262..9e6534fd1aa4 100644
> > --- a/drivers/char/virtio_console.c
> > +++ b/drivers/char/virtio_console.c
> > @@ -1325,24 +1325,24 @@ static void set_console_size(struct port *port, u16 rows, u16 cols)
> > port->cons.ws.ws_col = cols;
> >  }
> >  
> > -static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock)
> > +static int fill_queue(struct virtqueue *vq, spinlock_t *lock)
> >  {
> > struct port_buffer *buf;
> > -   unsigned int nr_added_bufs;
> > +   int nr_added_bufs;
> > int ret;
> >  
> > nr_added_bufs = 0;
> > do {
> > buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
> > if (!buf)
> > -   break;
> > +   return -ENOMEM;
> >  
> > spin_lock_irq(lock);
> > ret = add_inbuf(vq, buf);
> > if (ret < 0) {
> > spin_unlock_irq(lock);
> > free_buf(buf, true);
> > -   break;
> > +   return ret;
> > }
> > nr_added_bufs++;
> > spin_unlock_irq(lock);

So actually, how about handling ENOSPC specially here, and
returning success? After all, the queue is full as requested ...
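
For concreteness, the suggestion would amount to something like this inside
fill_queue() (a sketch of the alternative being floated here, not code from
the patch):

	ret = add_inbuf(vq, buf);
	if (ret < 0) {
		spin_unlock_irq(lock);
		free_buf(buf, true);
		/* A full queue is exactly what was asked for, so treat
		 * -ENOSPC as success rather than as a failure. */
		if (ret == -ENOSPC)
			return nr_added_bufs;
		return ret;
	}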


> > @@ -1362,7 +1362,6 @@ static int add_port(struct ports_device *portdev, u32 id)
> > char debugfs_name[16];
> > struct port *port;
> > dev_t devt;
> > -   unsigned int nr_added_bufs;
> > int err;
> >  
> > port = kmalloc(sizeof(*port), GFP_KERNEL);
> > @@ -1421,11 +1420,9 @@ static int add_port(struct ports_device *portdev, u32 id)
> > spin_lock_init(&port->outvq_lock);
> > init_waitqueue_head(&port->waitqueue);
> >  
> > -   /* Fill the in_vq with buffers so the host can send us data. */
> > -   nr_added_bufs = fill_queue(port->in_vq, &port->inbuf_lock);
> > -   if (!nr_added_bufs) {
> > +   err = fill_queue(port->in_vq, &port->inbuf_lock);
> > +   if (err < 0 && err != -ENOSPC) {
> > dev_err(port->dev, "Error allocating inbufs\n");
> > -   err = -ENOMEM;
> > goto free_device;
> > }
> >  
> 
Pls add a comment explaining that -ENOSPC happens when the
queue already has buffers (e.g. from a previous detach).
> 
> 
> > @@ -2059,14 +2056,11 @@ static int virtcons_probe(struct virtio_device *vdev)
> > INIT_WORK(&portdev->control_work, &control_work_handler);
> >  
> > if (multiport) {
> > -   unsigned int nr_added_bufs;
> > -
> > spin_lock_init(&portdev->c_ivq_lock);
> > spin_lock_init(&portdev->c_ovq_lock);
> >  
> > -   nr_added_bufs = fill_queue(portdev->c_ivq,
> > -                              &portdev->c_ivq_lock);
> > -   if (!nr_added_bufs) {
> > +   err = fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
> > +   if (err < 0) {
> > dev_err(&vdev->dev,
> > "Error allocating buffers for control queue\n");
> > /*
> > @@ -2077,7 +2071,7 @@ static int virtcons_probe(struct virtio_device *vdev)
> >VIRTIO_CONSOLE_DEVICE_READY, 0);
> > /* Device was functional: we need full cleanup. */
> > virtcons_remove(vdev);
> > 

Re: [PATCH v2] virtio_console: allocate inbufs in add_port() only if it is needed

2019-11-13 Thread Michael S. Tsirkin
On Wed, Nov 13, 2019 at 04:00:56PM +0100, Laurent Vivier wrote:
> When we hot unplug a virtserialport and then try to hot plug again,
> it fails:
> 
> (qemu) chardev-add socket,id=serial0,path=/tmp/serial0,server,nowait
> (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
>   chardev=serial0,id=serial0,name=serial0
> (qemu) device_del serial0
> (qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
>   chardev=serial0,id=serial0,name=serial0
> kernel error:
>   virtio-ports vport2p2: Error allocating inbufs
> qemu error:
>   virtio-serial-bus: Guest failure in adding port 2 for device \
>  virtio-serial0.0
> 
> This happens because buffers for the in_vq are allocated when the port is
> added but are not released when the port is unplugged.
> 
> They are only released when virtconsole is removed (see a7a69ec0d8e4)
> 
> To avoid the problem and to be symmetric, we could allocate all the buffers
> in init_vqs() as they are released in remove_vqs(), but it sounds like
> a waste of memory.
> 
> Rather than that, this patch changes the add_port() logic to ignore an
> ENOSPC error from fill_queue(), which means the queue has already been
> filled.
> 
> Fixes: a7a69ec0d8e4 ("virtio_console: free buffers after reset")
> Cc: m...@redhat.com
> Cc: sta...@vger.kernel.org
> Signed-off-by: Laurent Vivier 
> ---
> 
> Notes:
> v2: making fill_queue return int and testing return code for -ENOSPC
> 
>  drivers/char/virtio_console.c | 24 +---
>  1 file changed, 9 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index 7270e7b69262..9e6534fd1aa4 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c
> @@ -1325,24 +1325,24 @@ static void set_console_size(struct port *port, u16 rows, u16 cols)
>   port->cons.ws.ws_col = cols;
>  }
>  
> -static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock)
> +static int fill_queue(struct virtqueue *vq, spinlock_t *lock)
>  {
>   struct port_buffer *buf;
> - unsigned int nr_added_bufs;
> + int nr_added_bufs;
>   int ret;
>  
>   nr_added_bufs = 0;
>   do {
>   buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
>   if (!buf)
> - break;
> + return -ENOMEM;
>  
>   spin_lock_irq(lock);
>   ret = add_inbuf(vq, buf);
>   if (ret < 0) {
>   spin_unlock_irq(lock);
>   free_buf(buf, true);
> - break;
> + return ret;
>   }
>   nr_added_bufs++;
>   spin_unlock_irq(lock);
> @@ -1362,7 +1362,6 @@ static int add_port(struct ports_device *portdev, u32 id)
>   char debugfs_name[16];
>   struct port *port;
>   dev_t devt;
> - unsigned int nr_added_bufs;
>   int err;
>  
>   port = kmalloc(sizeof(*port), GFP_KERNEL);
> @@ -1421,11 +1420,9 @@ static int add_port(struct ports_device *portdev, u32 id)
>   spin_lock_init(&port->outvq_lock);
>   init_waitqueue_head(&port->waitqueue);
>  
> - /* Fill the in_vq with buffers so the host can send us data. */
> - nr_added_bufs = fill_queue(port->in_vq, &port->inbuf_lock);
> - if (!nr_added_bufs) {
> + err = fill_queue(port->in_vq, &port->inbuf_lock);
> + if (err < 0 && err != -ENOSPC) {
>   dev_err(port->dev, "Error allocating inbufs\n");
> - err = -ENOMEM;
>   goto free_device;
>   }
>  

Pls add a comment explaining that -ENOSPC happens when the
queue already has buffers (e.g. from a previous detach).


> @@ -2059,14 +2056,11 @@ static int virtcons_probe(struct virtio_device *vdev)
>   INIT_WORK(&portdev->control_work, &control_work_handler);
>  
>   if (multiport) {
> - unsigned int nr_added_bufs;
> -
>   spin_lock_init(&portdev->c_ivq_lock);
>   spin_lock_init(&portdev->c_ovq_lock);
>  
> - nr_added_bufs = fill_queue(portdev->c_ivq,
> -                            &portdev->c_ivq_lock);
> - if (!nr_added_bufs) {
> + err = fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
> + if (err < 0) {
>   dev_err(&vdev->dev,
>   "Error allocating buffers for control queue\n");
>   /*
> @@ -2077,7 +2071,7 @@ static int virtcons_probe(struct virtio_device *vdev)
>  VIRTIO_CONSOLE_DEVICE_READY, 0);
>   /* Device was functional: we need full cleanup. */
>   virtcons_remove(vdev);
> - return -ENOMEM;
> + return err;
>   }
>   } else {
>   /*
> -- 
> 2.23.0



[PATCH v2] virtio_console: allocate inbufs in add_port() only if it is needed

2019-11-13 Thread Laurent Vivier
When we hot unplug a virtserialport and then try to hot plug again,
it fails:

(qemu) chardev-add socket,id=serial0,path=/tmp/serial0,server,nowait
(qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
  chardev=serial0,id=serial0,name=serial0
(qemu) device_del serial0
(qemu) device_add virtserialport,bus=virtio-serial0.0,nr=2,\
  chardev=serial0,id=serial0,name=serial0
kernel error:
  virtio-ports vport2p2: Error allocating inbufs
qemu error:
  virtio-serial-bus: Guest failure in adding port 2 for device \
 virtio-serial0.0

This happens because buffers for the in_vq are allocated when the port is
added but are not released when the port is unplugged.

They are only released when virtconsole is removed (see a7a69ec0d8e4)

To avoid the problem and to be symmetric, we could allocate all the buffers
in init_vqs() as they are released in remove_vqs(), but it sounds like
a waste of memory.

Rather than that, this patch changes the add_port() logic to ignore an
ENOSPC error from fill_queue(), which means the queue has already been
filled.

Fixes: a7a69ec0d8e4 ("virtio_console: free buffers after reset")
Cc: m...@redhat.com
Cc: sta...@vger.kernel.org
Signed-off-by: Laurent Vivier 
---

Notes:
v2: making fill_queue return int and testing return code for -ENOSPC

 drivers/char/virtio_console.c | 24 +---
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
index 7270e7b69262..9e6534fd1aa4 100644
--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -1325,24 +1325,24 @@ static void set_console_size(struct port *port, u16 rows, u16 cols)
port->cons.ws.ws_col = cols;
 }
 
-static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock)
+static int fill_queue(struct virtqueue *vq, spinlock_t *lock)
 {
struct port_buffer *buf;
-   unsigned int nr_added_bufs;
+   int nr_added_bufs;
int ret;
 
nr_added_bufs = 0;
do {
buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
if (!buf)
-   break;
+   return -ENOMEM;
 
spin_lock_irq(lock);
ret = add_inbuf(vq, buf);
if (ret < 0) {
spin_unlock_irq(lock);
free_buf(buf, true);
-   break;
+   return ret;
}
nr_added_bufs++;
spin_unlock_irq(lock);
@@ -1362,7 +1362,6 @@ static int add_port(struct ports_device *portdev, u32 id)
char debugfs_name[16];
struct port *port;
dev_t devt;
-   unsigned int nr_added_bufs;
int err;
 
port = kmalloc(sizeof(*port), GFP_KERNEL);
@@ -1421,11 +1420,9 @@ static int add_port(struct ports_device *portdev, u32 id)
spin_lock_init(&port->outvq_lock);
init_waitqueue_head(&port->waitqueue);
 
-   /* Fill the in_vq with buffers so the host can send us data. */
-   nr_added_bufs = fill_queue(port->in_vq, &port->inbuf_lock);
-   if (!nr_added_bufs) {
+   err = fill_queue(port->in_vq, &port->inbuf_lock);
+   if (err < 0 && err != -ENOSPC) {
dev_err(port->dev, "Error allocating inbufs\n");
-   err = -ENOMEM;
goto free_device;
}
 
@@ -2059,14 +2056,11 @@ static int virtcons_probe(struct virtio_device *vdev)
INIT_WORK(&portdev->control_work, &control_work_handler);
 
if (multiport) {
-   unsigned int nr_added_bufs;
-
spin_lock_init(&portdev->c_ivq_lock);
spin_lock_init(&portdev->c_ovq_lock);
 
-   nr_added_bufs = fill_queue(portdev->c_ivq,
-                              &portdev->c_ivq_lock);
-   if (!nr_added_bufs) {
+   err = fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
+   if (err < 0) {
dev_err(&vdev->dev,
"Error allocating buffers for control queue\n");
/*
@@ -2077,7 +2071,7 @@ static int virtcons_probe(struct virtio_device *vdev)
   VIRTIO_CONSOLE_DEVICE_READY, 0);
/* Device was functional: we need full cleanup. */
virtcons_remove(vdev);
-   return -ENOMEM;
+   return err;
}
} else {
/*
-- 
2.23.0



Re: [PATCH 3/3] virtiofs: Use completions while waiting for queue to be drained

2019-11-13 Thread Stefan Hajnoczi
On Wed, Oct 30, 2019 at 11:07:19AM -0400, Vivek Goyal wrote:
> While we wait for queue to finish draining, use completions instead of
> uslee_range(). This is better way of waiting for event.

s/uslee_range()/usleep_range()/
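
For reference, the completion-based wait pattern the patch moves to looks
roughly like this (a generic sketch of the linux/completion.h API with
made-up function names, not the actual virtiofs code):

	#include <linux/completion.h>

	static DECLARE_COMPLETION(drain_done);

	/* Request side: signal when the last in-flight request finishes. */
	static void virtio_fs_request_done(void)
	{
		complete(&drain_done);
	}

	/* Drain side: sleep until signalled instead of polling with
	 * usleep_range(). */
	static void virtio_fs_wait_drained(void)
	{
		wait_for_completion(&drain_done);
	}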



Re: [PATCH 0/3] virtiofs: Small Cleanups for 5.5

2019-11-13 Thread Stefan Hajnoczi
On Wed, Oct 30, 2019 at 11:07:16AM -0400, Vivek Goyal wrote:
> Hi Miklos,
> 
> Here are a few small cleanups for virtiofs for 5.5. I had received some
> comments from Michael Tsirkin on the original virtiofs patches, and these
> cleanups are the result of those comments.
> 
> Thanks
> Vivek
> 
> Vivek Goyal (3):
>   virtiofs: Use a common function to send forget
>   virtiofs: Do not send forget request "struct list_head" element
>   virtiofs: Use completions while waiting for queue to be drained
> 
>  fs/fuse/virtio_fs.c | 204 ++--
>  1 file changed, 103 insertions(+), 101 deletions(-)
> 
> -- 
> 2.20.1
> 

There are typos in the commit descriptions but the code looks fine:

Reviewed-by: Stefan Hajnoczi 



Re: [PATCH 2/3] virtiofs: Do not send forget request "struct list_head" element

2019-11-13 Thread Stefan Hajnoczi
On Wed, Oct 30, 2019 at 11:07:18AM -0400, Vivek Goyal wrote:
> We are sending whole of virtio_fs_foreget struct to the other end over

s/foreget/forget/



RE: [PATCH net-next 11/14] vsock: add multi-transports support

2019-11-13 Thread Jorgen Hansen via Virtualization
> From: Stefano Garzarella [mailto:sgarz...@redhat.com]
> Sent: Tuesday, November 12, 2019 11:37 AM

> > > > You already mentioned that you are working on a fix for loopback
> > > > here for the guest, but presumably a host could also do loopback.
> > >
> > > IIUC we don't support loopback in the host, because in this case the
> > > application will use the CID_HOST as address, but if we are in a nested
> > > VM environment we are in trouble.
> >
> > If both src and dst CID are CID_HOST, we should be fairly sure that this
> > is host loopback, no? If src is anything else, we would do G2H.
> >
> 
> The problem is that we don't know the src until we assign a transport
> looking at the dst. (or if the user bound the socket to CID_HOST before
> the connect(), but it is not very common)
> 
> So if we are in L1 and the user uses the local guest CID, it works, but if
> it uses the HOST_CID, the packet will go to L0.
> 
> If we are in L0, it could be simple: we can check whether G2H is loaded,
> and if it is not, any packet to CID_HOST is host loopback.
> 
> I think that if the user uses IOCTL_VM_SOCKETS_GET_LOCAL_CID to set
> the dest CID for the loopback, it works in both cases, because we return
> the HOST_CID in L0 and always the guest CID in L1, even if an H2G is
> loaded to handle the L2.
> 
> Maybe we should document this in the man page.

Yeah, it seems like a good idea to flesh out the routing behavior for nested
VMs in the man page.

> 
> But I have a question: Does vmci support the host loopback?
> I've tried, and it seems not.

Only for datagrams - not for stream sockets.
 
> Also vhost-vsock doesn't support it, but virtio-vsock does.
> 
> > >
> > > Since several people asked about this feature at the KVM Forum, I
> > > would like to add a new VMADDR_CID_LOCAL (i.e. using the reserved 1)
> > > and implement loopback in the core.
> > >
> > > What do you think?
> >
> > What kind of use cases are mentioned in the KVM forum for loopback?
> > One concern is that we have to maintain yet another interprocess
> > communication mechanism, even though other choices exist already (and
> > those are likely to be more efficient given the development time and
> > specific focus that went into those). To me, the local connections are
> > mainly useful as a way to sanity test the protocol and transports.
> > However, if loopback is compelling, it would make sense to have it in
> > the core, since it shouldn't need a specific transport.
> 
> The common use case is from the developer's point of view: to test the
> protocol and transports, as you said.
> 
> People who are introducing VSOCK support in their projects would like to
> test it on their own PC without starting a VM.
> 
> The idea is to move the code that handles loopback from virtio-vsock
> into the core, but in another series :-)

OK, that makes sense.

Thanks,
Jorgen
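
To make the routing rules discussed above concrete, a rough sketch of
dst-CID-based transport selection (hypothetical pseudocode for the scheme
under discussion; transport_g2h, transport_h2g and transport_loopback are
illustrative names, not the actual net/vmw_vsock code):

	static const struct vsock_transport *
	vsock_pick_transport(unsigned int dst_cid)
	{
		if (dst_cid <= VMADDR_CID_HOST) {
			/* Destination is the host/hypervisor: use the
			 * guest-to-host transport when one is loaded. */
			if (transport_g2h)
				return transport_g2h;
			/* No G2H loaded means we are L0, so a packet to
			 * CID_HOST is host loopback. */
			return transport_loopback;
		}
		/* Any other CID names a guest we may be hosting. */
		return transport_h2g;
	}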