Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-07-08 Thread Heng Qi

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-07-08 Thread Jiri Pirko

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-26 Thread Jiri Pirko

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-26 Thread Michael S. Tsirkin

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-26 Thread Jiri Pirko

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-26 Thread Michael S. Tsirkin

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-26 Thread Jiri Pirko
Thu, Jun 20, 2024 at 12:31:34PM CEST, hen...@linux.alibaba.com wrote:
>On Thu, 20 Jun 2024 06:11:40 -0400, "Michael S. Tsirkin"  
>wrote:
>> On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
>> > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
>> > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  
>> > > wrote:
>> > > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
>> > > > >
>> > > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
>> > > > > wrote:
>> > > > > >
>> > > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
>> > > > > >  wrote:
>> > > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
>> > > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
>> > > > > > > > virtnet_info *vi)
>> > > > > > > >
>> > > > > > > > /* Parameters for control virtqueue, if any */
>> > > > > > > > if (vi->has_cvq) {
>> > > > > > > > -   callbacks[total_vqs - 1] = NULL;
>> > > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
>> > > > > > > > names[total_vqs - 1] = "control";
>> > > > > > > > }
>> > > > > > > >
>> > > > > > >
>> > > > > > > If the # of MSIX vectors is exactly for data path VQs,
>> > > > > > > this will cause irq sharing between VQs which will degrade
>> > > > > > > performance significantly.
>> > > > > > >
>> > > > >
>> > > > > Why do we need to care about buggy management? I think libvirt has
>> > > > > been teached to use 2N+2 since the introduction of the multiqueue[1].
>> > > > 
>> > > > And Qemu can calculate it correctly automatically since:
>> > > > 
>> > > > commit 51a81a2118df0c70988f00d61647da9e298483a4
>> > > > Author: Jason Wang 
>> > > > Date:   Mon Mar 8 12:49:19 2021 +0800
>> > > > 
>> > > > virtio-net: calculating proper msix vectors on init
>> > > > 
>> > > > Currently, the default msix vectors for virtio-net-pci is 3 which 
>> > > > is
>> > > > obvious not suitable for multiqueue guest, so we depends on the 
>> > > > user
>> > > > or management tools to pass a correct vectors parameter. In fact, 
>> > > > we
>> > > > can simplifying this by calculating the number of vectors on 
>> > > > realize.
>> > > > 
>> > > > Consider we have N queues, the number of vectors needed is 2*N + 2
>> > > > (#queue pairs + plus one config interrupt and control vq). We 
>> > > > didn't
>> > > > check whether or not host support control vq because it was added
>> > > > unconditionally by qemu to avoid breaking legacy guests such as 
>> > > > Minix.
>> > > > 
>> > > > Reviewed-by: Philippe Mathieu-Daudé > > > > Reviewed-by: Stefano Garzarella 
>> > > > Reviewed-by: Stefan Hajnoczi 
>> > > > Signed-off-by: Jason Wang 
>> > > 
>> > > Yes, devices designed according to the spec need to reserve an interrupt
>> > > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
>> > > devices?
>> > > 
>> > > Thanks.
>> > 
>> > These aren't buggy, the spec allows this. So don't fail, but
>> > I'm fine with using polling if not enough vectors.
>> 
>> sharing with config interrupt is easier code-wise though, FWIW -
>> we don't need to maintain two code-paths.
>
>Yes, it works well - the config change irq was rarely used before - and it will not fail.

Please note I'm working on such a fallback for the admin queue. I would like
to send the patchset by the end of this week. You can then use it easily
for cvq.

Something like:
/* the config->find_vqs() implementation */
int vp_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
		struct virtqueue *vqs[], vq_callback_t *callbacks[],
		const char * const names[], const bool *ctx,
		struct irq_affinity *desc)
{
	int err;

	/* Try MSI-X with one vector per queue. */
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       VP_VQ_VECTOR_POLICY_EACH, ctx, desc);
	if (!err)
		return 0;
	/* Fallback: MSI-X with one shared vector for config and
	 * slow path queues, one vector per queue for the rest.
	 */
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       VP_VQ_VECTOR_POLICY_SHARED_SLOW, ctx, desc);
	if (!err)
		return 0;
	/* Fallback: MSI-X with one vector for config, one shared for queues. */
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       VP_VQ_VECTOR_POLICY_SHARED, ctx, desc);
	if (!err)
		return 0;
	/* Is there an interrupt? If not give up. */
	if (!(to_vp_device(vdev)->pci_dev->irq))
		return err;
	/* Finally fall back to regular interrupts. */
	return vp_find_vqs_intx(vdev, nvqs, vqs, callbacks, names, ctx);
}




>
>Thanks.
>
>> 
>> > > > 
>> > > > Thanks
>> > > > 
>> > > > >
>> > > > > > > So no, you can not just do it unconditionally.
>> > > > > > >
>> > > > > > > The correct fix probably requi

Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-25 Thread Michael S. Tsirkin
On Tue, Jun 25, 2024 at 09:27:24AM +0800, Jason Wang wrote:
> On Thu, Jun 20, 2024 at 6:12 PM Michael S. Tsirkin  wrote:
> >
> > On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  
> > > > wrote:
> > > > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  
> > > > > wrote:
> > > > > >
> > > > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
> > > > > > wrote:
> > > > > > >
> > > > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > > > > >  wrote:
> > > > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > > > > virtnet_info *vi)
> > > > > > > > >
> > > > > > > > > /* Parameters for control virtqueue, if any */
> > > > > > > > > if (vi->has_cvq) {
> > > > > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > > > > names[total_vqs - 1] = "control";
> > > > > > > > > }
> > > > > > > > >
> > > > > > > >
> > > > > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > > > > this will cause irq sharing between VQs which will degrade
> > > > > > > > performance significantly.
> > > > > > > >
> > > > > >
> > > > > > Why do we need to care about buggy management? I think libvirt has
> > > > > > been taught to use 2N+2 since the introduction of the
> > > > > > multiqueue[1].
> > > > >
> > > > > And Qemu can calculate it correctly automatically since:
> > > > >
> > > > > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > > > > Author: Jason Wang 
> > > > > Date:   Mon Mar 8 12:49:19 2021 +0800
> > > > >
> > > > > virtio-net: calculating proper msix vectors on init
> > > > >
> > > > > Currently, the default msix vectors for virtio-net-pci is 3 which 
> > > > > is
> > > > > obvious not suitable for multiqueue guest, so we depends on the 
> > > > > user
> > > > > or management tools to pass a correct vectors parameter. In fact, 
> > > > > we
> > > > > can simplifying this by calculating the number of vectors on 
> > > > > realize.
> > > > >
> > > > > Consider we have N queues, the number of vectors needed is 2*N + 2
> > > > > (#queue pairs + plus one config interrupt and control vq). We 
> > > > > didn't
> > > > > check whether or not host support control vq because it was added
> > > > > unconditionally by qemu to avoid breaking legacy guests such as 
> > > > > Minix.
> > > > >
> > > > > Reviewed-by: Philippe Mathieu-Daudé 
> > > > > Reviewed-by: Stefano Garzarella 
> > > > > Reviewed-by: Stefan Hajnoczi 
> > > > > Signed-off-by: Jason Wang 
> > > >
> > > > Yes, devices designed according to the spec need to reserve an interrupt
> > > > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
> > > > devices?
> > > >
> > > > Thanks.
> > >
> > > These aren't buggy, the spec allows this.
> 
> So it doesn't differ from the case when we are lacking sufficient msix
> vectors in the case of multiqueue. In that case we just fall back to
> sharing one msix for all queues and another for config, and we don't
> bother at that time.

sharing queues is exactly "bothering".

> Any reason to bother now?
> 
> Thanks

This patch can make datapath slower for such configs by switching to
sharing msix for a control path benefit. Not a tradeoff many users want
to make.


> > >  So don't fail, but
> > > I'm fine with using polling if not enough vectors.
> >
> > sharing with config interrupt is easier code-wise though, FWIW -
> > we don't need to maintain two code-paths.
> >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > > > So no, you can not just do it unconditionally.
> > > > > > > >
> > > > > > > > The correct fix probably requires virtio core/API extensions.
> > > > > > >
> > > > > > > If the introduction of cvq irq causes interrupts to become 
> > > > > > > shared, then
>> > > > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > > > >
>> > > > > Having two paths sounds like a burden.
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > [1] https://www.linux-kvm.org/page/Multiqueue
> > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > MST
> > > > > > > >
> > > > > > >
> > > > >
> >




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-24 Thread Jason Wang
On Thu, Jun 20, 2024 at 6:12 PM Michael S. Tsirkin  wrote:
>
> On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  
> > > wrote:
> > > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
> > > > >
> > > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
> > > > > wrote:
> > > > > >
> > > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > > > >  wrote:
> > > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > > > virtnet_info *vi)
> > > > > > > >
> > > > > > > > /* Parameters for control virtqueue, if any */
> > > > > > > > if (vi->has_cvq) {
> > > > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > > > names[total_vqs - 1] = "control";
> > > > > > > > }
> > > > > > > >
> > > > > > >
> > > > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > > > this will cause irq sharing between VQs which will degrade
> > > > > > > performance significantly.
> > > > > > >
> > > > >
> > > > > Why do we need to care about buggy management? I think libvirt has
> > > > > been taught to use 2N+2 since the introduction of the multiqueue[1].
> > > >
> > > > And Qemu can calculate it correctly automatically since:
> > > >
> > > > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > > > Author: Jason Wang 
> > > > Date:   Mon Mar 8 12:49:19 2021 +0800
> > > >
> > > > virtio-net: calculating proper msix vectors on init
> > > >
> > > > Currently, the default msix vectors for virtio-net-pci is 3 which is
> > > > obvious not suitable for multiqueue guest, so we depends on the user
> > > > or management tools to pass a correct vectors parameter. In fact, we
> > > > can simplifying this by calculating the number of vectors on 
> > > > realize.
> > > >
> > > > Consider we have N queues, the number of vectors needed is 2*N + 2
> > > > (#queue pairs + plus one config interrupt and control vq). We didn't
> > > > check whether or not host support control vq because it was added
> > > > unconditionally by qemu to avoid breaking legacy guests such as 
> > > > Minix.
> > > >
> > > > Reviewed-by: Philippe Mathieu-Daudé 
> > > > Reviewed-by: Stefano Garzarella 
> > > > Reviewed-by: Stefan Hajnoczi 
> > > > Signed-off-by: Jason Wang 
> > >
> > > Yes, devices designed according to the spec need to reserve an interrupt
> > > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
> > > devices?
> > >
> > > Thanks.
> >
> > These aren't buggy, the spec allows this.

So it doesn't differ from the case when we are lacking sufficient msix
vectors in the case of multiqueue. In that case we just fall back to
sharing one msix for all queues and another for config, and we don't
bother at that time.

Any reason to bother now?

Thanks

> >  So don't fail, but
> > I'm fine with using polling if not enough vectors.
>
> sharing with config interrupt is easier code-wise though, FWIW -
> we don't need to maintain two code-paths.
>
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > > > So no, you can not just do it unconditionally.
> > > > > > >
> > > > > > > The correct fix probably requires virtio core/API extensions.
> > > > > >
> > > > > > If the introduction of cvq irq causes interrupts to become shared, 
> > > > > > then
> > > > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > > >
> > > > > Having two paths sounds like a burden.
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > [1] https://www.linux-kvm.org/page/Multiqueue
> > > > >
> > > > > > >
> > > > > > > --
> > > > > > > MST
> > > > > > >
> > > > > >
> > > >
>




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-24 Thread Michael S. Tsirkin
On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> If the device does not respond to a request for a long time,
> then control vq polling elevates CPU utilization, a problem that
> worsens with more command requests.
> 
> Enabling the control vq's irq is advantageous for the guest, and
> this still doesn't support concurrent requests.
> 
> Suggested-by: Jason Wang 
> Signed-off-by: Heng Qi 
> ---
>  drivers/net/virtio_net.c | 22 +-
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index b45f58a902e3..ed10084997d3 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -372,6 +372,8 @@ struct virtio_net_ctrl_rss {
>  struct control_buf {
>   struct virtio_net_ctrl_hdr hdr;
>   virtio_net_ctrl_ack status;
> + /* Wait for the device to complete the cvq request. */
> + struct completion completion;
>  };
>  
>  struct virtnet_info {
> @@ -664,6 +666,13 @@ static bool virtqueue_napi_complete(struct napi_struct 
> *napi,
>   return false;
>  }
>  
> +static void virtnet_cvq_done(struct virtqueue *cvq)
> +{
> + struct virtnet_info *vi = cvq->vdev->priv;
> +
> + complete(&vi->ctrl->completion);
> +}
> +
>  static void skb_xmit_done(struct virtqueue *vq)
>  {
>   struct virtnet_info *vi = vq->vdev->priv;

> @@ -2724,14 +2733,8 @@ static bool virtnet_send_command_reply(struct 
> virtnet_info *vi,
>   if (unlikely(!virtqueue_kick(vi->cvq)))
>   goto unlock;
>  
> - /* Spin for a response, the kick causes an ioport write, trapping
> -  * into the hypervisor, so the request should be handled immediately.
> -  */
> - while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> -!virtqueue_is_broken(vi->cvq)) {
> - cond_resched();
> - cpu_relax();
> - }
> + wait_for_completion(&ctrl->completion);
> + virtqueue_get_buf(vi->cvq, &tmp);
>  
>  unlock:
>   ok = ctrl->status == VIRTIO_NET_OK;

Hmm, no, this is not a good idea: code should be robust in case
of spurious interrupts.

Also surprise removal is now broken ...


> @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>  
>   /* Parameters for control virtqueue, if any */
>   if (vi->has_cvq) {
> - callbacks[total_vqs - 1] = NULL;
> + callbacks[total_vqs - 1] = virtnet_cvq_done;
>   names[total_vqs - 1] = "control";
>   }
>  
> @@ -5832,6 +5835,7 @@ static int virtnet_probe(struct virtio_device *vdev)
>   if (vi->has_rss || vi->has_rss_hash_report)
>   virtnet_init_default_rss(vi);
>  
> + init_completion(&vi->ctrl->completion);
>   enable_rx_mode_work(vi);
>  
>   /* serialize netdev register + virtio_device_ready() with ndo_open() */
> -- 
> 2.32.0.3.g01195cf9f




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-21 Thread Michael S. Tsirkin
On Fri, Jun 21, 2024 at 03:41:46PM +0800, Xuan Zhuo wrote:
> On Thu, 20 Jun 2024 06:11:40 -0400, "Michael S. Tsirkin"  
> wrote:
> > On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  
> > > > wrote:
> > > > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  
> > > > > wrote:
> > > > > >
> > > > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
> > > > > > wrote:
> > > > > > >
> > > > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > > > > >  wrote:
> > > > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > > > > virtnet_info *vi)
> > > > > > > > >
> > > > > > > > > /* Parameters for control virtqueue, if any */
> > > > > > > > > if (vi->has_cvq) {
> > > > > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > > > > names[total_vqs - 1] = "control";
> > > > > > > > > }
> > > > > > > > >
> > > > > > > >
> > > > > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > > > > this will cause irq sharing between VQs which will degrade
> > > > > > > > performance significantly.
> > > > > > > >
> > > > > >
> > > > > > Why do we need to care about buggy management? I think libvirt has
> > > > > > been taught to use 2N+2 since the introduction of the
> > > > > > multiqueue[1].
> > > > >
> > > > > And Qemu can calculate it correctly automatically since:
> > > > >
> > > > > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > > > > Author: Jason Wang 
> > > > > Date:   Mon Mar 8 12:49:19 2021 +0800
> > > > >
> > > > > virtio-net: calculating proper msix vectors on init
> > > > >
> > > > > Currently, the default msix vectors for virtio-net-pci is 3 which 
> > > > > is
> > > > > obvious not suitable for multiqueue guest, so we depends on the 
> > > > > user
> > > > > or management tools to pass a correct vectors parameter. In fact, 
> > > > > we
> > > > > can simplifying this by calculating the number of vectors on 
> > > > > realize.
> > > > >
> > > > > Consider we have N queues, the number of vectors needed is 2*N + 2
> > > > > (#queue pairs + plus one config interrupt and control vq). We 
> > > > > didn't
> > > > > check whether or not host support control vq because it was added
> > > > > unconditionally by qemu to avoid breaking legacy guests such as 
> > > > > Minix.
> > > > >
> > > > > Reviewed-by: Philippe Mathieu-Daudé 
> > > > > Reviewed-by: Stefano Garzarella 
> > > > > Reviewed-by: Stefan Hajnoczi 
> > > > > Signed-off-by: Jason Wang 
> > > >
> > > > Yes, devices designed according to the spec need to reserve an interrupt
> > > > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
> > > > devices?
> > > >
> > > > Thanks.
> > >
> > > These aren't buggy, the spec allows this. So don't fail, but
> > > I'm fine with using polling if not enough vectors.
> >
> > sharing with config interrupt is easier code-wise though, FWIW -
> > we don't need to maintain two code-paths.
> 
> 
> If we do that, we should introduce a new helper, not add a new function to
> find_vqs().
> 
> if ->share_irq_with_config:
>   ->share_irq_with_config(..., vq_idx)
> 
> Thanks.

Generally, I'd like to avoid something very narrow and single-use
in the API. Just telling virtio core that a vq is slow-path
sounds generic enough, and we should be able to extend that
later to support sharing between VQs.

> 
> >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > > > So no, you can not just do it unconditionally.
> > > > > > > >
> > > > > > > > The correct fix probably requires virtio core/API extensions.
> > > > > > >
> > > > > > > If the introduction of cvq irq causes interrupts to become 
> > > > > > > shared, then
> > > > > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > > > >
> > > > > > Having two paths sounds like a burden.
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > [1] https://www.linux-kvm.org/page/Multiqueue
> > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > MST
> > > > > > > >
> > > > > > >
> > > > >
> >




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-21 Thread Xuan Zhuo
On Thu, 20 Jun 2024 06:11:40 -0400, "Michael S. Tsirkin"  
wrote:
> On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  
> > > wrote:
> > > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
> > > > >
> > > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
> > > > > wrote:
> > > > > >
> > > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > > > >  wrote:
> > > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > > > virtnet_info *vi)
> > > > > > > >
> > > > > > > > /* Parameters for control virtqueue, if any */
> > > > > > > > if (vi->has_cvq) {
> > > > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > > > names[total_vqs - 1] = "control";
> > > > > > > > }
> > > > > > > >
> > > > > > >
> > > > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > > > this will cause irq sharing between VQs which will degrade
> > > > > > > performance significantly.
> > > > > > >
> > > > >
> > > > > Why do we need to care about buggy management? I think libvirt has
> > > > > been taught to use 2N+2 since the introduction of the multiqueue[1].
> > > >
> > > > And Qemu can calculate it correctly automatically since:
> > > >
> > > > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > > > Author: Jason Wang 
> > > > Date:   Mon Mar 8 12:49:19 2021 +0800
> > > >
> > > > virtio-net: calculating proper msix vectors on init
> > > >
> > > > Currently, the default msix vectors for virtio-net-pci is 3 which is
> > > > obvious not suitable for multiqueue guest, so we depends on the user
> > > > or management tools to pass a correct vectors parameter. In fact, we
> > > > can simplifying this by calculating the number of vectors on 
> > > > realize.
> > > >
> > > > Consider we have N queues, the number of vectors needed is 2*N + 2
> > > > (#queue pairs + plus one config interrupt and control vq). We didn't
> > > > check whether or not host support control vq because it was added
> > > > unconditionally by qemu to avoid breaking legacy guests such as 
> > > > Minix.
> > > >
> > > > Reviewed-by: Philippe Mathieu-Daudé 
> > > > Reviewed-by: Stefano Garzarella 
> > > > Reviewed-by: Stefan Hajnoczi 
> > > > Signed-off-by: Jason Wang 
> > >
> > > Yes, devices designed according to the spec need to reserve an interrupt
> > > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
> > > devices?
> > >
> > > Thanks.
> >
> > These aren't buggy, the spec allows this. So don't fail, but
> > I'm fine with using polling if not enough vectors.
>
> sharing with config interrupt is easier code-wise though, FWIW -
> we don't need to maintain two code-paths.


If we do that, we should introduce a new helper, not add a new function to
find_vqs().

if ->share_irq_with_config:
->share_irq_with_config(..., vq_idx)

Thanks.


>
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > > > So no, you can not just do it unconditionally.
> > > > > > >
> > > > > > > The correct fix probably requires virtio core/API extensions.
> > > > > >
> > > > > > If the introduction of cvq irq causes interrupts to become shared, 
> > > > > > then
> > > > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > > >
> > > > > Having two paths sounds like a burden.
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > [1] https://www.linux-kvm.org/page/Multiqueue
> > > > >
> > > > > > >
> > > > > > > --
> > > > > > > MST
> > > > > > >
> > > > > >
> > > >
>



Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Heng Qi
On Thu, 20 Jun 2024 06:11:40 -0400, "Michael S. Tsirkin"  
wrote:
> On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  
> > > wrote:
> > > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
> > > > >
> > > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
> > > > > wrote:
> > > > > >
> > > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > > > >  wrote:
> > > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > > > virtnet_info *vi)
> > > > > > > >
> > > > > > > > /* Parameters for control virtqueue, if any */
> > > > > > > > if (vi->has_cvq) {
> > > > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > > > names[total_vqs - 1] = "control";
> > > > > > > > }
> > > > > > > >
> > > > > > >
> > > > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > > > this will cause irq sharing between VQs which will degrade
> > > > > > > performance significantly.
> > > > > > >
> > > > >
> > > > > Why do we need to care about buggy management? I think libvirt has
> > > > > been taught to use 2N+2 since the introduction of the multiqueue[1].
> > > > 
> > > > And Qemu can calculate it correctly automatically since:
> > > > 
> > > > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > > > Author: Jason Wang 
> > > > Date:   Mon Mar 8 12:49:19 2021 +0800
> > > > 
> > > > virtio-net: calculating proper msix vectors on init
> > > > 
> > > > Currently, the default msix vectors for virtio-net-pci is 3 which is
> > > > obvious not suitable for multiqueue guest, so we depends on the user
> > > > or management tools to pass a correct vectors parameter. In fact, we
> > > > can simplifying this by calculating the number of vectors on 
> > > > realize.
> > > > 
> > > > Consider we have N queues, the number of vectors needed is 2*N + 2
> > > > (#queue pairs + plus one config interrupt and control vq). We didn't
> > > > check whether or not host support control vq because it was added
> > > > unconditionally by qemu to avoid breaking legacy guests such as 
> > > > Minix.
> > > > 
> > > > Reviewed-by: Philippe Mathieu-Daudé 
> > > > Reviewed-by: Stefano Garzarella 
> > > > Reviewed-by: Stefan Hajnoczi 
> > > > Signed-off-by: Jason Wang 
> > > 
> > > Yes, devices designed according to the spec need to reserve an interrupt
> > > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
> > > devices?
> > > 
> > > Thanks.
> > 
> > These aren't buggy, the spec allows this. So don't fail, but
> > I'm fine with using polling if not enough vectors.
> 
> sharing with config interrupt is easier code-wise though, FWIW -
> we don't need to maintain two code-paths.

Yes, it works well - the config change irq was rarely used before - and it will not fail.

Thanks.

> 
> > > > 
> > > > Thanks
> > > > 
> > > > >
> > > > > > > So no, you can not just do it unconditionally.
> > > > > > >
> > > > > > > The correct fix probably requires virtio core/API extensions.
> > > > > >
> > > > > > If the introduction of cvq irq causes interrupts to become shared, 
> > > > > > then
> > > > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > > >
> > > > > Having two paths sounds like a burden.
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > [1] https://www.linux-kvm.org/page/Multiqueue
> > > > >
> > > > > > >
> > > > > > > --
> > > > > > > MST
> > > > > > >
> > > > > >
> > > > 
> 



Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Michael S. Tsirkin
On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  wrote:
> > > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
> > > >
> > > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  
> > > > wrote:
> > > > >
> > > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > > >  wrote:
> > > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > > virtnet_info *vi)
> > > > > > >
> > > > > > > /* Parameters for control virtqueue, if any */
> > > > > > > if (vi->has_cvq) {
> > > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > > names[total_vqs - 1] = "control";
> > > > > > > }
> > > > > > >
> > > > > >
> > > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > > this will cause irq sharing between VQs which will degrade
> > > > > > performance significantly.
> > > > > >
> > > >
> > > > Why do we need to care about buggy management? I think libvirt has
> > > > been taught to use 2N+2 since the introduction of the multiqueue[1].
> > > 
> > > And Qemu can calculate it correctly automatically since:
> > > 
> > > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > > Author: Jason Wang 
> > > Date:   Mon Mar 8 12:49:19 2021 +0800
> > > 
> > > virtio-net: calculating proper msix vectors on init
> > > 
> > > Currently, the default msix vectors for virtio-net-pci is 3 which is
> > > obvious not suitable for multiqueue guest, so we depends on the user
> > > or management tools to pass a correct vectors parameter. In fact, we
> > > can simplifying this by calculating the number of vectors on realize.
> > > 
> > > Consider we have N queues, the number of vectors needed is 2*N + 2
> > > (#queue pairs + plus one config interrupt and control vq). We didn't
> > > check whether or not host support control vq because it was added
> > > unconditionally by qemu to avoid breaking legacy guests such as Minix.
> > > 
> > > Reviewed-by: Philippe Mathieu-Daudé 
> > > Reviewed-by: Stefano Garzarella 
> > > Reviewed-by: Stefan Hajnoczi 
> > > Signed-off-by: Jason Wang 
> > 
> > Yes, devices designed according to the spec need to reserve an interrupt
> > vector for ctrlq. So, Michael, do we want to be compatible with buggy 
> > devices?
> > 
> > Thanks.
> 
> These aren't buggy, the spec allows this. So don't fail, but
> I'm fine with using polling if not enough vectors.

sharing with config interrupt is easier code-wise though, FWIW -
we don't need to maintain two code-paths.

> > > 
> > > Thanks
> > > 
> > > >
> > > > > > So no, you can not just do it unconditionally.
> > > > > >
> > > > > > The correct fix probably requires virtio core/API extensions.
> > > > >
> > > > > If the introduction of cvq irq causes interrupts to become shared, 
> > > > > then
> > > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > >
> > > > Having two paths sounds like a burden.
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > >
> > > >
> > > > Thanks
> > > >
> > > > [1] https://www.linux-kvm.org/page/Multiqueue
> > > >
> > > > > >
> > > > > > --
> > > > > > MST
> > > > > >
> > > > >
> > > 




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Michael S. Tsirkin
On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  wrote:
> > On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
> > >
> > > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  wrote:
> > > >
> > > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > > >  wrote:
> > > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct 
> > > > > > virtnet_info *vi)
> > > > > >
> > > > > > /* Parameters for control virtqueue, if any */
> > > > > > if (vi->has_cvq) {
> > > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > > names[total_vqs - 1] = "control";
> > > > > > }
> > > > > >
> > > > >
> > > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > > this will cause irq sharing between VQs which will degrade
> > > > > performance significantly.
> > > > >
> > >
> > > Why do we need to care about buggy management? I think libvirt has
> > > been taught to use 2N+2 since the introduction of the multiqueue[1].
> > 
> > And Qemu can calculate it correctly automatically since:
> > 
> > commit 51a81a2118df0c70988f00d61647da9e298483a4
> > Author: Jason Wang 
> > Date:   Mon Mar 8 12:49:19 2021 +0800
> > 
> > virtio-net: calculating proper msix vectors on init
> > 
> > Currently, the default msix vectors for virtio-net-pci is 3 which is
> > obvious not suitable for multiqueue guest, so we depends on the user
> > or management tools to pass a correct vectors parameter. In fact, we
> > can simplifying this by calculating the number of vectors on realize.
> > 
> > Consider we have N queues, the number of vectors needed is 2*N + 2
> > (#queue pairs + plus one config interrupt and control vq). We didn't
> > check whether or not host support control vq because it was added
> > unconditionally by qemu to avoid breaking legacy guests such as Minix.
> > 
> > Reviewed-by: Philippe Mathieu-Daudé 
> > Reviewed-by: Stefano Garzarella 
> > Reviewed-by: Stefan Hajnoczi 
> > Signed-off-by: Jason Wang 
> 
> Yes, devices designed according to the spec need to reserve an interrupt
> vector for ctrlq. So, Michael, do we want to be compatible with buggy devices?
> 
> Thanks.

These aren't buggy, the spec allows this. So don't fail, but
I'm fine with using polling if not enough vectors.

> > 
> > Thanks
> > 
> > >
> > > > > So no, you can not just do it unconditionally.
> > > > >
> > > > > The correct fix probably requires virtio core/API extensions.
> > > >
> > > > If the introduction of cvq irq causes interrupts to become shared, then
> > > > ctrlq needs to fall back to polling mode and keep the status quo.
> > >
> > > Having two paths sounds like a burden.
> > >
> > > >
> > > > Thanks.
> > > >
> > >
> > >
> > > Thanks
> > >
> > > [1] https://www.linux-kvm.org/page/Multiqueue
> > >
> > > > >
> > > > > --
> > > > > MST
> > > > >
> > > >
> > 




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Michael S. Tsirkin
On Thu, Jun 20, 2024 at 05:38:22PM +0800, Heng Qi wrote:
> On Thu, 20 Jun 2024 04:32:15 -0400, "Michael S. Tsirkin"  
> wrote:
> > On Thu, Jun 20, 2024 at 03:29:15PM +0800, Heng Qi wrote:
> > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > >  wrote:
> > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info 
> > > > > *vi)
> > > > >  
> > > > >   /* Parameters for control virtqueue, if any */
> > > > >   if (vi->has_cvq) {
> > > > > - callbacks[total_vqs - 1] = NULL;
> > > > > + callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > >   names[total_vqs - 1] = "control";
> > > > >   }
> > > > >  
> > > > 
> > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > this will cause irq sharing between VQs which will degrade
> > > > performance significantly.
> > > > 
> > > > So no, you can not just do it unconditionally.
> > > > 
> > > > The correct fix probably requires virtio core/API extensions.
> > > 
> > > If the introduction of cvq irq causes interrupts to become shared, then
> > > ctrlq needs to fall back to polling mode and keep the status quo.
> > > 
> > > Thanks.
> > 
> > I don't see that in the code.
> > 
> > I guess we'll need more info in find vqs about what can and what can't 
> > share irqs?
> 
> I mean we should add fallback code: for example, if allocating an
> interrupt for ctrlq fails, we should clear the callback of ctrlq.

I have no idea how you plan to do that. Interrupts are allocated in
virtio core, callbacks are enabled in drivers.

> > Sharing between ctrl vq and config irq can also be an option.
> > 
> 
> Not sure if this violates the spec. In the spec, used buffer notification and
> configuration change notification are clearly defined - ctrlq is a virtqueue
> and used buffer notification should be used.
> 
> Thanks.

It is up to the driver to choose which MSI-X vector will trigger.
Nothing says the same vector can't be reused.
Whether devices made assumptions based on current driver
behaviour is another matter.
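What "reusing" a vector for the config interrupt and the cvq would mean in practice is one handler dispatching on a read-and-clear status, much like the virtio-pci ISR register. The sketch below is a simplified model under that assumption; the struct, enum and function names are hypothetical, not the virtio-pci API.

```c
/* Illustrative model of sharing one MSI-X vector between the config-change
 * interrupt and the control vq.  The device sets bits in an ISR-like
 * status; a single handler reads it and dispatches to both sources. */

enum { EVT_CONFIG = 1 << 0, EVT_CVQ = 1 << 1 };

struct shared_irq_state {
    unsigned pending;       /* set by the "device", cleared on read */
    int config_events;      /* events observed by the config handler */
    int cvq_events;         /* events observed by the cvq handler */
};

/* Read-and-clear, like the virtio-pci ISR status register. */
unsigned read_isr(struct shared_irq_state *s)
{
    unsigned v = s->pending;
    s->pending = 0;
    return v;
}

/* One handler services both sources; this is what vector sharing means. */
void shared_irq_handler(struct shared_irq_state *s)
{
    unsigned isr = read_isr(s);

    if (isr & EVT_CONFIG)
        s->config_events++;     /* real code: virtio_config_changed() */
    if (isr & EVT_CVQ)
        s->cvq_events++;        /* real code: vring_interrupt() on cvq */
}
```

Both events raised at once are still delivered exactly once each, since the status is cleared atomically with the read in this model.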


> > 
> > 
> > 
> > > > 
> > > > -- 
> > > > MST
> > > > 
> > 
> > 




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Heng Qi
On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang  wrote:
> On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
> >
> > On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  wrote:
> > >
> > > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin" 
> > >  wrote:
> > > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info 
> > > > > *vi)
> > > > >
> > > > > /* Parameters for control virtqueue, if any */
> > > > > if (vi->has_cvq) {
> > > > > -   callbacks[total_vqs - 1] = NULL;
> > > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > > names[total_vqs - 1] = "control";
> > > > > }
> > > > >
> > > >
> > > > If the # of MSIX vectors is exactly for data path VQs,
> > > > this will cause irq sharing between VQs which will degrade
> > > > performance significantly.
> > > >
> >
> > Why do we need to care about buggy management? I think libvirt has
> > been taught to use 2N+2 since the introduction of multiqueue[1].
> 
> And Qemu can calculate it correctly automatically since:
> 
> commit 51a81a2118df0c70988f00d61647da9e298483a4
> Author: Jason Wang 
> Date:   Mon Mar 8 12:49:19 2021 +0800
> 
> virtio-net: calculating proper msix vectors on init
> 
> Currently, the default msix vectors for virtio-net-pci is 3, which is
> obviously not suitable for a multiqueue guest, so we depend on the user
> or management tools to pass a correct vectors parameter. In fact, we
> can simplify this by calculating the number of vectors on realize.
> 
> Consider we have N queues; the number of vectors needed is 2*N + 2
> (two per queue pair, plus one for the config interrupt and one for the
> control vq). We didn't check whether or not the host supports the control
> vq because it was added unconditionally by qemu to avoid breaking legacy
> guests such as Minix.
> 
> Reviewed-by: Philippe Mathieu-Daudé 
> Reviewed-by: Stefano Garzarella 
> Reviewed-by: Stefan Hajnoczi 
> Signed-off-by: Jason Wang 

Yes, devices designed according to the spec need to reserve an interrupt
vector for ctrlq. So, Michael, do we want to be compatible with buggy devices?

Thanks.

> 
> Thanks
> 
> >
> > > > So no, you can not just do it unconditionally.
> > > >
> > > > The correct fix probably requires virtio core/API extensions.
> > >
> > > If the introduction of cvq irq causes interrupts to become shared, then
> > > ctrlq needs to fall back to polling mode and keep the status quo.
> >
> > Having two paths sounds like a burden.
> >
> > >
> > > Thanks.
> > >
> >
> >
> > Thanks
> >
> > [1] https://www.linux-kvm.org/page/Multiqueue
> >
> > > >
> > > > --
> > > > MST
> > > >
> > >
> 



Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Heng Qi
On Thu, 20 Jun 2024 04:32:15 -0400, "Michael S. Tsirkin"  
wrote:
> On Thu, Jun 20, 2024 at 03:29:15PM +0800, Heng Qi wrote:
> > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"  
> > wrote:
> > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info 
> > > > *vi)
> > > >  
> > > > /* Parameters for control virtqueue, if any */
> > > > if (vi->has_cvq) {
> > > > -   callbacks[total_vqs - 1] = NULL;
> > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > names[total_vqs - 1] = "control";
> > > > }
> > > >  
> > > 
> > > If the # of MSIX vectors is exactly for data path VQs,
> > > this will cause irq sharing between VQs which will degrade
> > > performance significantly.
> > > 
> > > So no, you can not just do it unconditionally.
> > > 
> > > The correct fix probably requires virtio core/API extensions.
> > 
> > If the introduction of cvq irq causes interrupts to become shared, then
> > ctrlq needs to fall back to polling mode and keep the status quo.
> > 
> > Thanks.
> 
> I don't see that in the code.
> 
> I guess we'll need more info in find vqs about what can and what can't share 
> irqs?

I mean we should add fallback code: for example, if allocating an
interrupt for ctrlq fails, we should clear the callback of ctrlq.
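The fallback Heng Qi describes amounts to deciding, before calling find_vqs, whether the callbacks slot for the control vq gets a handler or stays NULL. A sketch of that decision, under the assumption that each data-path vq and the config interrupt each need a dedicated vector, follows; `fill_cvq_callback` and `noop_cb` are invented helpers, not driver code.

```c
/* Sketch of the proposed fallback: populate the find_vqs callbacks array,
 * but clear the cvq callback when no spare vector is left, keeping ctrlq
 * in polling mode (the status quo).  Names are illustrative only. */
typedef void (*vq_callback_t)(int vq_index);

/* Stand-in for virtnet_cvq_done(). */
void noop_cb(int vq_index)
{
    (void)vq_index;
}

void fill_cvq_callback(vq_callback_t *callbacks, int total_vqs,
                       int avail_vectors, vq_callback_t cvq_cb)
{
    /* data-path vqs plus the config interrupt each keep a dedicated
     * vector; cvq only gets a callback if one more vector remains. */
    int needed_without_cvq = (total_vqs - 1) + 1;

    if (avail_vectors > needed_without_cvq)
        callbacks[total_vqs - 1] = cvq_cb;   /* irq-driven ctrlq */
    else
        callbacks[total_vqs - 1] = 0;        /* fall back to polling */
}
```

With 4 data-path vqs plus cvq (total_vqs = 5), six available vectors enable the cvq irq, while five leave it polling.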

> Sharing between ctrl vq and config irq can also be an option.
> 

Not sure if this violates the spec. In the spec, used buffer notification and
configuration change notification are clearly defined - ctrlq is a virtqueue
and used buffer notification should be used.

Thanks.

> 
> 
> 
> > > 
> > > -- 
> > > MST
> > > 
> 
> 



Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Jason Wang
On Thu, Jun 20, 2024 at 4:32 PM Michael S. Tsirkin  wrote:
>
> On Thu, Jun 20, 2024 at 03:29:15PM +0800, Heng Qi wrote:
> > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"  
> > wrote:
> > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info 
> > > > *vi)
> > > >
> > > >   /* Parameters for control virtqueue, if any */
> > > >   if (vi->has_cvq) {
> > > > - callbacks[total_vqs - 1] = NULL;
> > > > + callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > >   names[total_vqs - 1] = "control";
> > > >   }
> > > >
> > >
> > > If the # of MSIX vectors is exactly for data path VQs,
> > > this will cause irq sharing between VQs which will degrade
> > > performance significantly.
> > >
> > > So no, you can not just do it unconditionally.
> > >
> > > The correct fix probably requires virtio core/API extensions.
> >
> > If the introduction of cvq irq causes interrupts to become shared, then
> > ctrlq needs to fall back to polling mode and keep the status quo.
> >
> > Thanks.
>
> I don't see that in the code.
>
> I guess we'll need more info in find vqs about what can and what can't share 
> irqs?
> Sharing between ctrl vq and config irq can also be an option.

One way is to allow the driver to specify a group for each virtqueue, so
the core can request an irq per group instead of per vq. I posted a
series like this about 10 years ago (but failed to find it).

It might also help for the NAPI case.
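Jason's per-group idea can be sketched in a few lines: the driver tags each vq with a group id, the core requests one vector per distinct group, and the shared handler walks every vq in the signalled group. This is a hypothetical model of the proposal, not virtio core code.

```c
/* Model of irq-per-group allocation: distinct group ids determine how
 * many vectors the core would request, and one handler per vector
 * services all vqs in that group.  All names are invented. */

#define MAX_VQS 8

struct vq_info {
    int group;          /* driver-assigned irq group */
    int kicked;         /* how often this vq's callback ran */
};

/* Count distinct groups = number of vectors the core would request. */
int irq_vectors_needed(const struct vq_info *vqs, int n)
{
    int vectors = 0;
    for (int i = 0; i < n; i++) {
        int seen = 0;
        for (int j = 0; j < i; j++)
            if (vqs[j].group == vqs[i].group)
                seen = 1;
        if (!seen)
            vectors++;
    }
    return vectors;
}

/* The handler for one vector services every vq in its group. */
void group_irq_handler(struct vq_info *vqs, int n, int group)
{
    for (int i = 0; i < n; i++)
        if (vqs[i].group == group)
            vqs[i].kicked++;    /* real code: invoke the vq callback */
}
```

For example, grouping the cvq with the config interrupt, or all tx vqs together for NAPI, reduces the vector count without sharing irqs with latency-sensitive rx vqs.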

Thanks

>
>
>
>
> > >
> > > --
> > > MST
> > >
>




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Michael S. Tsirkin
On Thu, Jun 20, 2024 at 03:29:15PM +0800, Heng Qi wrote:
> On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"  
> wrote:
> > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> > >  
> > >   /* Parameters for control virtqueue, if any */
> > >   if (vi->has_cvq) {
> > > - callbacks[total_vqs - 1] = NULL;
> > > + callbacks[total_vqs - 1] = virtnet_cvq_done;
> > >   names[total_vqs - 1] = "control";
> > >   }
> > >  
> > 
> > If the # of MSIX vectors is exactly for data path VQs,
> > this will cause irq sharing between VQs which will degrade
> > performance significantly.
> > 
> > So no, you can not just do it unconditionally.
> > 
> > The correct fix probably requires virtio core/API extensions.
> 
> If the introduction of cvq irq causes interrupts to become shared, then
> ctrlq needs to fall back to polling mode and keep the status quo.
> 
> Thanks.

I don't see that in the code.

I guess we'll need more info in find vqs about what can and what can't share 
irqs?
Sharing between ctrl vq and config irq can also be an option.
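Passing "can this vq share an irq" information into find_vqs, as suggested above, could look like a per-vq policy flag that the core consults when handing out vectors. The enum, helper and allocation policy below are hypothetical extensions, not the existing virtio API.

```c
/* Sketch of an extended find_vqs contract: exclusive vqs (data path) each
 * take a dedicated vector; share-capable vqs (cvq, possibly config) all
 * map onto one remaining vector.  Hypothetical names throughout. */
#include <stdbool.h>

enum vq_irq_policy { VQ_IRQ_EXCLUSIVE, VQ_IRQ_CAN_SHARE };

/* Assign a vector index to each vq.  Returns false only if even the
 * exclusive vqs do not fit in the available vectors. */
bool assign_vectors(const enum vq_irq_policy *policy, int n,
                    int avail, int *vector_of)
{
    int next = 0, shared;

    for (int i = 0; i < n; i++) {
        if (policy[i] == VQ_IRQ_EXCLUSIVE) {
            if (next >= avail)
                return false;
            vector_of[i] = next++;
        }
    }
    /* a fresh vector for the sharers if one is left, else reuse the last */
    shared = next < avail ? next : avail - 1;
    for (int i = 0; i < n; i++)
        if (policy[i] == VQ_IRQ_CAN_SHARE)
            vector_of[i] = shared;
    return true;
}
```

With two exclusive data vqs and a share-capable cvq, three vectors give everyone their own; with only two, the cvq piggybacks on the last one rather than degrading a data vq that declared itself exclusive.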




> > 
> > -- 
> > MST
> > 




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Jason Wang
On Thu, Jun 20, 2024 at 4:21 PM Jason Wang  wrote:
>
> On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  wrote:
> >
> > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"  
> > wrote:
> > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info 
> > > > *vi)
> > > >
> > > > /* Parameters for control virtqueue, if any */
> > > > if (vi->has_cvq) {
> > > > -   callbacks[total_vqs - 1] = NULL;
> > > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > > names[total_vqs - 1] = "control";
> > > > }
> > > >
> > >
> > > If the # of MSIX vectors is exactly for data path VQs,
> > > this will cause irq sharing between VQs which will degrade
> > > performance significantly.
> > >
>
> Why do we need to care about buggy management? I think libvirt has
> been taught to use 2N+2 since the introduction of multiqueue[1].

And Qemu can calculate it correctly automatically since:

commit 51a81a2118df0c70988f00d61647da9e298483a4
Author: Jason Wang 
Date:   Mon Mar 8 12:49:19 2021 +0800

virtio-net: calculating proper msix vectors on init

Currently, the default msix vectors for virtio-net-pci is 3, which is
obviously not suitable for a multiqueue guest, so we depend on the user
or management tools to pass a correct vectors parameter. In fact, we
can simplify this by calculating the number of vectors on realize.

Consider we have N queues; the number of vectors needed is 2*N + 2
(two per queue pair, plus one for the config interrupt and one for the
control vq). We didn't check whether or not the host supports the control
vq because it was added unconditionally by qemu to avoid breaking legacy
guests such as Minix.

Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Stefano Garzarella 
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Jason Wang 
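The 2*N + 2 arithmetic from the quoted commit message is simple enough to state as a one-line helper: two vectors per queue pair (rx + tx), plus one for the config interrupt and one for the control vq. The function name is ours for illustration, not qemu's.

```c
/* Vectors a spec-conforming virtio-net-pci device should provision:
 * 2 per queue pair (rx + tx vq) + 1 config interrupt + 1 control vq. */
int virtio_net_msix_vectors(int queue_pairs)
{
    return 2 * queue_pairs + 2;
}
```

So a single-queue device needs 4 vectors, and an 8-queue-pair device needs 18; the pre-commit default of 3 falls one short even for a single queue pair.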

Thanks

>
> > > So no, you can not just do it unconditionally.
> > >
> > > The correct fix probably requires virtio core/API extensions.
> >
> > If the introduction of cvq irq causes interrupts to become shared, then
> > ctrlq need to fall back to polling mode and keep the status quo.
>
> Having two paths sounds like a burden.
>
> >
> > Thanks.
> >
>
>
> Thanks
>
> [1] https://www.linux-kvm.org/page/Multiqueue
>
> > >
> > > --
> > > MST
> > >
> >




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Jason Wang
On Thu, Jun 20, 2024 at 3:35 PM Heng Qi  wrote:
>
> On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"  
> wrote:
> > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> > >
> > > /* Parameters for control virtqueue, if any */
> > > if (vi->has_cvq) {
> > > -   callbacks[total_vqs - 1] = NULL;
> > > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > > names[total_vqs - 1] = "control";
> > > }
> > >
> >
> > If the # of MSIX vectors is exactly for data path VQs,
> > this will cause irq sharing between VQs which will degrade
> > performance significantly.
> >

Why do we need to care about buggy management? I think libvirt has
been taught to use 2N+2 since the introduction of multiqueue[1].

> > So no, you can not just do it unconditionally.
> >
> > The correct fix probably requires virtio core/API extensions.
>
> If the introduction of cvq irq causes interrupts to become shared, then
> ctrlq needs to fall back to polling mode and keep the status quo.

Having two paths sounds like a burden.

>
> Thanks.
>


Thanks

[1] https://www.linux-kvm.org/page/Multiqueue

> >
> > --
> > MST
> >
>




Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-20 Thread Heng Qi
On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"  
wrote:
> On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >  
> > /* Parameters for control virtqueue, if any */
> > if (vi->has_cvq) {
> > -   callbacks[total_vqs - 1] = NULL;
> > +   callbacks[total_vqs - 1] = virtnet_cvq_done;
> > names[total_vqs - 1] = "control";
> > }
> >  
> 
> If the # of MSIX vectors is exactly for data path VQs,
> this will cause irq sharing between VQs which will degrade
> performance significantly.
> 
> So no, you can not just do it unconditionally.
> 
> The correct fix probably requires virtio core/API extensions.

If the introduction of cvq irq causes interrupts to become shared, then
ctrlq needs to fall back to polling mode and keep the status quo.

Thanks.

> 
> -- 
> MST
> 



Re: [PATCH net-next v4 2/5] virtio_net: enable irq for the control vq

2024-06-19 Thread Michael S. Tsirkin
On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>  
>   /* Parameters for control virtqueue, if any */
>   if (vi->has_cvq) {
> - callbacks[total_vqs - 1] = NULL;
> + callbacks[total_vqs - 1] = virtnet_cvq_done;
>   names[total_vqs - 1] = "control";
>   }
>  

If the # of MSIX vectors is exactly for data path VQs,
this will cause irq sharing between VQs which will degrade
performance significantly.

So no, you can not just do it unconditionally.

The correct fix probably requires virtio core/API extensions.

-- 
MST