Re: [PATCH v1 0/2] virtio-mmio: support multiple interrupt vectors

2019-07-30 Thread
Hi Sergio,

Since you are implementing virtio-mmio v2 in QEMU, could you please give
some suggestions on this patch series? Thanks :)

For the web archive, this link may help:
https://www.spinics.net/lists/kernel/msg3195667.html

Have a nice day
Fei

On Fri, Jul 19, 2019 at 9:31 PM Fei Li  wrote:
>
> Hi,
>
> This patch series implements multiple interrupt vector support for the
> virtio-mmio device. This is especially useful for a multiqueue vhost-net
> device when using Firecracker micro-VMs as the guest.
>
> Test result:
> With 8 vcpus & 8 net queues set, one vhost-net device with 8 irqs can
> receive over 8 times the pps compared with only one irq:
> - 564830.38 rxpck/s for 8 irqs on
> - 67665.06 rxpck/s for 1 irq on
>
> Please help to review, thanks!
>
> Have a nice day
> Fei
>
>
> Fam Zheng (1):
>   virtio-mmio: Process vrings more proactively
>
> Fei Li (1):
>   virtio-mmio: support multiple interrupt vectors
>
>  drivers/virtio/virtio_mmio.c | 238 +++
>  1 file changed, 196 insertions(+), 42 deletions(-)
>
> --
> 2.11.0
>


Re: [External Email] Re: [PATCH v1 0/2] virtio-mmio: support multiple interrupt vectors

2019-07-30 Thread
On Wed, Jul 31, 2019 at 4:26 AM Michael S. Tsirkin  wrote:
>
> On Mon, Jul 22, 2019 at 09:43:18PM +0800, 李菲 wrote:
> > On Mon, Jul 22, 2019 at 4:39 PM Michael S. Tsirkin  wrote:
> > >
> > > On Mon, Jul 22, 2019 at 11:22:02AM +0800, 李菲 wrote:
> > > > On Fri, Jul 19, 2019 at 11:14 PM Michael S. Tsirkin  wrote:
> > > > >
> > > > > On Fri, Jul 19, 2019 at 09:31:33PM +0800, Fei Li wrote:
> > > > > > Hi,
> > > > > >
> > > > > > This patch series implements multiple interrupt vector support
> > > > > > for the virtio-mmio device. This is especially useful for a
> > > > > > multiqueue vhost-net device when using Firecracker micro-VMs as
> > > > > > the guest.
> > > > > >
> > > > > > Test result:
> > > > > > With 8 vcpus & 8 net queues set, one vhost-net device with 8
> > > > > > irqs can receive over 8 times the pps compared with only one irq:
> > > > > > - 564830.38 rxpck/s for 8 irqs on
> > > > > > - 67665.06 rxpck/s for 1 irq on
> > > > > >
> > > > > > Please help to review, thanks!
> > > > > >
> > > > > > Have a nice day
> > > > > > Fei
> > > > >
> > > > >
> > > > > Interesting. The spec says though:
> > > > >
> > > > > 4.2.3.4
> > > > > Notifications From The Device
> > > > > The memory mapped virtio device is using a single, dedicated
> > > > > interrupt signal, which is asserted when at least one of the bits
> > > > > described in the description of InterruptStatus is set. This is
> > > > > how the device sends a used buffer notification or a configuration
> > > > > change notification to the device.
> > > > >
> > > > Yes, the spec needs to be updated if we want to use multi-irqs.
> > > > >
> > > > > So I'm guessing we need to change the host/guest interface?
> > > > Just to confirm: does "the host/guest interface" you mentioned mean
> > > > how to pass the irq information from the user-space tool to the
> > > > guest kernel?
> > > > In this patch, we do this by passing an [irq_start, irq_end] range
> > > > on the guest kernel command line; that is parsed in vm_cmdline_set().
> > > > There is also another way to do it: add two new registers describing
> > > > the irq info (irq_start & irq_end, OR irq_start & irq_numbers) to
> > > > the virtio config space.
> > > >
> > > > Which one do you prefer?
> > >
> > > I'm not sure - so far irq was passed on the command line, right?
> > Yes.
> > >
> > > The first step in implementing any spec change would be to update the
> > > qemu code to virtio 1, which is not a huge project, but so far no one
> > > has bothered.
> > Hmm, actually I only tested by using firecracker to start a micro-vm,
> > without qemu.
> > To be honest, the main reason for implementing multi-irq on virtio-mmio
> > is that the current firecracker uses the virtio-mmio device and has no
> > pci support, thus no msi/msix to handle the interrupts.
> > On the other hand, considering pci is well supported in qemu, I am
> > wondering whether we still need this. If needed, we would like to do
> > this. :)
> >
> > Have a nice day, thanks
> > Fei
>
>
> Sergio Lopez is now working on updating the mmio code to v1 support in qemu.
> Maybe get in touch with him to hear how he looks at this extension.
Thanks for the info! :)

Hi Sergio Lopez,
I saw your [virtio-mmio: modern (v2)] patch series on the QEMU mailing
list; thanks for moving this forward.
Long story short, these two kernel patches add multi-irq support to the
virtio-mmio driver.
As this involves a spec change and you are now implementing virtio-mmio
v2, could you give some suggestions on this extension?
I will cc you on the original patch soon, thanks.
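
(Roughly, the driver change means requesting one vector per virtqueue
instead of demultiplexing VIRTIO_MMIO_INTERRUPT_STATUS in one shared
handler. A minimal sketch -- "irq_base" stands in for whatever the posted
patch calls the first per-device irq:)

    /* Sketch: one irq per virtqueue, each firing vring_interrupt()
     * directly, instead of one shared status-demultiplexing handler. */
    for (i = 0; i < nvqs; i++) {
            err = request_irq(vm_dev->irq_base + i, vring_interrupt, 0,
                              dev_name(&vdev->dev), vqs[i]);
            if (err)
                    goto fail;
    }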

> Not asking you to work on qemu, but it makes sense
> to get an ack for guest bits from another popular hypervisor.
I agree, absolutely right. And I once worked on QEMU development;
hopefully the combined background can help move this multi-irq feature
forward. :)


Have a nice day, many thanks
Fei
>
>
> > >
> > >
> > > > > If true pls cc virtio-dev.
> > > > Sure.
> > > > >
> > > > > Also, do we need to update dt bindings documentation?
> > > > You mean the following doc? Sure. :)
> > > > https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/virtio/mmio.txt
> > > >
> > > > Thanks for the review!
> > > >
> > > > Have a nice day
> > > > Fei
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > Fam Zheng (1):
> > > > > >   virtio-mmio: Process vrings more proactively
> > > > > >
> > > > > > Fei Li (1):
> > > > > >   virtio-mmio: support multiple interrupt vectors
> > > > > >
> > > > > >  drivers/virtio/virtio_mmio.c | 238 +++
> > > > > >  1 file changed, 196 insertions(+), 42 deletions(-)
> > > > > >
> > > > > > --
> > > > > > 2.11.0


Re: [External Email] Re: [PATCH v1 0/2] virtio-mmio: support multiple interrupt vectors

2019-07-22 Thread
On Mon, Jul 22, 2019 at 4:39 PM Michael S. Tsirkin  wrote:
>
> On Mon, Jul 22, 2019 at 11:22:02AM +0800, 李菲 wrote:
> > On Fri, Jul 19, 2019 at 11:14 PM Michael S. Tsirkin  wrote:
> > >
> > > On Fri, Jul 19, 2019 at 09:31:33PM +0800, Fei Li wrote:
> > > > Hi,
> > > >
> > > > This patch series implements multiple interrupt vector support for
> > > > the virtio-mmio device. This is especially useful for a multiqueue
> > > > vhost-net device when using Firecracker micro-VMs as the guest.
> > > >
> > > > Test result:
> > > > With 8 vcpus & 8 net queues set, one vhost-net device with 8 irqs
> > > > can receive over 8 times the pps compared with only one irq:
> > > > - 564830.38 rxpck/s for 8 irqs on
> > > > - 67665.06 rxpck/s for 1 irq on
> > > >
> > > > Please help to review, thanks!
> > > >
> > > > Have a nice day
> > > > Fei
> > >
> > >
> > > Interesting. The spec says though:
> > >
> > > 4.2.3.4
> > > Notifications From The Device
> > > The memory mapped virtio device is using a single, dedicated
> > > interrupt signal, which is asserted when at least one of the bits
> > > described in the description of InterruptStatus is set. This is how
> > > the device sends a used buffer notification or a configuration change
> > > notification to the device.
> > >
> > Yes, the spec needs to be updated if we want to use multi-irqs.
> > >
> > > So I'm guessing we need to change the host/guest interface?
> > Just to confirm: does "the host/guest interface" you mentioned mean how
> > to pass the irq information from the user-space tool to the guest kernel?
> > In this patch, we do this by passing an [irq_start, irq_end] range on
> > the guest kernel command line; that is parsed in vm_cmdline_set().
> > There is also another way to do it: add two new registers describing
> > the irq info (irq_start & irq_end, OR irq_start & irq_numbers) to the
> > virtio config space.
> >
> > Which one do you prefer?
>
> I'm not sure - so far irq was passed on the command line, right?
Yes.
>
> The first step in implementing any spec change would be to update the
> qemu code to virtio 1, which is not a huge project, but so far no one
> has bothered.
Hmm, actually I only tested by using firecracker to start a micro-vm,
without qemu.
To be honest, the main reason for implementing multi-irq on virtio-mmio
is that the current firecracker uses the virtio-mmio device and has no
pci support, thus no msi/msix to handle the interrupts.
On the other hand, considering pci is well supported in qemu, I am
wondering whether we still need this. If needed, we would like to do
this. :)
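
(For comparison, virtio-pci already gets this from MSI-X -- roughly what
vp_find_vqs_msix() in drivers/virtio/virtio_pci_common.c does, simplified:)

    /* virtio-pci: one MSI-X vector per virtqueue, wired up at find_vqs
     * time. virtio-mmio has no equivalent without a spec extension. */
    err = request_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec),
                      vring_interrupt, 0,
                      vp_dev->msix_names[msix_vec], vq);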

Have a nice day, thanks
Fei
>
>
> > > If true pls cc virtio-dev.
> > Sure.
> > >
> > > Also, do we need to update dt bindings documentation?
> > You mean the following doc? Sure. :)
> > https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/virtio/mmio.txt
> >
> > Thanks for the review!
> >
> > Have a nice day
> > Fei
> >
> >
> > >
> > > >
> > > > Fam Zheng (1):
> > > >   virtio-mmio: Process vrings more proactively
> > > >
> > > > Fei Li (1):
> > > >   virtio-mmio: support multiple interrupt vectors
> > > >
> > > >  drivers/virtio/virtio_mmio.c | 238 +++
> > > >  1 file changed, 196 insertions(+), 42 deletions(-)
> > > >
> > > > --
> > > > 2.11.0


Re: [External Email] Re: [PATCH v1 0/2] virtio-mmio: support multiple interrupt vectors

2019-07-21 Thread
On Fri, Jul 19, 2019 at 11:14 PM Michael S. Tsirkin  wrote:
>
> On Fri, Jul 19, 2019 at 09:31:33PM +0800, Fei Li wrote:
> > Hi,
> >
> > This patch series implements multiple interrupt vector support for the
> > virtio-mmio device. This is especially useful for a multiqueue vhost-net
> > device when using Firecracker micro-VMs as the guest.
> >
> > Test result:
> > With 8 vcpus & 8 net queues set, one vhost-net device with 8 irqs can
> > receive over 8 times the pps compared with only one irq:
> > - 564830.38 rxpck/s for 8 irqs on
> > - 67665.06 rxpck/s for 1 irq on
> >
> > Please help to review, thanks!
> >
> > Have a nice day
> > Fei
>
>
> Interesting. The spec says though:
>
> 4.2.3.4
> Notifications From The Device
> The memory mapped virtio device is using a single, dedicated interrupt
> signal, which is asserted when at least one of the bits described in the
> description of InterruptStatus is set. This is how the device sends a
> used buffer notification or a configuration change notification to the
> device.
>
Yes, the spec needs to be updated if we want to use multi-irqs.
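
(For context: the current driver demultiplexes everything from that single
interrupt line. Simplified from vm_interrupt() in
drivers/virtio/virtio_mmio.c, locking omitted:)

    static irqreturn_t vm_interrupt(int irq, void *opaque)
    {
            struct virtio_mmio_device *vm_dev = opaque;
            struct virtio_mmio_vq_info *info;
            unsigned long status;
            irqreturn_t ret = IRQ_NONE;

            /* Read and acknowledge the shared status bits. */
            status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
            writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

            if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)) {
                    virtio_config_changed(&vm_dev->vdev);
                    ret = IRQ_HANDLED;
            }

            /* One irq for all rings: poke every virtqueue, since we
             * cannot tell which one fired -- the overhead this series
             * tries to avoid. */
            if (likely(status & VIRTIO_MMIO_INT_VRING)) {
                    list_for_each_entry(info, &vm_dev->virtqueues, node)
                            ret |= vring_interrupt(irq, info->vq);
            }

            return ret;
    }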
>
> So I'm guessing we need to change the host/guest interface?
Just to confirm: does "the host/guest interface" you mentioned mean how to
pass the irq information from the user-space tool to the guest kernel?
In this patch, we do this by passing an [irq_start, irq_end] range on the
guest kernel command line; that is parsed in vm_cmdline_set().
There is also another way to do it: add two new registers describing the
irq info (irq_start & irq_end, OR irq_start & irq_numbers) to the virtio
config space.

Which one do you prefer?
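
(To make the two options concrete -- the extended command-line syntax and
the register names/offsets below are only an illustration, not a final
proposal:)

    /*
     * Option 1: extend the existing command-line format
     *     virtio_mmio.device=<size>@<baseaddr>:<irq>[:<id>]
     * parsed by vm_cmdline_set(), so that it can carry a range, e.g.
     *     virtio_mmio.device=<size>@<baseaddr>:<irq_start>-<irq_end>[:<id>]
     *
     * Option 2: let the device describe the range itself via two new
     * (hypothetical) registers in the currently reserved MMIO area:
     */
    #define VIRTIO_MMIO_IRQ_START   0x0d0  /* first irq used by the device */
    #define VIRTIO_MMIO_IRQ_NUM     0x0d4  /* number of consecutive irqs */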

> If true pls cc virtio-dev.
Sure.
>
> Also, do we need to update dt bindings documentation?
You mean the following doc? Sure. :)
https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/virtio/mmio.txt

Thanks for the review!

Have a nice day
Fei


>
> >
> > Fam Zheng (1):
> >   virtio-mmio: Process vrings more proactively
> >
> > Fei Li (1):
> >   virtio-mmio: support multiple interrupt vectors
> >
> >  drivers/virtio/virtio_mmio.c | 238 +++
> >  1 file changed, 196 insertions(+), 42 deletions(-)
> >
> > --
> > 2.11.0


Re: [External Email] Re: [PATCH net v2] tun: wake up waitqueues after IFF_UP is set

2019-06-19 Thread
Thanks.

Have a nice day
Fei

On Wed, Jun 19, 2019 at 1:55 AM David Miller  wrote:
>
> From: Fei Li 
> Date: Mon, 17 Jun 2019 21:26:36 +0800
>
> > Currently, after setting the tap0 link up, the tun code wakes the tx/rx
> > wait queues up in tun_net_open() when .ndo_open() is called; however,
> > the IFF_UP flag has not been set yet. If there is already a wait queue,
> > the transmit fails when checking the IFF_UP flag in tun_sendmsg(). Then
> > vhost_poll_start() adds the wq into wqh, where it stays until it is
> > woken up again. Although this works if the IFF_UP flag has already been
> > set when tun_chr_poll() checks, it does not if the flag has not been
> > set by that time. Sadly the latter case is a fatal error, as the wq
> > will never be woken up unless the link is later set up again manually
> > on purpose.
> >
> > Fix this by moving the wakeup into the NETDEV_UP event notifier; this
> > makes sure IFF_UP has been set before any waiting queues are woken up.
> >
> > Signed-off-by: Fei Li 
> > Acked-by: Jason Wang 
>
> Applied and queued up for -stable, thanks.


Re: [External Email] Re: [PATCH] Fix tun: wake up waitqueues after IFF_UP is set

2019-06-17 Thread
Hi all,

On Mon, Jun 17, 2019 at 5:32 PM 李菲  wrote:
>
> Ok, thanks for the detailed suggestion! :)
>
> On Mon, Jun 17, 2019 at 4:16 PM Jason Wang  wrote:
>>
>>
>> > On 2019/6/17 4:10 PM, Jason Wang wrote:
>> >
>> >> On 2019/6/17 3:33 PM, Fei Li wrote:
>> >> Currently, after setting the tap0 link up, the tun code wakes the
>> >> tx/rx wait queues up in tun_net_open() when .ndo_open() is called;
>> >> however, the IFF_UP flag has not been set yet. If there is already a
>> >> wait queue, the transmit fails when checking the IFF_UP flag in
>> >> tun_sendmsg(). Then vhost_poll_start() adds the wq into wqh, where it
>> >> stays until it is woken up again. Although this works if the IFF_UP
>> >> flag has already been set when tun_chr_poll() checks, it does not if
>> >> the flag has not been set by that time. Sadly the latter case is a
>> >> fatal error, as the wq will never be woken up unless the link is
>> >> later set up again manually on purpose.
>> >>
>> >> Fix this by moving the wakeup into the NETDEV_UP event notifier; this
>> >> makes sure IFF_UP has been set before any waiting queues are woken up.
>>
>>
>> Btw, the title needs some tweaking. E.g. you need to use "net" as the
>> prefix since it's a fix for net.git, and "Fix" can be dropped, like:
>>
>> [PATCH net] tun: wake up waitqueues after IFF_UP is set.
>>
>> Thanks
>>
>>
>> >>
>> >> Signed-off-by: Fei Li 
>> >> ---
>> >>   drivers/net/tun.c | 17 +
>> >>   1 file changed, 9 insertions(+), 8 deletions(-)
>> >>
>> >> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
>> >> index c452d6d831dd..a3c9cab5a4d0 100644
>> >> --- a/drivers/net/tun.c
>> >> +++ b/drivers/net/tun.c
>> >> @@ -1015,17 +1015,9 @@ static void tun_net_uninit(struct net_device *dev)
>> >>  static int tun_net_open(struct net_device *dev)
>> >>  {
>> >>          struct tun_struct *tun = netdev_priv(dev);
Will remove the above now-unused local variable in the next version.
Sorry for the oversight!

Have a nice day, thanks
Fei


>> >> -        int i;
>> >>
>> >>          netif_tx_start_all_queues(dev);
>> >>
>> >> -        for (i = 0; i < tun->numqueues; i++) {
>> >> -                struct tun_file *tfile;
>> >> -
>> >> -                tfile = rtnl_dereference(tun->tfiles[i]);
>> >> -                tfile->socket.sk->sk_write_space(tfile->socket.sk);
>> >> -        }
>> >> -
>> >>          return 0;
>> >>  }
>> >> @@ -3634,6 +3626,7 @@ static int tun_device_event(struct notifier_block *unused,
>> >>  {
>> >>          struct net_device *dev = netdev_notifier_info_to_dev(ptr);
>> >>          struct tun_struct *tun = netdev_priv(dev);
>> >> +        int i;
>> >>
>> >>          if (dev->rtnl_link_ops != &tun_link_ops)
>> >>                  return NOTIFY_DONE;
>> >> @@ -3643,6 +3636,14 @@ static int tun_device_event(struct notifier_block *unused,
>> >>          if (tun_queue_resize(tun))
>> >>                  return NOTIFY_BAD;
>> >>          break;
>> >> +        case NETDEV_UP:
>> >> +                for (i = 0; i < tun->numqueues; i++) {
>> >> +                        struct tun_file *tfile;
>> >> +
>> >> +                        tfile = rtnl_dereference(tun->tfiles[i]);
>> >> +                        tfile->socket.sk->sk_write_space(tfile->socket.sk);
>> >> +                }
>> >> +                break;
>> >>          default:
>> >>                  break;
>> >>          }
>> >
>> >
>> > Acked-by: Jason Wang


Re: [External Email] Re: [PATCH] Fix tun: wake up waitqueues after IFF_UP is set

2019-06-17 Thread
Ok, thanks for the detailed suggestion! :)


On Mon, Jun 17, 2019 at 4:16 PM Jason Wang  wrote:
>
>
> > On 2019/6/17 4:10 PM, Jason Wang wrote:
> >
> > On 2019/6/17 3:33 PM, Fei Li wrote:
> >> Currently, after setting the tap0 link up, the tun code wakes the
> >> tx/rx wait queues up in tun_net_open() when .ndo_open() is called;
> >> however, the IFF_UP flag has not been set yet. If there is already a
> >> wait queue, the transmit fails when checking the IFF_UP flag in
> >> tun_sendmsg(). Then vhost_poll_start() adds the wq into wqh, where it
> >> stays until it is woken up again. Although this works if the IFF_UP
> >> flag has already been set when tun_chr_poll() checks, it does not if
> >> the flag has not been set by that time. Sadly the latter case is a
> >> fatal error, as the wq will never be woken up unless the link is
> >> later set up again manually on purpose.
> >>
> >> Fix this by moving the wakeup into the NETDEV_UP event notifier; this
> >> makes sure IFF_UP has been set before any waiting queues are woken up.
>
>
> Btw, the title needs some tweaking. E.g. you need to use "net" as the
> prefix since it's a fix for net.git, and "Fix" can be dropped, like:
>
> [PATCH net] tun: wake up waitqueues after IFF_UP is set.
>
> Thanks
>
>
> >>
> >> Signed-off-by: Fei Li 
> >> ---
> >>   drivers/net/tun.c | 17 +
> >>   1 file changed, 9 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> >> index c452d6d831dd..a3c9cab5a4d0 100644
> >> --- a/drivers/net/tun.c
> >> +++ b/drivers/net/tun.c
> >> @@ -1015,17 +1015,9 @@ static void tun_net_uninit(struct net_device *dev)
> >>  static int tun_net_open(struct net_device *dev)
> >>  {
> >>          struct tun_struct *tun = netdev_priv(dev);
> >> -        int i;
> >>
> >>          netif_tx_start_all_queues(dev);
> >>
> >> -        for (i = 0; i < tun->numqueues; i++) {
> >> -                struct tun_file *tfile;
> >> -
> >> -                tfile = rtnl_dereference(tun->tfiles[i]);
> >> -                tfile->socket.sk->sk_write_space(tfile->socket.sk);
> >> -        }
> >> -
> >>          return 0;
> >>  }
> >> @@ -3634,6 +3626,7 @@ static int tun_device_event(struct notifier_block *unused,
> >>  {
> >>          struct net_device *dev = netdev_notifier_info_to_dev(ptr);
> >>          struct tun_struct *tun = netdev_priv(dev);
> >> +        int i;
> >>
> >>          if (dev->rtnl_link_ops != &tun_link_ops)
> >>                  return NOTIFY_DONE;
> >> @@ -3643,6 +3636,14 @@ static int tun_device_event(struct notifier_block *unused,
> >>          if (tun_queue_resize(tun))
> >>                  return NOTIFY_BAD;
> >>          break;
> >> +        case NETDEV_UP:
> >> +                for (i = 0; i < tun->numqueues; i++) {
> >> +                        struct tun_file *tfile;
> >> +
> >> +                        tfile = rtnl_dereference(tun->tfiles[i]);
> >> +                        tfile->socket.sk->sk_write_space(tfile->socket.sk);
> >> +                }
> >> +                break;
> >>          default:
> >>                  break;
> >>          }
> >
> >
> > Acked-by: Jason Wang