Regards
_Sugesh
> -----Original Message-----
> From: Jan Scheurich [mailto:[email protected]]
> Sent: Friday, October 13, 2017 11:57 PM
> To: Chandran, Sugesh <[email protected]>
> Cc: [email protected]
> Subject: RE: [ovs-dev] [PATCH] vhost: Expose virtio interrupt requirement
> on rte_vhost API
>
> Hi Sugesh,
>
> Actually the new API call in DPDK is not needed. A reply by Zhiyong Yang
> (http://dpdk.org/ml/archives/dev/2017-September/076504.html) pointed out
> that an existing API call provides access to the vring data structure
> that contains the interrupt flag. So I will abandon the DPDK patch.

[Sugesh] Sure. That makes sense.

> Using the existing API I have created a patch on top of Ilya's output
> batching v4 series that automatically enables time-based batching on
> ports that should benefit from it most: vhostuser(client) ports using
> virtio interrupts as well as internal ports on the Linux (or BSD) host.
>
> I still need to do careful testing that the interrupt detection works
> reliably. The performance should be the baseline performance of Ilya's
> patch.

[Sugesh] Makes sense. I would like to see the performance improvement
offered by this. Will surely have a look at the patch when you release it
in the ML.

> BR, Jan
>
> > -----Original Message-----
> > From: [email protected]
> > [mailto:[email protected]] On Behalf Of Chandran, Sugesh
> > Sent: Friday, 13 October, 2017 17:34
> > To: Jan Scheurich <[email protected]>
> > Cc: [email protected]
> > Subject: Re: [ovs-dev] [PATCH] vhost: Expose virtio interrupt
> > requirement on rte_vhost API
> >
> > Hi Jan,
> > The DPDK changes look OK to me and will be useful. I am interested in
> > testing this patch to see the impact on performance. Are you planning
> > to share the OVS changes for these APIs?
> > Performance tests with the OVS DPDK datapath have shown that the tx
> > throughput over a vhostuser port into a VM with an interrupt-based
> > virtio driver is limited by the overhead incurred by virtio
> > interrupts. The OVS PMD spends up to 30% of its cycles in system
> > calls kicking the eventfd. Also the core running the vCPU is heavily
> > loaded with generating the virtio interrupts in KVM on the host and
> > handling these interrupts in the virtio-net driver in the guest. This
> > limits the throughput to about 500-700 Kpps with a single vCPU.
> >
> > OVS is trying to address this issue by batching packets to a
> > vhostuser port for some time to limit the virtio interrupt frequency.
> > With a 50 us batching period we have measured an iperf3 throughput
> > increase of 15% and a PMD utilization decrease from 45% to 30%.
> >
> > On the other hand, guests using virtio PMDs do not profit from
> > time-based tx batching. Instead they experience a 2-3% performance
> > penalty and an average latency increase of 30-40 us. OVS therefore
> > intends to apply time-based tx batching only for vhostuser tx queues
> > that need to trigger virtio interrupts.
> >
> > Today this information is hidden inside the rte_vhost library and not
> > accessible to users of the API. This patch adds a function to the API
> > to query it.
> > Signed-off-by: Jan Scheurich <[email protected]>
> >
> > ---
> >
> >  lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
> >  lib/librte_vhost/vhost.c     | 19 +++++++++++++++++++
> >  2 files changed, 31 insertions(+)
> >
> > diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
> > index 8c974eb..d62338b 100644
> > --- a/lib/librte_vhost/rte_vhost.h
> > +++ b/lib/librte_vhost/rte_vhost.h
> > @@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
> >   */
> >  uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
> >
> > +/**
> > + * Does the virtio driver request interrupts for a vhost tx queue?
> > + *
> > + * @param vid
> > + *  vhost device ID
> > + * @param qid
> > + *  virtio queue index in mq case
> > + * @return
> > + *  1 if true, 0 if false
> > + */
> > +int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> > index 0b6aa1c..bd1ebf9 100644
> > --- a/lib/librte_vhost/vhost.c
> > +++ b/lib/librte_vhost/vhost.c
> > @@ -503,3 +503,22 @@ struct virtio_net *
> >
> >  	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
> >  }
> > +
> > +int
> > +rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
> > +{
> > +	struct virtio_net *dev;
> > +	struct vhost_virtqueue *vq;
> > +
> > +	dev = get_device(vid);
> > +	if (dev == NULL)
> > +		return 0;
> > +
> > +	vq = dev->virtqueue[qid];
> > +	if (vq == NULL)
> > +		return 0;
> > +
> > +	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
> > +		return 0;
> > +
> > +	return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
> > +}
> >
> > _______________________________________________
> > dev mailing list
> > [email protected]
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
