Hi Todd and Don,

Thanks for your reply.

As for virtualization, every interrupt delivered to the VM through a
passthrough VF driver incurs two VM exits: one when the external MSI/MSI-X
interrupt arrives, and another when the VM signals interrupt completion by
writing to the EOI (End-of-Interrupt) register. Under high interrupt rates,
the VM keeps exiting to the hypervisor for these two events, resulting in
lower performance.

As for busy poll sockets, I took a look at Intel's low-latency socket
polling:
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/open-source-kernel-enhancements-paper.pdf
I think it's useful for reducing latency; however, I'm not sure how many
interrupts this design generates under an intensive I/O workload.

John pointed out that Intel's DPDK has implemented a similar idea in its PMD
(polling mode driver):
http://dpdk.org/browse/dpdk/tree/lib/librte_pmd_ixgbe
However, as far as I know, existing applications do not work with DPDK's
ixgbe driver as-is; they must be rewritten to use DPDK's library.


Regards,
William


On Mon, Feb 10, 2014 at 11:17 PM, Skidmore, Donald C <
[email protected]> wrote:

> I'm not sure exactly what you are trying to show for this class, but if you
> want to run the driver in polling mode you should try a busy poll socket,
> which will pretty much give you what you're asking for.
>
> Thanks,
> -Don
>
> > -----Original Message-----
> > From: William Tu [mailto:[email protected]]
> > Sent: Monday, February 10, 2014 12:35 AM
> > To: [email protected]
> > Subject: [E1000-devel] A fully polling mode ixgbevf driver
> >
> > Hi,
> >
> > I'm a student at Stony Brook University. I'm thinking about modifying the
> > ixgbevf driver so that it can work in a fully polling mode. That is, when
> > the packet receive rate is higher than a threshold, the ixgbevf driver
> > would disable interrupts for a period of time.
> >
> > A few observations motivate the idea:
> > 1. Even with Linux's NAPI, under my 8-9 Gb/s iperf experiment, the
> > interrupt rate is still very high, around 80k interrupts per second.
> > 2. Under KVM, every interrupt delivered by ixgbevf triggers at least two
> > VM exits, which incurs high overhead under I/O-intensive workloads.
> >
> > Given these two facts, I plan to modify the ixgbevf driver to support:
> > 1. A packet receive threshold: when the receive rate exceeds this
> > threshold, the driver switches to polling mode.
> > 2. A configurable polling rate: while the driver is in polling mode, the
> > polling rate determines how frequently the Linux network stack retrieves
> > packets.
> >
> > Is this worth doing, and does the idea make sense? Or how can I leverage
> > existing code and kernel features to support this?
> >
> > Thank you! Any comments are appreciated.
> > Regards,
> > William
>