Thanks much for the reply. With the InterruptThrottleRate=0 setting I still
get 40us latency. The latency to the remote host, as I mentioned, is 30us.
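For reference, disabling the throttle on the out-of-tree VF driver looks roughly like this (a sketch; the comma-separated per-interface value list is an assumption, check `modinfo ixgbevf` for the exact parameter semantics on your driver version):

```shell
# Reload the out-of-tree VF driver with interrupt throttling disabled.
# One value per ixgbevf interface, in probe order (assumed semantics).
modprobe -r ixgbevf
modprobe ixgbevf InterruptThrottleRate=0,0
```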

Is it expected that the DMA transfers and interrupts for communication
between two VFs on the same host, via the internal switch, add 10+ us of
latency overhead?

I was also wondering what expectations I should set for these measurements.
Is 40us the lowest I can go for inter-VM latency using the internal switch
on the NIC?
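As a sanity check on my numbers: the latencies above follow directly from the netperf TCP_RR transaction rates quoted below, since each transaction is one request/response round trip and one-way latency is therefore roughly 1/(2 x rate):

```shell
# Approximate one-way latency from a netperf TCP_RR transaction rate:
# one transaction = one full round trip, so one-way ~= 1 / (2 * rate).
for rate in 16618.24 17057.09 9971.04; do
    awk -v r="$rate" \
        'BEGIN { printf "%9.2f trans/s  ->  ~%.1f us one-way\n", r, 1e6 / (2 * r) }'
done
```

which reproduces the ~30us remote-host figures and the ~50us inter-VM figure.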

On Tue, Oct 23, 2012 at 9:14 AM, Greg Rose <[email protected]> wrote:

> On Mon, 22 Oct 2012 21:26:27 -0700
> Radhika Niranjan <[email protected]> wrote:
>
> > Hi,
> >
> > I've been using the X520-DA2 NIC and am using the latest ixgbe
> > (3.11.33) and ixgbevf (2.7.12) drivers. I have two VMs on the same
> > host, using 1 VF each. Both the VFs are on the same PF, and I'd
> > expect that the internal switch on the X520-DA2 is used for
> > communications between these two VMs.
>
> Yes, so long as they're on the same subnet.
>
> >
> > When I measure the latency that I am seeing for communications from
> > these VMs to a remote host, the latencies I see are around 30us.
> >
> > But when I measure the latency for communication between these two
> > VMs on the same host, I see latencies of up to 50 us. I have tried
> > older versions of ixgbe and ixgbevf and have never seen lower than
> > 50us for communication between two VMs using VFs on the same host.
> > I've pasted the measurements below for reference.
> >
> > I would expect to see much lower latencies for inter-VM traffic that's
> > using the NIC's internal switch. I was wondering if people on this
> > list have encountered a similar issue, and if someone has suggestions
> > on how I could get better inter-VM latencies with the X520-DA2 NIC.
>
> It wouldn't be unexpected for the latencies between a VF and an
> external test client and between two VFs to differ.  The PCIe data
> transfers and interrupt generation for two VFs talking to each other
> are different than in the case of a single VF and an external client.
> We are, after all, looking at a single physical device even though we
> have created virtual devices.
>
> That said, I suggest you experiment with the interrupt throttle rate,
> since you're using the out-of-tree VF driver.  There is a module
> parameter you can use to set the interrupt throttle rate; modinfo
> should give you the details.  You can try different values and see if
> you get results more in line with your expectations and/or requirements.
>
> - Greg
>
> >
> > Please let me know what specific information I can give to help debug
> > this issue. Also, I am a newbie here, so please let me know if there
> > are other lists I should be sending this query to.
> >
> > Thanks much in advance.
> >
> > Radhika
> > PS: measurements follow:
> >
> > I am using netperf, with netserver running on the two VMs.
> >
> > *Latency to the remote host: (~30us)*
> >
> > *VM1 to remote host:*
> > netperf -H 10.0.85.7 -i 30,3 -t TCP_RR
> > MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0
> > AF_INET to 10.0.85.7 () port 0 AF_INET : +/-2.500% @ 99% conf.  :
> > first burst 0 Local /Remote
> > Socket Size   Request  Resp.   Elapsed  Trans.
> > Send   Recv   Size     Size    Time     Rate
> > bytes  Bytes  bytes    bytes   secs.    per sec
> >
> > 16384  87380  1        1       10.00    16618.24
> > 16384  87380
> >
> > *VM2 to remote host:*
> > netperf -H 10.0.85.5 -i 30,3 -t TCP_RR
> > MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0
> > AF_INET to 10.0.85.5 () port 0 AF_INET : +/-2.500% @ 99% conf.  :
> > first burst 0 Local /Remote
> > Socket Size   Request  Resp.   Elapsed  Trans.
> > Send   Recv   Size     Size    Time     Rate
> > bytes  Bytes  bytes    bytes   secs.    per sec
> >
> > 16384  87380  1        1       10.00    17057.09
> > 16384  87380
> >
> > *Inter VM latencies: (VM1-->VM2): (~50us)*
> > netperf -H 10.0.85.5 -i 30,3 -t TCP_RR
> > MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0
> > AF_INET to 10.0.85.5 () port 0 AF_INET : +/-2.500% @ 99% conf.  :
> > first burst 0 Local /Remote
> > Socket Size   Request  Resp.   Elapsed  Trans.
> > Send   Recv   Size     Size    Time     Rate
> > bytes  Bytes  bytes    bytes   secs.    per sec
> >
> > 16384  87380  1        1       10.00    9971.04
> > 16384  87380
> >
>
_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit
http://communities.intel.com/community/wired
