I'll ask the Lanconf team about the numbers being half for the 82580 device.  
I'll also have the docs team look at the datasheet values you point to for 
RNBC.  It's not being explained correctly.

Cheers,
John

From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Wednesday, November 02, 2011 4:51 AM
To: Ronciak, John
Cc: <ricardo.riba...@gmail.com>; e1000-de...@lists.sf.net
Subject: Re: [E1000-devel] Intel GigE NIC - Missing lots of packets

Hi John,

See my comments in-line below.

Thanks

Alex


Ronciak, John wrote:
I'll pass this along to the Lanconf team but I have some comments in-line below.

Cheers,
John

From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Tuesday, November 01, 2011 2:24 PM
To: Ronciak, John
Cc: <ricardo.riba...@gmail.com>; e1000-de...@lists.sf.net
Subject: Re: [E1000-devel] Intel GigE NIC - Missing lots of packets

Hi John,

Regarding Lanconf, I have four issues/questions:

1) I want to point them to the bandwidth calculation/display for the 82580.  
For example, the 82580 report ~1020Mbps and the partner (LOM) reports 
2046Mbps....  It seems that all the bandwidth values are half of what they 
should be on the 82580.
It's reporting both send and receive numbers aggregated.  This is a 1 gigabit 
(1000Mbit) part, so the only way to get above that is an aggregated number.

Sorry, I may not have explained the problem properly...  That value was only an 
example.  Here are the details:

On the 82580, running Lanconf
[Transmit and Receive Bandwidth]
  Transmit    0511 Mbps   <--- Why??
  Receive     0511 Mbps   <--- Why??
  Total       1022 Mbps   <--- Total of the RX and TX, ok

On the LOM, running Lanconf
[Transmit and Receive Bandwidth]
  Transmit    1022 Mbps
  Receive     1024 Mbps
  Total       2046 Mbps

Those two systems are linked together... so why is the 82580 reporting half 
of the TX and RX bandwidth compared to the LOM?
BTW, doing the same test between the T60p and the LOM, both computers display 
the same values (1022 Mbps TX, 1024 Mbps RX, 2046 Mbps total).



2) Is there a way to force a NIC to output 100Mbps IDLE pattern, even when 
there is no link partner?
According to Intel's PHY test compliance document, one required test equipment 
is:
"Second PC with Ethernet network interface card (NIC) that can be forced to 
transmit 100BASE-TX scrambled idle signals"
No link is needed to output the signal.  Lanconf is not intended for this type 
of use.  More expensive Ethernet test equipment would be needed for this.


OK, thanks




How can I use Lanconf to provide such a pattern?  Do all Intel NICs support this?
A search of the Lanconf User Manual PDF for the keyword "idle" does not yield 
any results.


3) Can I expect zero packets dropped or missed (Mpc/Rnbc) with the following 
setup?
Two desktop computers, Intel Pentium 4 ~1.6GHz+   (we don't have access to real 
server-class PCs)
Two "server" class GigE NICs, connected to the PCIe x16 connector on each 
motherboard
~2 meter long Cat5E cable between the PCs
No other PCIe cards in the PCs
Rnbc is not a dropped packet.  This number is used to indicate to the user that 
the packet was close to being dropped, nothing more.  The packets that show up 
in this count are actually received and processed by the stack, so please don't 
include these in error counts.  They really mean that your system isn't 
processing the packets fast enough for some reason.
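The RNBC/MPC distinction above can be watched directly from the driver's statistics. A minimal sketch, assuming `ethtool -S`-style output; the counter names used here (`rx_no_buffer_count` for RNBC, `rx_missed_errors` for MPC) are assumptions and vary by driver and version:

```python
def parse_stats(text):
    """Parse 'name: value' lines (ethtool -S style output) into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            try:
                stats[name.strip()] = int(value.strip())
            except ValueError:
                pass  # skip headers and non-numeric fields
    return stats

def classify(before, after):
    """Return (rnbc_delta, mpc_delta): buffer-pressure warnings vs. real drops."""
    rnbc = after.get("rx_no_buffer_count", 0) - before.get("rx_no_buffer_count", 0)
    mpc = after.get("rx_missed_errors", 0) - before.get("rx_missed_errors", 0)
    return rnbc, mpc
```

Taking two snapshots a few seconds apart and diffing them shows whether the system is merely under pressure (RNBC rising, MPC flat) or actually losing packets (MPC rising).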

OK, thanks for the clarification.

I was confused with the following from the datasheet:

Table 7-68. IEEE 802.3 Recommended Package Statistics
    FramesLostDueToIntMACRcvError     82580 counter:    RNBC

Table 7-71. RMON Statistics
    etherStatsDropEvents              82580 counters:   MPC + RNBC


Both of those statistic names make it sound like the packets counted in RNBC 
are actually dropped.




You should be able to get to this but it all depends on what kind of traffic 
you are sending.  64 byte frames are harder to process than full sized frames.  
It's also harder to get to full wire bandwidth with small frames.  I'm guessing 
that if you rerun the tests using full sized frames your drops would all go 
away, even with your current setup.
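The frame-size effect can be made concrete: at a fixed line rate, the packet rate the receiver must sustain grows as frames shrink, since every frame also pays a fixed preamble and inter-frame-gap cost on the wire. A quick back-of-envelope sketch:

```python
# Why small frames are harder: packet rate at line rate vs. frame size.
# Each Ethernet frame also costs 8 bytes of preamble + 12 bytes of
# inter-frame gap on the wire.

LINE_RATE_BPS = 1_000_000_000  # 1 Gb/s
OVERHEAD = 8 + 12              # preamble + inter-frame gap, in bytes

def max_pps(frame_bytes, line_rate_bps=LINE_RATE_BPS):
    """Maximum frames per second at full wire rate for a given frame size."""
    bytes_per_sec = line_rate_bps // 8
    return bytes_per_sec // (frame_bytes + OVERHEAD)

# 64-byte frames: ~1.49M packets/s to service; 1518-byte frames: only ~81k
# packets/s for the same wire bandwidth, so per-packet overhead dominates
# with small frames.
```

This is why a test that drops packets at 64 bytes can run clean with full-sized frames: the per-packet work on the host drops by a factor of ~18 at the same wire bandwidth.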


---> My idea is to obtain a baseline with zero errors, but maybe that's not 
possible due to clock tolerance or other factors?
---> What operating system do they recommend for such a test?
Linux is fine, just a really new kernel.  Maybe something like Ubuntu 11.04 or 
Fedora 15.  You said you are using Debian 3.0?  Really?  That kernel is from 
2002 so that can't be right; support for the 82580 device wasn't added until 
much later.  Please run 'uname -a' on the system, which will tell you what 
kernel you are running.


We're not using Debian 3.0... we're using one of the latest Debian releases 
(wheezy/sid).  The kernel version is 3.0.0-1-amd64.





4) Which OS is normally used to run Lanconf at Intel?  EFI, DOS, Linux, 
Windows?
We don't use it for much, as I said.  Definitely not for the kind of testing 
you are trying to do.


Ok




Thanks


Alex



Ronciak, John wrote:
Comments below.

Cheers,
John

From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Tuesday, November 01, 2011 12:38 PM
To: Ronciak, John
Cc: <ricardo.riba...@gmail.com>; e1000-de...@lists.sf.net
Subject: Re: [E1000-devel] Intel GigE NIC - Missing lots of packets

Hello John,

Thanks for the tip about pktgen.  I'll look into it more deeply tomorrow.

We're using the stock igb driver from kernel.org which is included in the 
Debian kernel package.  Do you recommend using the one from Sourceforge?  Any 
critical issues fixed lately?
No, just that you should be using a recent version.  In your case a recent 
Debian release.  That's all.


Does Intel use mostly "pktgen" instead of Lanconf for silicon/board validation?
I need to build myself a reliable test bench for testing NIC, and I would like 
it to be as close as possible to "industry standard"... whatever that is...  So 
which one should I concentrate my efforts on?
It depends on the use case.  For just testing packet performance of our Si, the 
LAD Linux team mostly uses pktgen or a real HW traffic generator.  We rarely 
use Lanconf for that type of testing.  Lanconf has many uses, however, and is a 
very useful tool for other types of things, like looking at or changing 
EEPROM values, or running the diag tools to test a questionable NIC device.  For 
packet testing, tools like pktgen, netperf or iperf are probably 
preferable as they offer many more options for how packets will be received or 
transmitted.
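For reference, pktgen is driven by writing single-line commands into its control files under /proc/net/pktgen, as described in the kernel's Documentation/networking/pktgen.txt. A minimal sketch of scripting one run; the device name, addresses, and counts below are made-up example values, and actually applying the plan requires root and `modprobe pktgen`:

```python
# Sketch: build and apply a pktgen configuration via /proc/net/pktgen.
# Device name (eth1), destination IP/MAC, and counts are example values only.

PKTGEN = "/proc/net/pktgen"

def pktgen_plan(dev, dst_ip, dst_mac, count=1_000_000, pkt_size=1500, thread=0):
    """Return the (file, command) writes that would configure one pktgen run."""
    kthread = f"{PKTGEN}/kpktgend_{thread}"
    devfile = f"{PKTGEN}/{dev}"
    return [
        (kthread, "rem_device_all"),        # detach any previous devices
        (kthread, f"add_device {dev}"),     # bind the test NIC to this thread
        (devfile, f"count {count}"),        # number of packets to send
        (devfile, f"pkt_size {pkt_size}"),  # frame size in bytes
        (devfile, f"dst {dst_ip}"),
        (devfile, f"dst_mac {dst_mac}"),
        (f"{PKTGEN}/pgctrl", "start"),      # kick off the run
    ]

def run(plan):
    # Each pktgen command is one line written to its control file.
    for path, cmd in plan:
        with open(path, "w") as f:
            f.write(cmd + "\n")
```

Keeping the plan separate from the writes makes it easy to print or review the exact commands before touching /proc.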


Do you have any email contact for people developing Lanconf?  You can send me a 
personal email if you do.
Lanconf comes from another team here in my group.  What's the question?



Regards,


Alexandre Desnoyers
Electronic Design Engineer
Qtechnology A/S
Valby Langgade 142, 1.sal - 2500 Valby - Denmark - www.qtec.com




Ronciak, John wrote:

Hi Alexandre,



We (at least the Linux team) don't use the DOS version of Lanconf at all, so I 
can't really say if it works as a driving test platform.  We even rarely use 
the Linux version of Lanconf, as pktgen (Packet Gen) is much more configurable.



The graphics should not be a problem, and neither should the mouse and keyboard.



In the Debian release, are you using the stock igb driver or are you using the 
latest one from our Sourceforge site (e1000.sf.net)?  Not that it really 
matters; it's just that we don't test against Debian very often.



The I350 NIC will be a good packet generator, though I think you should look at 
pktgen instead of Lanconf.



Cheers,

John



From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Tuesday, November 01, 2011 11:06 AM
To: Ronciak, John
Cc: e1000-de...@lists.sf.net; <ricardo.riba...@gmail.com>
Subject: Re: [E1000-devel] Intel GigE NIC - Missing lots of packets



Hi John,



Thanks for your answer.



On the software side, we did many tests using ping, iperf and netperf under 
Linux... But all the screenshots have been made by booting "Plain-Old-DOS" 
(Win98SE), with no TSRs at all in config.sys+autoexec.bat, and just running 
LANCONF from there on both PCs.  So there shouldn't be anything software-wise to 
create any problems.





I'm planning to buy an Intel I350 NIC, 2 ports, for the packet generator PC.  
Please let me know if you recommend this setup and if you see any compatibility 
issues with LANCONF.





Your comment regarding bus bandwidth made me think about the integrated video 
card...  Do you know, or have you seen, whether an integrated video card sharing 
the CPU DRAM could create such a bandwidth limitation?  Even when running pure 
DOS in text mode?  This is pretty much the only bandwidth-intensive device in 
the test system.  The other devices are the USB ports for the 
keyboard+mouse+USB-key.  Nothing else.





FYI, we're using Debian with a 3.0.0 kernel (maybe 3.0.1... not sure) when 
running ping/iperf/netperf.  We haven't run LANCONF under Linux.





Thanks again,





Alexandre Desnoyers
Electronic Design Engineer
Qtechnology A/S
Valby Langgade 142, 1.sal - 2500 Valby - Denmark - www.qtec.com







Ronciak, John wrote:

Alexandre,

You have a lot of questions here.  Let's start with some level setting.  Using 
UDP there may be some packet drops depending on how fast the packets are being 
processed.  I think you may need to set the volume of packets (throughput) you 
are going to be sending/receiving on your device and see if packets are still 
being dropped on that device.  From what you say below it doesn't look like the 
82580 device/system is dropping any packets.  It's the other systems/devices 
with the drops, right?

What is happening when you see RNBC (receive no buffer count) is that the 
packets aren't being processed fast enough by the system, and if it continues 
you see the MPC (missed packet count) increase when the packets are actually 
dropped.  This could be happening due to a slow system, a slow bus, something 
else on the system taking up CPU or PCI bus bandwidth, etc.

So I would recommend testing this again with your 82580 device and one other 
higher-end system as the link partner.  Set the test up for your desired 
throughput and see what your systems do with that throughput.  I wouldn't use 
your laptop as a partner; use a server type system if possible.  Limit what is 
running on it while you are testing.  Disable other PCI/PCIe devices in the 
system, if possible, if they are taking up lots of bus bandwidth.

You also did not say what Linux version you are using on your system.  This can 
also have an effect on what you are seeing.  You talk about your HW and Lanconf 
but not the system SW.

Please let us know.

BTW, the packet drops you are seeing are not excessive at all, but they can 
possibly be reduced if the test is correct for your environment.

Cheers,
John

-----Original Message-----
From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Tuesday, November 01, 2011 5:50 AM
To: e1000-de...@lists.sf.net
Cc: <ricardo.riba...@gmail.com>
Subject: [E1000-devel] Intel GigE NIC - Missing lots of packets

Hello everyone,

I've just registered to the mailing list following the recommendation of Peter 
Waskiewicz from Intel.

Here is the situation:
I've designed an embedded x86 board with an Intel 82580 NIC (both dual and quad 
GigE).  We're seeing some packet loss when using "ping -f", "iperf" or 
"netperf" under Linux.  After investigating a little more, I found the DOS 
LANConf tool that can be used to test the NIC according to IEEE standards.

Our embedded application is a real-time image analysis system, and the image 
frames are sent via UDP packets to another computer (point-to-point).  We're 
seeing some packet losses and are trying to debug this.

Sorry for using Tinypic links for the screenshots... but I believe that the 
server does not support attachments.  Feel free to request a personal email 
with the properly named attachments.

Questions:

----------------------------------------------------------------------------

1)  Can you recommend an Intel PCIe NIC "reference" card that can be used as a 
traffic generator?  I was thinking about the "Intel PRO/1000 PT Server 
Adapter", but maybe you have a better recommendation.

Right now, I'm using the LOM 82578DC NIC from an Intel DH55TC motherboard as 
the traffic generator for LANConf.

According to the Intel "1000BASE-T/100BASE-TX/10BASE-T Physical Layer 
Compliance Tests Manual", version 4.3, the recommended NIC is an 82543, but 
it's now listed as "end of life" on Intel's website... And it only has a PCI 
interface, which is not fast enough for full speed GigE testing (confirmed 
with an old Intel 82541 based PCI card).

What are the architectural differences between the client and server NICs?  I 
need to convince my boss to spend the money on the correct card :)

----------------------------------------------------------------------------

2)  When doing the Send/Receive LANConf test between my 82580 and the 82578DC 
LOM, I get "Mpc" and "Rnbc" errors on the 82578DC, but no errors on the 82580.  
There are about 5 Mpc errors per second, and about 15 Rnbc errors per second.

Embedded 82580 screenshot - with link to Intel LOM.jpg
http://i40.tinypic.com/35inm7b.jpg

Intel LOM screenshot - with link to 82580.jpg
http://i44.tinypic.com/2q1f34o.jpg

Is this somehow normal??

I also get Mpc and Rnbc errors on both my T60p laptop and the LOM NIC when 
testing between them.  Check these two screenshots:

   Lenovo T60p screenshot - with link to Intel LOM.jpg
     http://i41.tinypic.com/11l6k5t.jpg

   Intel LOM screenshot - with link to T60p.jpg
     http://i40.tinypic.com/sfvswk.jpg

----------------------------------------------------------------------------

3) When looking at the bandwidth values, they seem to be half of what they 
should be on the 82580.
The 82580 reports a total of ~1020Mbps and the partner (LOM) reports 
2046Mbps....

Embedded 82580 screenshot - with link to Intel LOM.jpg
http://i40.tinypic.com/35inm7b.jpg

Intel LOM screenshot - with link to 82580.jpg
http://i44.tinypic.com/2q1f34o.jpg

Any explanation??

----------------------------------------------------------------------------

4) The BER test between the 82580 and the LOM is failing in one direction.

BER screenshot between 82580 and Intel LOM - 82580 is RX.jpg
http://i44.tinypic.com/k9b86s.jpg

BER screenshot between 82580 and Intel LOM - LOM is RX.jpg
http://i42.tinypic.com/15qclfl.jpg

Tested at GigE speed.

Any suggestion of what to check??

Here is what I've checked so far:
- Mounted a 25MHz 2ppm TCXO oscillator (not xtal) on the 82580
- 1000BASE-T Peak Differential Output Voltage and Level Accuracy
- 1000BASE-T Maximum Output Droop
- 100BASE-TX Differential Output Voltage (UTP)
- 100BASE-TX Waveform Overshoot
- 100Base-TX Rise and Fall Times
- 100Base-TX Duty Cycle Distortion (DCD)
- 100Base-TX Transmit Jitter

All of the tests above are well within the limits specified in the 
"1000BASE-T/100BASE-TX/10BASE-T Physical Layer Compliance Tests Manual".

Using LANConf v1.18.8.1 under a Win98SE DOS boot disk.
Cable is Cat5E STP (not UTP), 2 meters long, point-to-point.

----------------------------------------------------------------------------

Thank you very much for your time,

Alexandre Desnoyers
Electronic Design Engineer
Qtechnology A/S
Valby Langgade 142, 1.sal - 2500 Valby - Denmark - www.qtec.com

------------------------------------------------------------------------------
RSA&#174; Conference 2012
Save $700 by Nov 18
Register now&#33;
http://p.sf.net/sfu/rsa-sfdev2dev1
_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel&#174; Ethernet, visit 
http://communities.intel.com/community/wired
