On 05/18/2011 07:18 AM, Lynch, Jonathan wrote:
> Hi Alex,
>
> If there is only 1 RX buffer being used (DCB not enabled), when there 
> are packets dropped I should just see missed packets for mpc0 like 
> what I see below?
> This 1 Rx buffer uses all the space available to it - up to 512 KB, 
> depending on the features enabled such as flow director?
>
> 0x03FA0: mpc0        (Missed Packets Count 0)         0x0723CE36
> 0x03FA4: mpc1        (Missed Packets Count 1)         0x00000000
> 0x03FA8: mpc2        (Missed Packets Count 2)         0x00000000
> 0x03FAC: mpc3        (Missed Packets Count 3)         0x00000000
> 0x03FB0: mpc4        (Missed Packets Count 4)         0x00000000
> 0x03FB4: mpc5        (Missed Packets Count 5)         0x00000000
> 0x03FB8: mpc6        (Missed Packets Count 6)         0x00000000
> 0x03FBC: mpc7        (Missed Packets Count 7)         0x00000000
>
> According to the 82599 data sheet
>
> *8.2.3.23.4 Rx Missed Packets Count — RXMPC[n] (0x03FA0 + 4*n, 
> n=0...7; RC) DBU-Rx*
> Register ‘n’ counts the number of missed packets per packet buffer ‘n’.
> Packets are missed when the receive FIFO has insufficient space to 
> store the incoming
> packet. This may be caused due to insufficient buffers allocated, or 
> because there is
> insufficient bandwidth on the IO bus. Events setting this counter also 
> set the receiver
> overrun interrupt (RXO). These registers do not increment if receive 
> is not enabled and
> count only packets that would have been posted to the SW driver.
>
> Jonathan

I'm slightly confused by what you are asking here.  With only one packet 
buffer enabled you will only see MPC increment for one FIFO, since it is 
the only one in use.  So if you are asking whether the behaviour is 
normal, then yes, the info you have above is correct.
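
As an aside, you don't have to dump registers to watch this counter; the 
ixgbe driver also exposes the misses through its ethtool statistics.  A 
sketch (eth0 is a placeholder interface name, and the exact statistic 
names vary by driver version, so adjust the grep as needed):

```shell
# List the ixgbe statistics related to missed/dropped receive packets.
# rx_missed_errors aggregates the per-buffer RXMPC[n] counts.
ethtool -S eth0 | grep -iE 'miss|drop'
```

Sampling that command twice a few seconds apart tells you whether the 
counter is still climbing or is left over from an earlier burst.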

However, one thing that does concern me is why you are seeing the 
missed packet counts at all.  Normally something like this occurs when 
you do not have sufficient PCIe bandwidth to drain the RX FIFO as 
packets arrive.  Would it be possible to provide an "lspci -vvv" dump 
for the device?  What we specifically want to verify is that the link 
status register reports a 5GT/s link with a lane width of x8.  This is 
the optimal configuration for allowing enough PCIe bandwidth to empty 
the RX FIFO.
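
For reference, the relevant lines can be pulled out of the lspci output 
like this (03:00.0 is a placeholder; substitute the bus address your 
82599 shows up at in plain "lspci"):

```shell
# LnkCap shows what the device supports; LnkSta shows what was actually
# negotiated.  We want LnkSta to read "Speed 5GT/s, Width x8".
lspci -vvv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
```

If LnkSta reports 2.5GT/s or a narrower width than LnkCap advertises, 
the slot or the link training is the first thing to investigate.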

Thanks,

Alex

_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired