On Tue, Jan 28, 2014 at 01:21:46PM +1000, David Gwynne wrote:
> 
> On 26 Jan 2014, at 11:31 am, Brad Smith <b...@comstyle.com> wrote:
> 
> > On 31/12/13 5:50 AM, Mike Belopuhov wrote:
> >> On 31 December 2013 09:46, Brad Smith <b...@comstyle.com> wrote:
> >>> On 31/12/13 3:14 AM, Mark Kettenis wrote:
> >>>>> 
> >>>>> Date: Tue, 31 Dec 2013 01:28:04 -0500
> >>>>> From: Brad Smith <b...@comstyle.com>
> >>>>> 
> >>>>> Don't count RX overruns and missed packets as inputs errors. They're
> >>>>> expected to increment when using MCLGETI.
> >>>>> 
> >>>>> OK?
> >>>> 
> >>>> 
> >>>> These may be "expected", but they're still packets that were not
> >>>> received.  And it is useful to know about these, for example when
> >>>> debugging TCP performance issues.
> >>> 
> >>> 
> >>> Well do we want to keep just the missed packets or both? Part of the
> >>> diff was inspired by this commit when I was looking at what counters
> >>> were incrementing..
> >>> 
> >>> for bge(4)..
> >>> 
> >>> revision 1.334
> >>> date: 2013/06/06 00:05:30;  author: dlg;  state: Exp;  lines: +2 -4;
> >>> dont count rx ring overruns as input errors. with MCLGETI controlling the
> >>> ring we expect to run out of rx descriptors as a matter of course, its not
> >>> an error.
> >>> 
> >>> ok mikeb@
> >>> 
> >>> 
> >> 
> >> it screws up the statistics big time.  does the mpc counter follow
> >> rx_overruns?  why did we previously add them both up?
> > 
> > Yes, it does. I can't say why exactly, but before MCLGETI it was
> > unlikely to have RX overruns in most environments.
> 
> its not obvious?
> 
> rx rings are usually massively over provisioned. eg, my myx has 512
> entries in its rx ring. however, its interrupt mitigation is currently
> configured for approximately 16k interrupts a second. if you're
> sustaining 1m pps, then you can divide that by the interrupt rate to
> figure out the average usage of the rx ring. so 1000 / 16 is about
> 60-65 descriptors per interrupt. 512 is roughly an order of magnitude
> more than what you need for that workload.
> 
> if you were hitting the ring limits before MCLGETI, then that means
> you don't have enough cpu to process the pps. increasing the ring size
> would make it worse because you'd spend more time freeing mbufs, since
> you were too far behind on the pps you could deal with.
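dlg's averaging argument can be sketched as a quick calculation (the rates and the 512-entry myx ring size are the figures quoted above; the helper name is just illustrative, not anything from the driver code):

```python
def avg_ring_usage(pps, intr_per_sec):
    """Average number of rx descriptors consumed between interrupts."""
    return pps / intr_per_sec

# 1M packets/s with interrupt mitigation at ~16k interrupts/s:
usage = avg_ring_usage(1_000_000, 16_000)   # 62.5 descriptors per interrupt

# A 512-entry ring is roughly an order of magnitude more than needed:
ring_size = 512
headroom = ring_size / usage                # ~8x over-provisioned

print(usage, headroom)
```

So at that workload the ring only fills if the host falls far behind on processing, which is exactly the situation where growing the ring would not help.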

Ya, I don't know why I ultimately said "I can't say why exactly"; I was
thinking about what you said regarding having a lot of buffers
allocated, and that's why I said it was unlikely to have RX overruns.

Since this was changed for bge(4), the other drivers using MCLGETI
should be changed as well if there is consensus about not adding the RX
overruns to the interfaces' input errors counter. But I think kettenis
has a point as well: this information is useful, it's just that we
don't have any way of exposing it to userland.
