> On 2 Mar 2016, at 8:04 PM, Martin Pieuchot <[email protected]> wrote:
> 
> On 02/03/16(Wed) 10:33, Mike Belopuhov wrote:
>> On 2 March 2016 at 05:50, David Gwynne <[email protected]> wrote:
>>> the max_linkhdr is basically how much space to reserve before an
>>> ip packet for link headers, eg, ethernet.
>>> 
>>> 16 bytes was a good choice when everything was just ip inside
>>> ethernet, but now we deal with a bunch of encapsulations which blow
>>> that out of the water. 16 bytes isn't even enough if we have to
>>> inject a vlan tag ourselves.
>>> 
>>> i'm suggesting 64 because it will comfortably allow you to encap an
>>> ethernet header inside of an ip protocol. i think it is even enough
>>> to accommodate vxlan overhead.
>>> 
>>> the caveat to this is that it changes the watermark for what is the
>>> smallest packet that can go into an mbuf. currently the space in
>>> an mbuf with headers is about 184 bytes. with a max_linkhdr of 16
>>> that would allow a 168 byte ip packet. after this you can put a 120
>>> byte packet in.
>>> 
>>> however, most ip packets are either small (think keystrokes over
>>> ssh, ACKs, or dns lookups) or full sized (always greater than MHLEN
>>> anyway). this change therefore has minimal impact on the majority
>>> of traffic, except to make prepending encap headers a lot cheaper.
>>> 
>>> ok?
>>> 
>> 
>> I'm fine with the change, but why are you replicating the XXX comment?
>> Why is it there?  Is it a wrong place to put it?
> 
> I agree.  I'd kill the XXX but add a comment explaining that 64 is an
> educated guess.  Because that's what it is right?  Or did you do some
> real analysis?
> 
> I'm ok with the change either way.

i had it at 32 on my stuff and got a drop in the number of mbufs moving through 
the system. eg, on a laptop with iwm it gets rid of an extra mbuf allocation on 
every ACK i send while downloading stuff.

i went to 64 after looking at what stuff like etherip does.

dlg
