Re: bump the max_linkhdr size from 16 bytes to 64

2016-03-02 Thread David Gwynne

> On 2 Mar 2016, at 8:04 PM, Martin Pieuchot  wrote:
> 
> On 02/03/16(Wed) 10:33, Mike Belopuhov wrote:
>> On 2 March 2016 at 05:50, David Gwynne  wrote:
>>> max_linkhdr is basically how much space to reserve before an
>>> ip packet for link headers, eg, ethernet.
>>> 
>>> 16 bytes was a good choice when everything was just ip inside
>>> ethernet, but now we deal with a bunch of encapsulations which blow
>>> that out of the water. 16 bytes isn't even enough if we have to
>>> inject a vlan tag ourselves.
>>> 
>>> i'm suggesting 64 because it will comfortably allow you to encap an
>>> ethernet header inside of an ip protocol. i think it is even enough
>>> to accommodate vxlan overhead.
>>> 
>>> the caveat to this is that it lowers the size of the largest packet
>>> that can go into a single mbuf. currently the data space in an mbuf
>>> with a packet header is about 184 bytes. with a max_linkhdr of 16
>>> that allows a 168 byte ip packet; after this only a 120 byte packet
>>> fits.
>>> 
>>> however, most ip packets are either small (think keystrokes over
>>> ssh, ACKs, or dns lookups) or full sized (always greater than MHLEN
>>> anyway). this change therefore has minimal impact on the majority
>>> of traffic, except to make prepending encap headers a lot cheaper.
>>> 
>>> ok?
>>> 
>> 
>> I'm fine with the change, but why are you replicating the XXX comment?
>> Why is it there?  Is it the wrong place to put it?
> 
> I agree.  I'd kill the XXX but add a comment explaining that 64 is an
> educated guess.  Because that's what it is, right?  Or did you do some
> real analysis?
> 
> I'm ok with the change either way.

i had it at 32 on my stuff and got a drop in the number of mbufs moving through 
the system. eg, on a laptop with iwm it gets rid of an extra mbuf allocation on 
every ACK i send while downloading stuff.
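
presumably the prepend there is the 802.11 plus llc/snap headers, which on
their own already blow past the old 16 bytes of headroom. rough sizes, not
measured from the driver:

/*
 *	802.11 data header	24	(26 with a qos field)
 *	llc/snap		 8
 *	---------------------------
 *	total			32 or more, vs 16 bytes reserved
 */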

i went to 64 after looking at what stuff like etherip does.
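
for reference, the etherip header stack works out roughly like this (sizes
from the rfcs, outer header without options; back-of-the-envelope only, not
numbers from the tree):

/*
 *	outer ethernet		14
 *	outer ipv4		20	(40 for ipv6)
 *	etherip (rfc 3378)	 2
 *	---------------------------
 *	total			36	(56 with an ipv6 outer header)
 *
 * neither comes close to fitting in 16; both fit in 64.
 */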

dlg


Re: bump the max_linkhdr size from 16 bytes to 64

2016-03-02 Thread Martin Pieuchot
On 02/03/16(Wed) 10:33, Mike Belopuhov wrote:
> On 2 March 2016 at 05:50, David Gwynne  wrote:
> > max_linkhdr is basically how much space to reserve before an
> > ip packet for link headers, eg, ethernet.
> >
> > 16 bytes was a good choice when everything was just ip inside
> > ethernet, but now we deal with a bunch of encapsulations which blow
> > that out of the water. 16 bytes isn't even enough if we have to
> > inject a vlan tag ourselves.
> >
> > i'm suggesting 64 because it will comfortably allow you to encap an
> > ethernet header inside of an ip protocol. i think it is even enough
> > to accommodate vxlan overhead.
> >
> > the caveat to this is that it lowers the size of the largest packet
> > that can go into a single mbuf. currently the data space in an mbuf
> > with a packet header is about 184 bytes. with a max_linkhdr of 16
> > that allows a 168 byte ip packet; after this only a 120 byte packet
> > fits.
> >
> > however, most ip packets are either small (think keystrokes over
> > ssh, ACKs, or dns lookups) or full sized (always greater than MHLEN
> > anyway). this change therefore has minimal impact on the majority
> > of traffic, except to make prepending encap headers a lot cheaper.
> >
> > ok?
> >
> 
> I'm fine with the change, but why are you replicating the XXX comment?
> Why is it there?  Is it the wrong place to put it?

I agree.  I'd kill the XXX but add a comment explaining that 64 is an
educated guess.  Because that's what it is, right?  Or did you do some
real analysis?

I'm ok with the change either way.



Re: bump the max_linkhdr size from 16 bytes to 64

2016-03-02 Thread David Gwynne

> On 2 Mar 2016, at 7:33 PM, Mike Belopuhov  wrote:
> 
> On 2 March 2016 at 05:50, David Gwynne  wrote:
>> max_linkhdr is basically how much space to reserve before an
>> ip packet for link headers, eg, ethernet.
>> 
>> 16 bytes was a good choice when everything was just ip inside
>> ethernet, but now we deal with a bunch of encapsulations which blow
>> that out of the water. 16 bytes isn't even enough if we have to
>> inject a vlan tag ourselves.
>> 
>> i'm suggesting 64 because it will comfortably allow you to encap an
>> ethernet header inside of an ip protocol. i think it is even enough
>> to accommodate vxlan overhead.
>> 
>> the caveat to this is that it lowers the size of the largest packet
>> that can go into a single mbuf. currently the data space in an mbuf
>> with a packet header is about 184 bytes. with a max_linkhdr of 16
>> that allows a 168 byte ip packet; after this only a 120 byte packet
>> fits.
>> 
>> however, most ip packets are either small (think keystrokes over
>> ssh, ACKs, or dns lookups) or full sized (always greater than MHLEN
>> anyway). this change therefore has minimal impact on the majority
>> of traffic, except to make prepending encap headers a lot cheaper.
>> 
>> ok?
>> 
> 
> I'm fine with the change, but why are you replicating the XXX comment?
> Why is it there?  Is it the wrong place to put it?

some deraadt guy committed the XXX, so maybe he knows. it was only 20 years ago.

more seriously, these vars seem like spaghetti to me. dropping the XXX wouldn't 
make it either better or worse, so i'll get rid of it if i put this in.


Re: bump the max_linkhdr size from 16 bytes to 64

2016-03-02 Thread Mike Belopuhov
On 2 March 2016 at 05:50, David Gwynne  wrote:
> max_linkhdr is basically how much space to reserve before an
> ip packet for link headers, eg, ethernet.
>
> 16 bytes was a good choice when everything was just ip inside
> ethernet, but now we deal with a bunch of encapsulations which blow
> that out of the water. 16 bytes isn't even enough if we have to
> inject a vlan tag ourselves.
>
> i'm suggesting 64 because it will comfortably allow you to encap an
> ethernet header inside of an ip protocol. i think it is even enough
> to accommodate vxlan overhead.
>
> the caveat to this is that it lowers the size of the largest packet
> that can go into a single mbuf. currently the data space in an mbuf
> with a packet header is about 184 bytes. with a max_linkhdr of 16
> that allows a 168 byte ip packet; after this only a 120 byte packet
> fits.
>
> however, most ip packets are either small (think keystrokes over
> ssh, ACKs, or dns lookups) or full sized (always greater than MHLEN
> anyway). this change therefore has minimal impact on the majority
> of traffic, except to make prepending encap headers a lot cheaper.
>
> ok?
>

I'm fine with the change, but why are you replicating the XXX comment?
Why is it there?  Is it the wrong place to put it?



bump the max_linkhdr size from 16 bytes to 64

2016-03-01 Thread David Gwynne
max_linkhdr is basically how much space to reserve before an
ip packet for link headers, eg, ethernet.
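
to give an idea of where it bites: when a protocol builds a packet it
typically leaves max_linkhdr bytes of headroom in the header mbuf so the
link layer can prepend its headers later. a sketch of the usual pattern,
not any particular function in the tree:

	struct mbuf *m;

	MGETHDR(m, M_DONTWAIT, MT_HEADER);
	if (m == NULL)
		return (ENOBUFS);
	m->m_data += max_linkhdr;	/* leave room for link headers */
	/* protocol headers and payload go in after this */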

16 bytes was a good choice when everything was just ip inside
ethernet, but now we deal with a bunch of encapsulations which blow
that out of the water. 16 bytes isn't even enough if we have to
inject a vlan tag ourselves.

i'm suggesting 64 because it will comfortably allow you to encap an
ethernet header inside of an ip protocol. i think it is even enough
to accommodate vxlan overhead.
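
as a rough sanity check on the 64 (usual header sizes, outer ipv4 without
options assumed; back-of-the-envelope, not measured):

/*
 *	vlan in ethernet:	14 + 4		= 18	already over 16
 *	vxlan (rfc 7348):	14 + 20 + 8 + 8	= 50	(54 with a vlan tag)
 *
 * all of it fits under 64.
 */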

the caveat to this is that it lowers the size of the largest packet
that can go into a single mbuf. currently the data space in an mbuf
with a packet header is about 184 bytes. with a max_linkhdr of 16
that allows a 168 byte ip packet; after this only a 120 byte packet
fits.
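
spelling out that arithmetic, taking the roughly 184 bytes of data space
in a packet header mbuf as given:

/*
 *	184 - 16 = 168	largest ip packet in a single mbuf today
 *	184 - 64 = 120	largest ip packet in a single mbuf after the bump
 */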

however, most ip packets are either small (think keystrokes over
ssh, ACKs, or dns lookups) or full sized (always greater than MHLEN
anyway). this change therefore has minimal impact on the majority
of traffic, except to make prepending encap headers a lot cheaper.
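
the cheapness comes from m_prepend(): with enough leading space, prepending
a header is just a pointer adjustment; without it, a whole extra mbuf gets
allocated and chained on the front. a simplified sketch of that logic (the
function name is made up and pkthdr handling is omitted; the real code is
m_prepend() in sys/kern/uipc_mbuf.c):

struct mbuf *
prepend_sketch(struct mbuf *m, int len)
{
	struct mbuf *mn;

	if (M_LEADINGSPACE(m) >= len) {
		/* reserved headroom: just wind m_data back */
		m->m_data -= len;
		m->m_len += len;
		return (m);
	}

	/* no headroom: allocate and chain an extra mbuf for the header */
	MGET(mn, M_DONTWAIT, m->m_type);
	if (mn == NULL) {
		m_freem(m);
		return (NULL);
	}
	mn->m_next = m;
	mn->m_len = len;
	return (mn);
}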

ok?

Index: uipc_domain.c
===================================================================
RCS file: /cvs/src/sys/kern/uipc_domain.c,v
retrieving revision 1.43
diff -u -p -r1.43 uipc_domain.c
--- uipc_domain.c   4 Sep 2015 08:43:39 -   1.43
+++ uipc_domain.c   2 Mar 2016 03:49:33 -
@@ -89,8 +89,8 @@ domaininit(void)
 				(*pr->pr_init)();
 	}
 
-	if (max_linkhdr < 16)		/* XXX */
-		max_linkhdr = 16;
+	if (max_linkhdr < 64)		/* XXX */
+		max_linkhdr = 64;
 	max_hdr = max_linkhdr + max_protohdr;
 	timeout_set(&pffast_timeout, pffasttimo, &pffast_timeout);
 	timeout_set(&pfslow_timeout, pfslowtimo, &pfslow_timeout);