On both sides I use em(4) with MTU 9000. I then tried to set the same value on the pfsync interface (ifconfig pfsync0 mtu 9000); the command succeeds without error, but the value actually shown is still 2048.
pfsync0: flags=41<UP,RUNNING> mtu 2048
        priority: 0
        pfsync: syncdev: em0 maxupd: 128 defer: off
        groups: carp pfsync
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 9000
        lladdr 00:1b:21:bb:7f:4b
        priority: 0
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet 10.10.10.10 netmask 0xffffff00 broadcast 10.10.10.255
        inet6 fe80::21b:21ff:febb:7f4b%em0 prefixlen 64 scopeid 0x1

tcpdump output:

20:11:08.498241 10.10.10.10: PFSYNCv6 len 2012 act TDB UPD count 2
        spi: 0e64afa5 rpl: 18271 cur_bytes: 213800 ... (DF) [tos 0x10]
20:11:08.672741 10.10.10.10: PFSYNCv6 len 544 act UPD ST COMP count 3 ... (DF) [tos 0x10]
20:11:09.348900 10.10.10.11: PFSYNCv6 len 124 act UPD ST COMP count 1 ... (DF) [tos 0x10]
20:11:09.362771 10.10.10.10: PFSYNCv6 len 2024 act TDB UPD count 2
        spi: 0e64afa5 rpl: 18273 cur_bytes: 214032

On Oct 22, 2011, at 5:36 PM, Mike Belopuhov wrote:

> On Thu, Oct 20, 2011 at 10:40 +0200, Maxim Bourmistrov wrote:
>> Hi list,
>> is there any reason for MTU on pfsync0 to be limited to 2048?
>
> yes, when pfsync(4) was written, there was only one mbuf cluster
> pool: MCLBYTES (2048) sized one. now we have several.
>
>> Any benefit from having it larger, say up to 9000?
>
> it should be possible to send out more updates at once, therefore
> calling output routines less often. there might be delay concerns
> though -- this should be investigated.
>
>> I enabled MTU 9000 on syncdev and tried on pfsync0.
>
> but does it work? are you getting state updates? you should have
> lots of states to verify that huge packets actually get sent out.
>
> btw, what's the mtu size of your syncdev interface?
>
>> As seen in tcpdump now, sync pkts are large but not as large as
>> 9000 (2048 limit).
>>
>> //maxim