On Mon, Oct 24, 2011 at 12:18 PM, Maxim Bourmistrov wrote:
>
> Hi,
>
> I patched on side of this tandem

do you mean 'one'? then you should obviously patch both.
i mean, come on, you wanted to do some research on
your own, so do it.

> and had following setup:
> fw1: em0 mtu 9000, pfsync0 mtu 2048
> fw2: em0 mtu 9000, pfsync0 mtu 9000
Hi,
I patched on side of this tandem
and had following setup:
fw1: em0 mtu 9000, pfsync0 mtu 2048
fw2: em0 mtu 9000, pfsync0 mtu 9000
This produced "pfsync: failed to receive bulk update".
If I change back to mtu 2048 states get propagated.
I also changed hardmtu as dlg@ suggested.
On Sat, Oct 22, 2011 at 20:14 +0200, Maxim Bourmistrov wrote:
>
> On both sides I use em(4) with MTU 9000.
> Then tried to set the same value to the pfsync with success (ifconfig pfsync0
> mtu 9000), but the actual value I see is 2048.
>

ugh. i thought you've fixed up the source code.
i'm curious if it'll still work with a smaller mtu on the physical
interface :-)

Index: net/if_pfsync.c
==
On both sides I use em(4) with MTU 9000.
Then tried to set the same value to the pfsync with success (ifconfig pfsync0
mtu 9000), but the actual value I see is 2048.
pfsync0: flags=41<UP,RUNNING> mtu 2048
        priority: 0
        pfsync: syncdev: em0 maxupd: 128 defer: off
        groups: carp pfsync
On Thu, Oct 20, 2011 at 10:40 +0200, Maxim Bourmistrov wrote:
> Hi list,
> is there any reason for MTU on pfsync0 to be limited to 2048?

yes, when pfsync(4) was written, there was only one mbuf cluster
pool: the MCLBYTES (2048) sized one. now we have several.

> Any benefit from having it larger, say up to 9000?
Hi list,
is there any reason for MTU on pfsync0 to be limited to 2048?
Any benefit from having it larger, say up to 9000?
I enabled MTU 9000 on syncdev and tried on pfsync0.
As seen in tcpdump now, sync pkts are large, but not as large as
9000 (the 2048 limit).
//maxim