Re: AF_UNIX socketpair dgram queue sizes

2021-11-09 Thread Jan Schaumann via freebsd-net
Mark Johnston  wrote:

> There is an additional factor: wasted space.  When writing data to a
> socket, the kernel buffers that data in mbufs.  All mbufs have some
> amount of embedded storage, and the kernel accounts for that storage,
> whether or not it's used.  With small datagrams there can be a lot
> of overhead;

I'm observing two mbufs being allocated for each
datagram for small datagrams, but only one mbuf for
larger datagrams.

That seems counter-intuitive to me?

> The kern.ipc.sockbuf_waste_factor sysctl controls the upper limit on
> total bytes (used or not) that may be enqueued in a socket buffer.  The
> default value of 8 means that we'll waste up to 7 bytes per byte of
> data, I think.  Setting it higher should let you enqueue more messages.

Ah, this looks like something relevant.

Setting kern.ipc.sockbuf_waste_factor=1, I can only
write 8 1-byte datagrams.  For each increase of the
waste factor by one, I get another 8 1-byte datagrams,
up until the waste factor exceeds 29, at which point
we hit recvspace: 30 * 8 = 240, so 240 1-byte
datagrams with 16 bytes of dgram overhead each means
we get 240 * 17 = 4080 bytes, which just fits (with
room to spare for one empty 16-byte dgram) into the
recvspace = 4096.
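
For what it's worth, the numbers line up exactly if
each 1-byte datagram is charged two full mbufs
(MSIZE = 256 bytes each -- matching the two mbufs
per small datagram I mentioned above) and the
storage allowance scales as waste_factor *
recvspace.  A back-of-the-envelope model on my
part, not something I've verified in the code:

/*
 * Hypothetical accounting for the 1-byte case (my assumption,
 * not verified against the kernel sources):
 *
 *   charged per dgram = 2 mbufs * MSIZE (256) = 512 bytes
 *   storage allowance = waste_factor * recvspace (4096)
 *
 *   waste_factor = 1:  4096 / 512      =   8 dgrams
 *   waste_factor = n:  n * 4096 / 512  = n * 8 dgrams
 *   waste_factor = 30: 122880 / 512    = 240 dgrams, at which
 *       point the data limit takes over: 240 * 17 = 4080 <= 4096.
 */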

But I still don't get the direct relationship between
the waste factor and the recvspace / buffer queue:
with a waste_factor of 1 and a datagram with 1972
bytes, I'm able to write one dgram with 1972 bytes +
1 dgram with 1520 bytes = 3492 bytes (plus 2 * 16
bytes overhead = 3524 bytes).  There'd still have been
space for 572 more bytes in the second dgram.

Likewise, trying to write a single 1973-byte dgram
fills the queue and no additional bytes can be
written in a second dgram, yet I can write a single
2048-byte dgram.

Still confused...

-Jan



Re: AF_UNIX socketpair dgram queue sizes

2021-11-09 Thread Mark Johnston
On Tue, Nov 09, 2021 at 08:57:20PM -0500, Jan Schaumann via freebsd-net wrote:
> Hello,
> 
> I'm trying to wrap my head around the buffer sizes
> relevant to AF_UNIX/PF_LOCAL dgram socketpairs.
> 
> On a FreeBSD/amd64 13.0 system, creating a socketpair
> and simply writing a single byte in a loop to the
> non-blocking write end without reading the data, I can
> perform 64 writes before causing EAGAIN, yielding 1088
> bytes in FIONREAD on the read end (indicating 16 bytes
> per datagram overhead).

When transmitting on a unix dgram socket, each message will include a
copy of the sender's address, represented by a dummy 16-byte sockaddr in
this case.  This is stripped by the kernel when receiving, but still
incurs overhead with respect to socket buffer accounting.
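
Those 16 bytes are just sizeof(struct sockaddr).  For illustration, the
generic layout is equivalent to the following (the real definition is in
<sys/socket.h>):

/* 16 bytes total, matching the per-datagram overhead seen above. */
struct sockaddr_layout {
	unsigned char	sa_len;		/* total length: 16 */
	unsigned char	sa_family;	/* address family */
	char		sa_data[14];	/* address bytes, unused here */
};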

> This is well below the total net.local.dgram.recvspace
> = 4096 bytes.  I would have expected to be able to
> perform 240 1-byte writes (240 + 240*16 = 4080).
> 
> Now if I try to write SO_SNDBUF = 2048 bytes on each
> iteration (or subsequently as many as I can until
> EAGAIN), then I can send one datagram with 2048 bytes
> and one datagram with 2016 bytes, filling recvspace as
> (2 * 16) + (2048 + 2016) = 4096.
> 
> But at smaller sizes, it looks like the recvspace is
> not filled completely: writes in chunks of > 803 bytes
> will fill recvspace up to 4096 bytes, but below 803
> bytes, recvspace is not maxed out.
> 
> Does anybody know why smaller datagrams can't fill
> recvspace?  Or what I'm missing / misunderstanding
> about the recvspace here?

There is an additional factor: wasted space.  When writing data to a
socket, the kernel buffers that data in mbufs.  All mbufs have some
amount of embedded storage, and the kernel accounts for that storage,
whether or not it's used.  With small datagrams there can be a lot
of overhead; with stream sockets the problem is mitigated somewhat by
compression, but for datagrams we don't have a smarter mechanism that
would preserve message boundaries.

The kern.ipc.sockbuf_waste_factor sysctl controls the upper limit on
total bytes (used or not) that may be enqueued in a socket buffer.  The
default value of 8 means that we'll waste up to 7 bytes per byte of
data, I think.  Setting it higher should let you enqueue more messages.
As far as I know this limit can't be modified directly; it's a function
of the waste factor and the socket buffer size.
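
Roughly, the accounting looks like this (a simplified model of the
sockbuf logic in sys/kern/uipc_sockbuf.c, not the literal kernel code):

/*
 * Simplified model of FreeBSD socket buffer space accounting.
 * sb_hiwat caps the data bytes enqueued; sb_mbmax caps the mbuf
 * storage backing that data, whether used or not.
 */
struct sockbuf_model {
	long	sb_hiwat;	/* data limit, e.g. recvspace = 4096 */
	long	sb_ccc;		/* data bytes currently enqueued */
	long	sb_mbmax;	/* mbuf storage limit */
	long	sb_mbcnt;	/* mbuf storage currently allocated */
};

/* On reserve, the mbuf limit scales with the waste factor
 * (clamped by kern.ipc.maxsockbuf in the real code). */
static void
sb_reserve(struct sockbuf_model *sb, long cc, long waste_factor)
{
	sb->sb_hiwat = cc;
	sb->sb_mbmax = cc * waste_factor;
}

/* A datagram fits only if both limits leave enough room. */
static long
sb_space(const struct sockbuf_model *sb)
{
	long bleft = sb->sb_hiwat - sb->sb_ccc;
	long mleft = sb->sb_mbmax - sb->sb_mbcnt;

	return (bleft < mleft ? bleft : mleft);
}

Tiny datagrams allocate far more mbuf storage than data, so they
exhaust sb_mbmax long before sb_ccc reaches sb_hiwat.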



AF_UNIX socketpair dgram queue sizes

2021-11-09 Thread Jan Schaumann via freebsd-net
Hello,

I'm trying to wrap my head around the buffer sizes
relevant to AF_UNIX/PF_LOCAL dgram socketpairs.

On a FreeBSD/amd64 13.0 system, creating a socketpair
and simply writing a single byte in a loop to the
non-blocking write end without reading the data, I can
perform 64 writes before causing EAGAIN, yielding 1088
bytes in FIONREAD on the read end (indicating 16 bytes
per datagram overhead).
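
Roughly, the test looks like this (a minimal sketch
of what I'm running, trimmed down for the list):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int fds[2], n, nread = 0;
	char c = 'x';

	if (socketpair(PF_LOCAL, SOCK_DGRAM, 0, fds) == -1) {
		perror("socketpair");
		return (1);
	}
	if (fcntl(fds[1], F_SETFL, O_NONBLOCK) == -1) {
		perror("fcntl");
		return (1);
	}

	/* Write 1-byte datagrams until the buffer fills up. */
	for (n = 0; write(fds[1], &c, 1) == 1; n++)
		;
	if (errno != EAGAIN)
		perror("write");

	/* Ask the read end how much data is pending. */
	if (ioctl(fds[0], FIONREAD, &nread) == -1)
		perror("ioctl");
	printf("%d writes, FIONREAD = %d\n", n, nread);
	return (0);
}

On this box it prints: 64 writes, FIONREAD = 1088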

This is well below the total net.local.dgram.recvspace
= 4096 bytes.  I would have expected to be able to
perform 240 1-byte writes (240 + 240*16 = 4080).

Now if I try to write SO_SNDBUF = 2048 bytes on each
iteration (or subsequently as many as I can until
EAGAIN), then I can send one datagram with 2048 bytes
and one datagram with 2016 bytes, filling recvspace as
(2 * 16) + (2048 + 2016) = 4096.

But at smaller sizes, it looks like the recvspace is
not filled completely: writes in chunks of > 803 bytes
will fill recvspace up to 4096 bytes, but below 803
bytes, recvspace is not maxed out.

Does anybody know why smaller datagrams can't fill
recvspace?  Or what I'm missing / misunderstanding
about the recvspace here?

-Jan



Re: Bug in MAC filter on IGB/if_vlan?

2021-11-09 Thread Rozhuk Ivan
On Tue, 9 Nov 2021 13:30:46 -0500
Ryan Stone  wrote:

> You're trying to use different MACs with each vlan.  I don't believe
> that this is really a supported configuration in our stack, although
> putting the interface into promiscuous mode I guess works.

I tried to look into the if_vlan code, and if I understand correctly, it
tries to set the MAC on the parent adapter and to pick up changes to the
parent adapter's MAC.

But it should add/remove MAC addresses to/from the H/W MAC filter, not just
change the single one.  This is done for multicast join/leave, but not for
if_vlan and probably some other cases.
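
In other words, something like this (a toy model only -- the names are
made up for illustration and are not the actual if_vlan/igb interfaces):

#include <stdint.h>
#include <string.h>

#define	FILTER_SLOTS	16

/* Toy model of a NIC's unicast MAC filter table. */
struct nic {
	uint8_t	filter[FILTER_SLOTS][6];
	int	used[FILTER_SLOTS];
};

/* What if_vlan effectively does today, as described above:
 * overwrite the single programmed address. */
static void
nic_set_mac(struct nic *n, const uint8_t mac[6])
{
	memcpy(n->filter[0], mac, 6);
	n->used[0] = 1;
}

/* What is being asked for: add each vlan's MAC as an extra filter
 * entry, the way multicast join/leave already does. */
static int
nic_add_mac(struct nic *n, const uint8_t mac[6])
{
	for (int i = 0; i < FILTER_SLOTS; i++) {
		if (!n->used[i]) {
			memcpy(n->filter[i], mac, 6);
			n->used[i] = 1;
			return (0);
		}
	}
	return (-1);	/* filter table full */
}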




Re: Bug in MAC filter on IGB/if_vlan?

2021-11-09 Thread Rozhuk Ivan
On Tue, 9 Nov 2021 08:39:12 -0600
Kevin Bowling  wrote:

> > I have in rc.conf:
> >
> > =
> > vlans_igb0="vlan77 vlan86 vlan87"
> > create_args_vlan87="vlan 87"
> > create_args_vlan86="vlan 86"
> > create_args_vlan77="vlan 77"
> > ifconfig_vlan87="inet 185.44.68.92 netmask 255.255.252.0 mtu 1500 down up"
> > ifconfig_vlan87_alias0="link 00:aa:fa:dd:44:55"
> > ifconfig_vlan86="DHCP mtu 1500"
> > ifconfig_vlan86_alias0="link 00:ff:fa:dd:44:55"
> > ifconfig_vlan77="inet 192.168.0.254 netmask 255.255.255.0"
> > ifconfig_vlan77_alias0="link 00:0f:43:48:67:fe"
> > ifconfig_vlan77_ipv6="inet6 2001:470:2345:555::1/64 prefixlen 64 auto_linklocal"
> > ifconfig_igb0="-lro -tso -vlanhwtso mtu 9000 down up"
> >
> > =
> >
> > There are 4 different MAC addresses.
> > System is unavailable after boot until: ifconfig igb0 promisc down up
> >
> > FreeBSD 13, built from fresh sources.
> >
> >
> > Is this a bug, or is this normal?  
> 
> 
> Which chip?  Please post uname -a or git rev.

igb0@pci0:8:0:0: class=0x02 rev=0x03 hdr=0x00 vendor=0x8086 device=0x1539 subvendor=0x1849 subdevice=0x1539
vendor = 'Intel Corporation'
device = 'I211 Gigabit Network Connection'
class  = network
subclass   = ethernet

514a095250b178ad81319bf8c30938c3f08dd16f
A few-days-old build from the 13 branch.


> Why do you have “down up” in the stateful config?  Please remove
> “down” and the MTU change and report back your findings.

It's from some posts that recommend using
ifconfig em0 mtu 2345 down up
to force the MTU change to apply.




Re: Bug in MAC filter on IGB/if_vlan?

2021-11-09 Thread Ryan Stone
You're trying to use different MACs with each vlan.  I don't believe
that this is really a supported configuration in our stack, although
putting the interface into promiscuous mode I guess works.



[Bug 253328] add IPv6 6rd supported in if_stf

2021-11-09 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=253328

depeo  changed:

           What    |Removed    |Added
 ---------------------------------------------------
                CC |           |dan...@flygpost.com

--- Comment #1 from depeo  ---
This patch only seems to allow setting /32 as a netmask.

If this patch is being added, it would be great if it could be modified
to allow netmasks other than /32.

If you have a /64 network from the ISP and want to subnet it into smaller
networks and use nat66 in pf, you need to set a netmask other than /32 on
the 6rd interface, for example /128.



Re: Bug in MAC filter on IGB/if_vlan?

2021-11-09 Thread Kevin Bowling
On Sat, Nov 6, 2021 at 5:03 PM Rozhuk Ivan  wrote:

> Hi!
>
> I have in rc.conf:
>
> =
> vlans_igb0="vlan77 vlan86 vlan87"
> create_args_vlan87="vlan 87"
> create_args_vlan86="vlan 86"
> create_args_vlan77="vlan 77"
> ifconfig_vlan87="inet 185.44.68.92 netmask 255.255.252.0 mtu 1500 down up"
> ifconfig_vlan87_alias0="link 00:aa:fa:dd:44:55"
> ifconfig_vlan86="DHCP mtu 1500"
> ifconfig_vlan86_alias0="link 00:ff:fa:dd:44:55"
> ifconfig_vlan77="inet 192.168.0.254 netmask 255.255.255.0"
> ifconfig_vlan77_alias0="link 00:0f:43:48:67:fe"
> ifconfig_vlan77_ipv6="inet6 2001:470:2345:555::1/64 prefixlen 64 auto_linklocal"
> ifconfig_igb0="-lro -tso -vlanhwtso mtu 9000 down up"
>
> =
>
> There are 4 different MAC addresses.
> System is unavailable after boot until: ifconfig igb0 promisc down up
>
> FreeBSD 13, built from fresh sources.
>
>
> Is this a bug, or is this normal?


Which chip?  Please post uname -a or git rev.

Why do you have “down up” in the stateful config?  Please remove “down”
and the MTU change and report back your findings.
