Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andy Lemin
Hi Stuart,

Seeing as it seems like everyone is too busy, my workaround (not queueing
some flows on interfaces with a queue defined) seems of no interest, and my
current hack of using queueing on VLAN interfaces is a very incomplete and
restrictive workaround, would you please be so kind as to provide me with a
starting point in the source code, and the variable names to concentrate
on, where I can start tracing from beginning to end for changing the scale
from bits to bytes?

Thanks :)
Andy

On 14 Sep 2023, at 19:34, Andrew Lemin wrote:
> On Thu, Sep 14, 2023 at 7:23 PM Andrew Lemin wrote:
>> On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson wrote:
>>> On 2023-09-13, Andrew Lemin wrote:
>>>> I have noticed another issue while trying to implement a 'prio'-only
>>>> workaround (using only prio ordering for inter-VLAN traffic, and HSFC
>>>> queuing for internet traffic);
>>>> It is not possible to have internal inter-vlan traffic be solely
>>>> priority ordered with 'set prio', as the existence of 'queue'
>>>> definitions on the same internal vlan interfaces (required for
>>>> internet flows), demands one leaf queue be set as 'default'. Thus
>>>> forcing all inter-vlan traffic into the 'default' queue despite
>>>> queuing not being wanted, and so unintentionally clamping all internal
>>>> traffic to 4294M just because full queuing is needed for internet
>>>> traffic.
>>>
>>> If you enable queueing on an interface all traffic sent via that
>>> interface goes via one queue or another.
>>
>> Yes, that is indeed the very problem. Queueing is enabled on the inside
>> interfaces, with bandwidth values set slightly below the ISP capacities
>> (multiple ISP links as well), so that all things work well for all
>> internal users.
>> However this means that inter-vlan traffic from client networks to
>> server networks are restricted to 4294Mbps for no reason.. It would make
>> a huge difference to be able to allow local traffic to flow without
>> being queued/restircted.
>>
>>> (also, AIUI the correct place for queues is on the physical interface
>>> not the vlan, since that's where the bottleneck is... you can assign
>>> traffic to a queue name as it comes in on the vlan but I believe the
>>> actual queue definition should be on the physical iface).
>>
>> Hehe yes I know. Thanks for sharing though.
>> I actually have very specific reasons for doing this (queues on the VLAN
>> ifaces rather than phy) as there are multiple ISP connections for
>> multiple VLANs, so the VLAN queues are set to restrict for the relevant
>> ISP link etc.
>
> Also separate to the multiple ISPs (I wont bore you with why as it is not
> relevant here), the other reason for queueing on the VLANs is because it
> allows you to get closer to the 10Gbps figure..
> Ie, If you have queues on the 10Gbps PHY, you can only egress 4294Mbps to
> _all_ VLANs. But if you have queues per-VLAN iface, you can egress
> multiple times 4294Mbps on aggregate.
> Eg, vlans 10,11,12,13 on single mcx0 trunk. 10->11 can do 4294Mbps and
> 12->13 can do 4294Mbps, giving over 8Gbps egress in total on the PHY. It
> is dirty, but like I said, desperate for workarounds... :(
>
>>> "required for internet flows" - depends on your network layout.. the
>>> upstream feed doesn't have to go via the same interface as inter-vlan
>>> traffic.
>>
>> I'm not sure what you mean. All the internal networks/vlans are
>> connected to local switches, and the switches have trunk to the firewall
>> which hosts the default gateway for the VLANs and does inter-vlan
>> routing.
>> So all the clients go through the same VLANs/trunk/gateway for
>> inter-vlan as they do for internet. Strict L3/4 filtering is required on
>> inter-vlan traffic.
>> I am honestly looking for support to recognise that this is a correct,
>> valid and common setup, and so there is a genuine need to allow flows to
>> not be queued on interfaces that have queues (which has many potential
>> applications for many use cases, not just mine - so should be of
>> interest to the developers?).
>>
>> Do you know why there has to be a default queue? Yes I know that traffic
>> excluded from queues would take from the same interface the queueing is
>> trying to manage, and potentially causes congestion. However with 10Gbps
>> networking which is beyond common now, this does not matter when the
>> queues are stuck at 4294Mbps
>>
>> Desperately trying to find workarounds that appeal.. Surely the need is
>> a no brainer, and it is just a case of trying to encourage interest from
>> a developer?
>>
>> Thanks :)



Re: Autoinstall + FDE

2023-09-14 Thread Stuart Henderson
On 2023-09-14, prodejna-radian...@icloud.com  
wrote:
> I was able to auto-install OpenBSD/amd64 except full disk encryption
> (FDE). Is FDE supported in autoinstall?

No, it is not.



-- 
Please keep replies on the mailing list.



autoinstall with full disk encryption

2023-09-14 Thread mipam
Hello,

I was able to auto-install OpenBSD/amd64 except full disk encryption
(FDE). Is FDE supported in autoinstall?

Thanks much!
Boj



Autoinstall + FDE

2023-09-14 Thread prodejna-radian . 09
Hello,

I was able to auto-install OpenBSD/amd64 except full disk encryption
(FDE). Is FDE supported in autoinstall?

Thanks much! J.



My fix for pf.conf after a "block in all"

2023-09-14 Thread Daniele B.
Hello,

I just want to share my solution, taken from "Building Linux and OpenBSD
firewalls" (available on the Internet Archive), to solve the no-traffic
problem caused by the "block in all" statement.

I moved the following statements:

# dns
pass in quick on $all_ifs proto udp from any port domain to any
pass out quick on $all_ifs proto udp from any to any port domain

# icmp
pass in quick inet proto icmp all icmp-type 0 max-pkt-rate 100/10
pass in quick inet proto icmp all icmp-type 3 max-pkt-rate 100/10
pass in quick inet proto icmp all icmp-type 11 max-pkt-rate 100/10

(note icmp-type 3 in particular)

placing them just after "block in all" and before anything else, and this
solved it for me.
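
In other words, the top of the resulting ruleset looks like this (a sketch;
$all_ifs is the macro from the rules above):

block in all

# dns
pass in quick on $all_ifs proto udp from any port domain to any
pass out quick on $all_ifs proto udp from any to any port domain

# icmp
pass in quick inet proto icmp all icmp-type 0 max-pkt-rate 100/10
pass in quick inet proto icmp all icmp-type 3 max-pkt-rate 100/10
pass in quick inet proto icmp all icmp-type 11 max-pkt-rate 100/10

# ... the remaining pass/block rules follow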

Hope this can help anyone.


-- Daniele Bonini



Re: ipsec hardware recommendation

2023-09-14 Thread Marko Cupać
Hi,

thank you for the suggestions; it took me some time to think about them
and reply here.

On Fri, 11 Aug 2023 14:19:44 - (UTC)
Stuart Henderson  wrote:

> If you post your IPsec configuration, perhaps someone can suggest
> whether the choice of ciphers etc could be improved. It can make
> quite a difference.

I have just recently bumped quick enc from aes-128-gcm to aes-256-gcm,
as well as group from modp3072 to ecp256:

ike passive esp transport proto gre from $me to $peer \
  main auth hmac-sha2-256 enc aes-256 group ecp256 lifetime 24h \
  quick enc aes-256-gcm group ecp256 lifetime 8h

I have also increased lifetime from default values because I was
getting quite a lot of INVALID COOKIE messages from isakmpd:

isakmpd[51306]: message_recv: invalid cookie(s) cookiea cookieb
isakmpd[51306]: dropped message from $peer port 500 due to notification
type INVALID_COOKIE


On Sat, 12 Aug 2023 12:17:36 +1000
David Gwynne  wrote:

> The things you can do Right Now(tm) are:
> 
> - upgrade to -current
> 
> the pf purge code has been taken out from under the big kernel lock.
> if you have a lot of pf states, this will give more time to crypto.

I have ~50,000 states during peak time. I can't go -current, but I will
look forward to 7.4. I also read the following articles on undeadly.org:

https://undeadly.org/cgi?action=article;sid=20230807094305
https://undeadly.org/cgi?action=article;sid=20230706115843

Once 7.4 hits, is it expected that changing gre/ipsec to sec(4) could
make a positive difference in throughput on the same hardware?

> - pick faster crypto algorithms

I posted mine above; I would be thankful for the latest recommendations.

> - try wireguard?

I am testing replacing a few of the gre/ipsec tunnels with wg interfaces
on 7.3 at the moment. The main problem I am encountering so far is that
`ospfctl reload` does not seem to pick up wg interfaces newly added to
ospfd.conf. `ospfctl sh int` shows them in DOWN state after the reload,
and no OSPFv2 hello packets are sent until `rcctl restart ospfd`.

It is quite unmaintainable to have to restart ospfd every time wg
interfaces are added to or removed from ospfd.conf. Is there any way
around it? Perhaps this will improve in a later release? Or am I doing
it wrong?
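
For reference, a minimal ospfd.conf fragment of the kind being reloaded
here (interface names are placeholders, not the real config):

area 0.0.0.0 {
	interface gre0
	interface wg0
}

# after adding wg0 above, `ospfctl reload` leaves it in DOWN state;
# only `rcctl restart ospfd` brings it up at the moment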

I have more questions about wireguard but I guess I should better ask
them in another topic.

Thank you in advance,

-- 
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: Change userland core dump location

2023-09-14 Thread Eric Wong
Stuart Henderson wrote:
> On 2023-09-13, Eric Wong  wrote:
> > Theo de Raadt wrote:
> >> There isn't a way.  And I will argue there shouldn't be a way to do that.
> >> I don't see a need to invent such a scheme for one user, when half a 
> >> century
> >> of Unix has no way to do this.
> >> Sorry.
> >
> > I have a different use case than Johannes but looking for a similar feature.
> > Maybe I can convince you :>
> >
> > For background, I develop multi-process daemons and OpenBSD is
> > the only platform I'm noticing segfaults on[1].
> >
> > The lack of PIDs in the core filenames means they can get
> > clobbered in parallel scenarios and I lose useful information.
> >
> > Sometimes, daemons run in / (or another unwritable directory);
> > and the core dump can't get written, at all.
> 
> If the daemons are changing uid, read about kern.nosuidcoredump
> in sysctl(8) (set the sysctl, mkdir /var/crash/progname, and
> it will write to $pid.core).

They aren't, they're all per-user.  I'm seeing core files from a
heavily-parallelized test suite[1].  Some processes can chdir to
/, some stay in their current dir, and some chdir into
short-lived temporary directories.
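
(For completeness, the suggested setup amounts to something like the
following sketch - assuming the mode-3 behaviour described in sysctl(8),
with "mydaemon" as a placeholder name - though as above it only applies
to processes that changed uid:)

# as root
sysctl kern.nosuidcoredump=3
echo kern.nosuidcoredump=3 >> /etc/sysctl.conf   # persist across reboots
mkdir /var/crash/mydaemon
# a crashing uid-changing "mydaemon" process should then dump to
# /var/crash/mydaemon/$pid.core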

Thanks.

[1] The good news is the test suite passes; but the lone core dump I
sometimes get tells me it's in the Perl destructor sequence.
I've been adding `END {}' blocks and explicit undefs but still
occasionally see a perl.core file after a run.  And even if
I don't see that file after a run, I wouldn't know whether a core
dump failed in / or in a temporary directory.



Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andrew Lemin
On Thu, Sep 14, 2023 at 7:23 PM Andrew Lemin  wrote:

>
>
> On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson <
> stu.li...@spacehopper.org> wrote:
>
>> On 2023-09-13, Andrew Lemin  wrote:
>> > I have noticed another issue while trying to implement a 'prio'-only
>> > workaround (using only prio ordering for inter-VLAN traffic, and HSFC
>> > queuing for internet traffic);
>> > It is not possible to have internal inter-vlan traffic be solely
>> priority
>> > ordered with 'set prio', as the existence of 'queue' definitions on the
>> > same internal vlan interfaces (required for internet flows), demands one
>> > leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
>> > the 'default' queue despite queuing not being wanted, and so
>> > unintentionally clamping all internal traffic to 4294M just because full
>> > queuing is needed for internet traffic.
>>
>> If you enable queueing on an interface all traffic sent via that
>> interface goes via one queue or another.
>>
>
> Yes, that is indeed the very problem. Queueing is enabled on the inside
> interfaces, with bandwidth values set slightly below the ISP capacities
> (multiple ISP links as well), so that all things work well for all internal
> users.
> However this means that inter-vlan traffic from client networks to server
> networks are restricted to 4294Mbps for no reason.. It would make a huge
> difference to be able to allow local traffic to flow without being
> queued/restircted.
>
>
>>
>> (also, AIUI the correct place for queues is on the physical interface
>> not the vlan, since that's where the bottleneck is... you can assign
>> traffic to a queue name as it comes in on the vlan but I believe the
>> actual queue definition should be on the physical iface).
>>
>
> Hehe yes I know. Thanks for sharing though.
> I actually have very specific reasons for doing this (queues on the VLAN
> ifaces rather than phy) as there are multiple ISP connections for multiple
> VLANs, so the VLAN queues are set to restrict for the relevant ISP link etc.
>

Also, separate from the multiple ISPs (I won't bore you with why, as it is
not relevant here), the other reason for queueing on the VLANs is that it
lets you get closer to the 10Gbps figure.
I.e., if you have queues on the 10Gbps PHY, you can only egress 4294Mbps
to _all_ VLANs. But if you have queues per VLAN iface, you can egress
multiples of 4294Mbps in aggregate.
E.g. with vlans 10,11,12,13 on a single mcx0 trunk, 10->11 can do 4294Mbps
and 12->13 can do 4294Mbps, giving over 8Gbps egress in total on the PHY.
It is dirty, but like I said, I am desperate for workarounds... :(
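
To make that concrete, a rough pf.conf sketch of that kind of layout
(interface names, queue names and bandwidth figures are illustrative only,
not my actual config):

table <rfc1918> const { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }

# one queue tree per VLAN iface; each tree is limited to a 32-bit
# bandwidth value, but the aggregate across VLANs can exceed 4294M
queue lan10 on vlan10 bandwidth 4000M
queue  lan10_isp parent lan10 bandwidth 900M           # ~ ISP A uplink
queue  lan10_def parent lan10 bandwidth 3100M default  # mandatory leaf

queue lan11 on vlan11 bandwidth 4000M
queue  lan11_isp parent lan11 bandwidth 450M           # ~ ISP B uplink
queue  lan11_def parent lan11 bandwidth 3550M default  # mandatory leaf

# internet-bound flows are steered into the shaped leaves; everything
# else, including inter-VLAN traffic, falls into the 'default' leaf and
# is therefore still subject to the 32-bit cap
match out on vlan10 to !<rfc1918> set queue lan10_isp
match out on vlan11 to !<rfc1918> set queue lan11_isp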


>
>
>>
>> "required for internet flows" - depends on your network layout.. the
>> upstream feed doesn't have to go via the same interface as inter-vlan
>> traffic.
>
>
> I'm not sure what you mean. All the internal networks/vlans are connected
> to local switches, and the switches have trunk to the firewall which hosts
> the default gateway for the VLANs and does inter-vlan routing.
> So all the clients go through the same VLANs/trunk/gateway for inter-vlan
> as they do for internet. Strict L3/4 filtering is required on inter-vlan
> traffic.
> I am honestly looking for support to recognise that this is a correct,
> valid and common setup, and so there is a genuine need to allow flows to
> not be queued on interfaces that have queues (which has many potential
> applications for many use cases, not just mine - so should be of interest
> to the developers?).
>
> Do you know why there has to be a default queue? Yes I know that traffic
> excluded from queues would take from the same interface the queueing is
> trying to manage, and potentially causes congestion. However with 10Gbps
> networking which is beyond common now, this does not matter when the queues
> are stuck at 4294Mbps
>
> Desperately trying to find workarounds that appeal.. Surely the need is a
> no brainer, and it is just a case of trying to encourage interest from a
> developer?
>
> Thanks :)
>


Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andrew Lemin
On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson 
wrote:

> On 2023-09-13, Andrew Lemin  wrote:
> > I have noticed another issue while trying to implement a 'prio'-only
> > workaround (using only prio ordering for inter-VLAN traffic, and HSFC
> > queuing for internet traffic);
> > It is not possible to have internal inter-vlan traffic be solely priority
> > ordered with 'set prio', as the existence of 'queue' definitions on the
> > same internal vlan interfaces (required for internet flows), demands one
> > leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
> > the 'default' queue despite queuing not being wanted, and so
> > unintentionally clamping all internal traffic to 4294M just because full
> > queuing is needed for internet traffic.
>
> If you enable queueing on an interface all traffic sent via that
> interface goes via one queue or another.
>

Yes, that is indeed the very problem. Queueing is enabled on the inside
interfaces, with bandwidth values set slightly below the ISP capacities
(there are multiple ISP links as well), so that everything works well for
all internal users.
However, this means that inter-vlan traffic from client networks to server
networks is restricted to 4294Mbps for no reason. It would make a huge
difference to be able to allow local traffic to flow without being
queued/restricted.


>
> (also, AIUI the correct place for queues is on the physical interface
> not the vlan, since that's where the bottleneck is... you can assign
> traffic to a queue name as it comes in on the vlan but I believe the
> actual queue definition should be on the physical iface).
>

Hehe, yes I know. Thanks for sharing though.
I actually have very specific reasons for doing this (queues on the VLAN
ifaces rather than the phy), as there are multiple ISP connections for
multiple VLANs, so the VLAN queues are set to restrict to the relevant ISP
link's rate, etc.


>
> "required for internet flows" - depends on your network layout.. the
> upstream feed doesn't have to go via the same interface as inter-vlan
> traffic.


I'm not sure what you mean. All the internal networks/vlans are connected
to local switches, and the switches have a trunk to the firewall, which
hosts the default gateway for the VLANs and does inter-vlan routing.
So all the clients go through the same VLANs/trunk/gateway for inter-vlan
traffic as they do for internet. Strict L3/4 filtering is required on
inter-vlan traffic.
I am honestly looking for support in recognising that this is a correct,
valid and common setup, and that there is therefore a genuine need to
allow flows to not be queued on interfaces that have queues (which has
many potential applications beyond my own use case - so it should be of
interest to the developers?).

Do you know why there has to be a default queue? Yes, I know that traffic
excluded from queues would take from the same interface the queueing is
trying to manage, potentially causing congestion. However, with 10Gbps
networking, which is beyond common now, this does not matter when the
queues are stuck at 4294Mbps.

I am desperately trying to find workarounds that appeal. Surely the need
is a no-brainer, and it is just a case of trying to encourage interest
from a developer?

Thanks :)


Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andrew Lemin
On Wed, Sep 13, 2023 at 8:22 PM Stuart Henderson 
wrote:

> On 2023-09-12, Andrew Lemin  wrote:
> > A, thats clever! Having bandwidth queues up to 34,352M would
> definitely
> > provide runway for the next decade :)
> >
> > Do you think your idea is worth circulating on tech@ for further
> > discussion? Queueing at bps resolution is rather redundant nowadays, even
> > on the very slowest links.
>
> tech@ is more for diffs or technical questions rather than not-fleshed-out
> quick ideas. Doing this would solve some problems with the "just change it
> to 64-bit" mooted on the freebsd-pf list (not least with 32-bit archs),
> but would still need finding all the places where the bandwidth values are
> used and making sure they're updated to cope.
>
>
Yes, good point :) I am not in a position to undertake this myself at the
moment.
If none of the generous developers feel inclined to do this despite the
broad value, I might have a go myself at some point (probably not until
next year, sadly).

"just change it to 64-bit" mooted on the freebsd-pf list - I have been
unable to find this conversation. Do you have a link?
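
(For context, the arithmetic behind the bits-vs-bytes idea: pf.conf
bandwidth specs currently top out at 4294M because the value is held in
32 bits as bits per second; keeping the same 32 bits but counting bytes
per second raises the ceiling eightfold:)

$ echo '4294 * 8' | bc
34352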


>
> --
> Please keep replies on the mailing list.
>
>