Thinkpad x260 not connecting to network
ifconfig output for iwm0:

iwm0: flags=808847 mtu 1500
        lladdr 44:85:00:14:a4:06
        index 1 priority 4 llprio 3
        groups: wlan
        media: IEEE802.11 autoselect
        status: no network
        ieee80211: nwid sharynmikealbie wpakey wpaprotos wpa2 wpaakms psk wpaciphers ccmp wpagroupcipher ccmp

Contents of hostname.iwm0:

join sharynmikealbie wpakey i-triple-checked-the-pw-i-swear
inet autoconf

I've run fw_update and done many searches and can't figure out why this doesn't work. Any help would be greatly appreciated. Thanks in advance.
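When iwm0 reports "status: no network", it usually means the interface did not find the configured nwid in a scan. A few standard diagnostic commands that may help narrow this down (interface name and nwid taken from the post above; none of this is a confirmed fix):

```
# Check whether the access point is visible at all
ifconfig iwm0 scan

# Re-run the interface configuration by hand and watch for errors
sh /etc/netstart iwm0

# Look for firmware load or association errors
dmesg | grep iwm
```

If the nwid never shows up in the scan output, the problem is on the radio/AP side (band, channel, hidden SSID) rather than the key.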
Re: efiboot: change default partition from hd0a
Hello,

On Fri, 1 Dec 2023 11:21:22 -0800 Johnathan Cobden-Nolan wrote:

> I have installed OpenBSD on hd0l: in my case it is for multi-booting,
> but I imagine there are other use cases where boot and/or root are
> installed on partitions other than 'a'.
>
> This is a UEFI system so I've installed the efi bootloader which I am
> able to execute. The bootloader first complains that there is no
> hd0a:/etc/boot.conf. This is expected, since my install is at hd0l.
> Since there is no boot.conf being read, it doesn't know where to try
> booting: I am only able to boot the OS by typing "boot hd0l:/bsd".

If you just want to avoid the typing, you can put "set device hd0l" in /etc/boot.conf on the 'a' partition.

> This is not the end of the world, but it feels like it should be possible
> to have a boot.conf somewhere other than the 'a' partition. Is it? Is
> it possible to have a conf file in the EFI partition alongside the
> bootloader itself?
>
> Thanks,
> Johnathan
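Spelling the suggestion out, a minimal /etc/boot.conf on the 'a' partition might look like this (a sketch; adjust the device name to your own layout):

```
# hd0a:/etc/boot.conf - read by the bootloader at startup, even though
# the installation itself lives on hd0l
set device hd0l
```

With "set device" pointing at hd0l, a plain "boot" should then load hd0l:/bsd without any typing at the boot> prompt.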
weird rdiff-backup doc
Hello,

7.4, rdiff-backup.

After the upgrade to 7.4, rdiff-backup invited me to update my outdated command line to *the new one*. The puzzle was not so easy to solve: "rdiff-backup --new --help" suggested one mix of options, while "man rdiff-backup" gave out another set, plus two examples, one with the [kind of operation] declared just after rdiff-backup, the other with the [kind of operation] declared just after the option list. A little overwhelming: when you make a mistake, the shell shows off the *good options*, suggesting among others --new, --nonew, etc. (not recognized), and fails to list all the various --except options. I'll spare you the options listed by "rdiff-backup backup --help". In the end, after 10 minutes of tries, I was able to launch my backup.

== Nowarez Market
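For anyone hitting the same puzzle: the new-style invocation puts the kind of operation right after the command name. A rough sketch (paths are placeholders, and options vary by version, so check "rdiff-backup backup --help" on your own system):

```
# old style (now deprecated):
#   rdiff-backup /home/me user@host::/backups/me

# new style: operation first, then source, then destination
rdiff-backup backup /home/me user@host::/backups/me
```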
efiboot: change default partition from hd0a
I have installed OpenBSD on hd0l: in my case it is for multi-booting, but I imagine there are other use cases where boot and/or root are installed on partitions other than 'a'. This is a UEFI system so I've installed the efi bootloader which I am able to execute. The bootloader first complains that there is no hd0a:/etc/boot.conf. This is expected, since my install is at hd0l. Since there is no boot.conf being read, it doesn't know where to try booting: I am only able to boot the OS by typing "boot hd0l:/bsd". This is not the end of the world, but it feels like it should be possible to have a boot.conf somewhere other than the 'a' partition. Is it? Is it possible to have a conf file in the EFI partition alongside the bootloader itself? Thanks, Johnathan
Re: termtypes.master glitch in building -current
This is not new. From time to time, manual crossover build steps occur. We don't build them into the tree, because that turns into a future burden.

Eric Grosse wrote:
> When I've built -current on several machines recently, the procedure dies at
>
> ===> share/termtypes
> /usr/bin/tic -C -x /usr/src/share/termtypes/termtypes.master > termcap
> /usr/bin/tic -x -o terminfo /usr/src/share/termtypes/termtypes.master
> "/usr/src/share/termtypes/termtypes.master", line 4429, terminal 'mintty': error writing /usr/obj/share/termtypes/terminfo/m/mintty
>
> because at that point in the build tic has been recompiled but still lives
> in /usr/obj/usr.bin/tic/tic.
>
> A manual workaround is to install that new tic and restart the build.
> A better fix would be to change /usr/src/share/termtypes/Makefile from
> TIC=/usr/bin/tic to TIC=/usr/obj/usr.bin/tic/tic.
> Or set PATH in the Makefile to have /usr/obj/usr.bin/tic before /usr/bin.
> Or change the build sequence for when tic is installed.
> Or just expect anyone building -current is competent to debug for themselves.
termtypes.master glitch in building -current
When I've built -current on several machines recently, the procedure dies at

===> share/termtypes
/usr/bin/tic -C -x /usr/src/share/termtypes/termtypes.master > termcap
/usr/bin/tic -x -o terminfo /usr/src/share/termtypes/termtypes.master
"/usr/src/share/termtypes/termtypes.master", line 4429, terminal 'mintty': error writing /usr/obj/share/termtypes/terminfo/m/mintty

because at that point in the build tic has been recompiled but still lives in /usr/obj/usr.bin/tic/tic.

A manual workaround is to install that new tic and restart the build. A better fix would be to change /usr/src/share/termtypes/Makefile from TIC=/usr/bin/tic to TIC=/usr/obj/usr.bin/tic/tic. Or set PATH in the Makefile to have /usr/obj/usr.bin/tic before /usr/bin. Or change the build sequence for when tic is installed. Or just expect anyone building -current is competent to debug for themselves.
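The Makefile change suggested above would be a one-line edit in /usr/src/share/termtypes/Makefile (an untested sketch, following the workaround described in the post):

```
# Use the freshly built tic from the object directory instead of the
# installed /usr/bin/tic, which may be too old for termtypes.master
TIC=/usr/obj/usr.bin/tic/tic
```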
7.4 pfsync possible state update loop?
Hi List,

I just updated two carp/pfsync firewalls from 7.3 to 7.4. After updating the second box I see a massive increase in traffic on the sync interface. I have now reproduced this with another pair of firewalls: same thing.

Both firewalls have three physical interfaces: external, internal and sync. The sync interface is connected directly via ethernet cable and has an IP address.

Configuration of hostname.pfsync0:

syncdev em2
up

The way I updated these boxes, let's call them primary and secondary:

1. update secondary to 7.4, including the change in hostname.pfsync0
2. change hostname.carp0 to promote to master - reboot
3. secondary is now master
4. update primary to 7.4 => traffic on syncif increases

I have tried so far - without any improvement:

- reboot both machines one after another
- promote primary again
- ifconfig pfsync0 down; pfctl -F states; ifconfig pfsync0 up

I think they might be caught in some kind of loop updating the states between each other. Could someone point me to how I could diagnose further?

Kind Regards,

Christian
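A couple of standard diagnostics that might show whether the same state updates are bouncing between the two boxes (the interface name em2 is taken from the post; pfsync is IP protocol 240):

```
# Watch decoded state updates on the pfsync pseudo-interface,
# as suggested in the pfsync(4) man page
tcpdump -s 1500 -vpni pfsync0

# Watch raw pfsync packets on the physical sync interface
tcpdump -n -i em2 ip proto 240

# Compare state counts and pfsync statistics on both firewalls
pfctl -si
netstat -s -p pfsync
```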
Re: pf queues
> On Thu, Nov 30, 2023 at 03:55:49PM +0300, 4 wrote:
>>
>> "cbq can entirely be expressed in it" ok. so how do i set priorities for
>> queues in hfsc for my local pf-router (not for a router above that knows
>> nothing about my existence; tos is an absolutely unviable concept in the
>> real world)? i don't see a word about it in man pf.conf
>>
> In my reply to the initial message in this thread, I gave you the references
> that spell this out fairly clearly.
> And you're dead wrong about the pf.conf man page. Unless of course you
> are trying to look this up on a system that still runs something that
> is by now roughly a decade out of date.

i don't understand what you're pointing at, because "prio" and "hfsc" are different, independent mechanisms, not two parts of one whole. in cbq these were two parts of the same mechanism: cbq could simultaneously slice and prioritize traffic.
Re: pf queues
On 2023/12/01 15:57, 4 wrote:
> >But CBQ doesn't help anyway, you still have this same problem.
> the problem when both from below and from above you can be told to "go and
> fuck yourself" can't be solved, but cbq gives us the two mechanisms we need:
> priorities and traffic restriction. nothing more can be done, but anything
> less will not suit us.

If you still don't see how priorities in CBQ can't help, there's no point in me replying any more.
Re: pf queues
> On 2023-12-01, 4 wrote:

>I don't know why you are going on about SMT here.

i'm talking about not sacrificing functionality for the sake of hypothetical performance. the slides say that using queues degrades performance by 10%, and you're saying there won't be anything in the queues until an overload event occurs. as i understand it, these are interrelated things ;)

>And there is no way to tell when the upstream router has forwarded the packets.

and we don't need to know that. the only way to find out when an overload "occurred" is to set some threshold value lower than the theoretical bandwidth of the interface and watch for the actual speed on the interface exceeding this threshold. only then do we put packets in queues, and not earlier (so that our slaves don't get too tired, right?). but this has nothing to do with when overload actually happens, as opposed to in our imagination. in most cases there is no link between what we have assumed and what is actually happening (because there is no feedback. yes, there is ecn, but it doesn't work). i don't like this algorithm because it's a non-working algorithm.

but an algorithm with priorities, where we ALWAYS (and not only when an imaginary overload has occurred) put packets in the queues, where we ALWAYS send packets with a higher priority first, and all the others only when there are no packets with a higher priority in the queue: this algorithm works. i.e. we always use queues, despite the loss of 10% performance. what happens on the overloaded upstream router is not our problem. our area of responsibility is to put the packets more important to us into our network card first. but this requires a constantly working (and not only when an imaginary overload has occurred) priority mechanism. that's why i say that "prio" is much more useful than "hfsc".

but it is also possible that traffic as important to us as ssh can take our entire channel, and we don't want that. that's exactly where we need to limit the maximum queue speed. there may also be a situation where at least some number of packets should be guaranteed to go through some queue, icmp for example, and here we need hfsc, since priorities alone cannot solve this problem. or we need cbq, which could do all of it at once. and i exist for all this to work well; it is i who must plan all this competently and prudently: this is my area of responsibility. and look, i need priorities and speed limits for this, but i don't need to know how the upstream router is doing. if it has problems, it will send me confirmations of receipt less often, or it will simply discard my packets. but that's its business, not mine. and in the same way my router will deal with clients on my local network.

>BTW, HFSC with bandwidth and max set to the same values should be the same
>thing as CBQ.

except that hfsc does not have a priority mechanism.

ps:
>But CBQ doesn't help anyway, you still have this same problem.

the problem when both from below and from above you can be told to "go and fuck yourself" can't be solved, but cbq gives us the two mechanisms we need: priorities and traffic restriction. nothing more can be done, but anything less will not suit us.
relayd checks and uses disabled hosts
Hi,

I have strange behavior on my relayd servers. Relayd continues checking disabled hosts; I see it in the backend server's logs. If relayd detects a down -> up of the service, it re-adds the host to the table and passes traffic to the disabled host. The status remains disabled.

Setup is with redirects:

table <ldap> { ldap1 retry 2, ldap2 retry 2 }

redirect ldap {
        listen on $ldap_addr port ldaps
        pftag RELAYD_ldap
        forward to <ldap> port 1636 mode least-states
        check icmp
        check script "/usr/local/sbin/check_ldap_c"
        demote 0relay
        timeout 2000
        session timeout 432600
}

On the load balancer hosts I see:

# pfctl -a 'relayd/ldap' -t ldap -Tshow
ldap1_IP
ldap2_IP

If I do "relayctl host dis ldap2" I see in the logs:

Dec 1 13:11:24 relayd[59724]: table ldap: 0 added, 1 deleted, 0 changed, 0 killed

# relayctl show sum | grep ldap
1 redirect ldap active
1 table ldap:1636 active (1 hosts)
1 host ldap1 100.00% up
2 host ldap2 disabled

# pfctl -a 'relayd/ldap' -t ldap -Tshow
ldap1_IP (only)

So far, so good. However, when I actually close the service on server ldap2 I see:

Dec 1 13:12:27 relayd[42873]: host ldap2, check script (766ms,script failed), state up -> down, availability 98.29%
Dec 1 13:12:27 relayd[71859]: table ldap: 0 added, 0 deleted, 0 changed, 0 killed

Now, when I restart the server or the service on ldap2:

Dec 1 13:17:08 relayd[42873]: host ldap2, check script (987ms,script ok), state down -> up, availability 98.28%
Dec 1 13:17:12 relayd[71859]: table ldap: 1 added, 0 deleted, 0 changed, 0 killed

# relayctl show sum | grep ldap2
2 host ldap2 disabled

The host is shown as disabled, but it has been added to the table:

# pfctl -a 'relayd/ldap' -t ldap -Tshow
ldap1_IP
ldap2_IP

Disabling it again changes nothing:

# relayctl host dis ldap2
command succeeded.
# pfctl -a 'relayd/ldap' -t ldap -Tshow
ldap1_IP
ldap2_IP

During this whole time while ldap2 is disabled, I keep seeing connects from the load balancer in ldap2's logs (from the check script), even though the host is disabled. When the check sees the service go down -> up, it re-enables the host although the summary still shows it as disabled. Client connections also arrive now, apart from the check script.

If I re-enable the disabled host:

# relayctl host en ldap2
command succeeded
Dec 1 13:24:35 relayd[99810]: host ldap2, check script (796ms,script ok), state unknown -> up, availability 100.00%
Dec 1 13:24:39 relayd[59724]: table ldap: 0 added, 0 deleted, 0 changed, 0 killed

I checked cvsweb but can't see any related change in relayd...

In August, on 7.3, this didn't happen.

Giannis
Re: relayd checks and uses disabled hosts
On 01/12/2023 13:30, Kapetanakis Giannis wrote:
> I checked cvsweb but can't see any related change in relayd...
>
> In August, on 7.3, this didn't happen.

Not relevant: I'm not on -current, I run release.

G
Re: pf queues
On Fri, 1 Dec 2023 04:56:40 +0300 4 wrote:
> match proto icmp set prio(6 7) queue(6-fly 7-ack)
> how is this supposed to work at all? i.e. packets are placed both in
> prio's queues 6/7(in theory priorities and queues are the same
> thing), and in hsfc's queues 6-fly/7-ack at once?

I am not sure I understand what you don't understand here. Straight from the manpage:

https://man.openbsd.org/pf.conf#set~2
If two priorities are given, TCP ACKs with no data payload and packets which have a TOS of lowdelay will be assigned to the second one.

https://man.openbsd.org/pf.conf#set~3
If two queues are given, packets which have a TOS of lowdelay and TCP ACKs with no data payload will be assigned to the second one.

ICMP is not the best example, but the syntax works. I guess the rule you quoted results in behaviour where all the ICMP packets get a priority of 6 and get assigned to queue 6-fly, even though the idea was to have requests with priority 6 assigned to queue 6-fly, and replies with priority 7 to queue 7-ack. But then again, perhaps it works the latter way, if ICMP replies have a TOS of lowdelay. If this were TCP, payload would get priority 6 and be assigned to queue 6-fly, while ACKs would get priority 7 and be assigned to queue 7-ack.

Anyway, after years of usage, and a lot of frustration in the beginning, I find the current approach more flexible: in HFSC, queue and priority have to be the same, while in current pf we can set it up to be exactly like HFSC, but also have different priorities within the same queue, or a different queue for the same priority. At this point I only miss the ability to see prio values somewhere in monitoring tools like systat.

The only way to get the answers is to test, write the ruleset wisely, and observe systat. If someone knows of other ways, please let me know. I am by no means "an expert on pf queueing", just a guy who has been trying to tame his employer's network for quite some time now.

Regards,

--
Before enlightenment - chop wood, draw water.
After enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/
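For the archives, the combination discussed above might look roughly like this in pf.conf (an untested sketch; the interface name and bandwidth figures are invented for illustration):

```
# HFSC queues on a hypothetical 100M uplink
queue main on em0 bandwidth 100M max 100M
queue bulk parent main bandwidth 80M default
queue ack  parent main bandwidth 20M

# payload gets prio 6 / queue bulk; empty TCP ACKs and
# lowdelay packets get prio 7 / queue ack
match out on em0 proto tcp queue (bulk, ack) set prio (6, 7)
```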
Re: pf queues
On 2023-12-01, 4 wrote: >> On 2023-11-30, 4 wrote: >>> we can simply calculate such a basic thing as the flow rate by dividing the >>> number of bytes in the past packets by the time. we can control the speed >>> through delays in sending packets. this is one side of the question. as for >>> the sequence, priorities work here. yes, we will send packets with a higher >>> priority until there are no such packets left in a queue, and then we will >>> send packets from queues with a lower priority. priorities are a sequence, >>> not a share of the total piece of the pie, and we don't need to know >>> anything about the pie. > >> But unless you are sending more traffic than the *interface* speed, >> you will be sending it out on receipt, there won't be any delays in >> sending packets to the next-hop modem/router. > >> There won't *be* any packets in the queue on the PF machine to send in >> priority order. > > ok. that is, for the sake of some 10% performance(not so long ago Theo turned > off smt, and wanted to remove its support altogether. but smt it's > significantly more than 10% of performance) you use queues only when the > channel overload, that you are not able to reliably detect, but only assume > about its occurrence? there's nothing easier! just put packets in the queue > at all times :D I don't know why you are going on about SMT here. But some workloads are demonstrably *slower* if SMT is used (the scheduler just treats them as full cores, when it would probably be better to only permit threads of the same process to share SMTs on the same core). And of course there are the known problems that became very apparent with the CPU vulnerabilities that became widely known *after* OpenBSD disabled SMT by default. But anyway back to packets. The only constraint on transmitting packets from the OpenBSD machine is the network interface facing the next-hop router. Say that is a 1Gbps interface. Say you have 200Mbps of traffic to forward from other interfaces. 
And that the upstream connection can handle something between 100Mbps and 200Mbps but you don't know how much. And there is no way to tell when the upstream router has forwarded the packets.

BTW, HFSC with bandwidth and max set to the same values should be the same thing as CBQ. But CBQ doesn't help anyway, you still have this same problem.

The only thing I can think of that might possibly help is to delay all packets ("set delay") and use prio. I haven't tested to see if that actually works, but maybe.

If you want real control on the PF box, you need to cap to the *minimum* bandwidth and lose anything above that. Or cap somewhere between the two, picked as a trade-off between lost capacity and the shaping not always doing anything useful.
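The "cap to the minimum" option sketched above could be expressed like this (an untested pf.conf fragment; the figures are invented for illustration):

```
# Cap outbound traffic below the worst-case upstream rate so that
# queueing decisions happen on this box rather than silently on the
# ISP's router; anything above the cap is delayed or dropped here.
queue upstream on em0 bandwidth 95M max 95M default
```

Combined with "set prio" on the filter rules, this makes the transmit queue on the PF box the actual bottleneck, which is the only place where priorities can do their work.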
Re: pf queues
> On 2023-11-30, 4 wrote:
>> we can simply calculate such a basic thing as the flow rate by dividing the
>> number of bytes in the past packets by the time. we can control the speed
>> through delays in sending packets. this is one side of the question. as for
>> the sequence, priorities work here. yes, we will send packets with a higher
>> priority until there are no such packets left in a queue, and then we will
>> send packets from queues with a lower priority. priorities are a sequence,
>> not a share of the total piece of the pie, and we don't need to know
>> anything about the pie.

> But unless you are sending more traffic than the *interface* speed,
> you will be sending it out on receipt, there won't be any delays in
> sending packets to the next-hop modem/router.

> There won't *be* any packets in the queue on the PF machine to send in
> priority order.

ok. that is, for the sake of some 10% performance (not so long ago Theo turned off smt, and wanted to remove its support altogether, but smt is significantly more than 10% of performance) you use queues only when the channel is overloaded, which you are not able to reliably detect, but only assume has occurred? there's nothing easier! just put packets in the queue at all times :D
Re: pf queues
On 2023-11-30, 4 wrote: > we can simply calculate such a basic thing as the flow rate by dividing the > number of bytes in the past packets by the time. we can control the speed > through delays in sending packets. this is one side of the question. as for > the sequence, priorities work here. yes, we will send packets with a higher > priority until there are no such packets left in a queue, and then we will > send packets from queues with a lower priority. priorities are a sequence, > not a share of the total piece of the pie, and we don't need to know anything > about the pie. But unless you are sending more traffic than the *interface* speed, you will be sending it out on receipt, there won't be any delays in sending packets to the next-hop modem/router. There won't *be* any packets in the queue on the PF machine to send in priority order.