Re: mount -a ignores NFS record in /etc/fstab
Thank you! I had the same issue and could not understand what I was missing.

On Wed Sep 11 00:59:24 2024, Kirill A. Korinsky wrote:
> On Tue, 10 Sep 2024 23:29:58 +0200,
> Kirill A. Korinsky wrote:
> >
> > 10.36.25.1:/usr/src /usr/src nfs nodev,nosuid 0 0
>
> Here is the issue. This line misses the fs_type. It requires rw, ro, or something.
>
> --
> wbr, Kirill

--
Best regards
Maksim Rodin
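For reference, fstab(5) extracts the mount type from the first comma-separated word of the options field; without rw, ro, sw, or xx there, mount -a has nothing to act on. A corrected version of the line quoted above would look like this (same addresses and options as in the thread, with only rw added):

```
10.36.25.1:/usr/src /usr/src nfs rw,nodev,nosuid 0 0
```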
Re: Options to have relayd add IP to pf?
> We're straying from the original problem, but have you considered sshguard?

No. I initially looked for something like fail2ban, but after some work with it on Ubuntu, and regular problems with its SQLite database, I decided to just try something primitive and clear. All I have to do is change the one-liner to give me the right IP addresses. If I want to do something special with these addresses I only have to change my script.sh file. I am just testing all this and may be missing something critical, but the results look good to me.

On Mon Aug 26 11:06:12 2024, Zé Loff wrote:
> On Mon, Aug 26, 2024 at 11:27:02AM +0300, Maksim Rodin wrote:
> > Hello,
> > Here is my ugly script in testing which uses a postgres table to track
> > bad guys in authlog and pf to lock them forever.
> > ---
> > #! /bin/ksh
> > MAX_RETRIES=2
> > function finish_serving {
> >     echo "Finish serving";
> >     exit 0;
> > }
> > function add_entry {
> >     psql -U ecounter -d ecounter_db -q -c "merge into entry_counter \
> >         as ec using (select '$1' as e) on ec.entry = \
> >         e when matched and ec.count < $MAX_RETRIES then \
> >         update set count = count + 1 when not matched then \
> >         insert (entry, count) values ('$1', 1);";
> >     RESULT=$(psql -U ecounter -d ecounter_db -t -c "select entry from \
> >         entry_counter where entry = '$1' and count >= $MAX_RETRIES;");
> >     if [[ -n $RESULT ]]; then
> >         echo "pfctl add to table $RESULT";
> >         /sbin/pfctl -vvt bad_ips -T add $RESULT;
> >         /sbin/pfctl -vvk $RESULT;
> >         NET=$(echo $RESULT | awk -F. '{print $1 "." $2 ".0.0/16"}');
> >         echo "pfctl add to table $NET";
> >         /sbin/pfctl -vvt bad_ips -T add $NET;
> >         /sbin/pfctl -vvk $NET;
> >         RESULT="";
> >         NET="";
> >     fi
> > }
> > trap finish_serving SIGINT
> > echo Start serving...
> > while read line; do
> >     add_entry $line;
> > done
> > ---
> >
> > And an ugly oneliner to make it do the job in real time:
> > ---
> > tail -fn0 /var/log/authlog | grep -E \
> >     --line-buffered 'Failed password' | grep -Eo \
> >     --line-buffered '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \
> >     | ksh script.sh
> > ---
> >
> > On Sat Aug 24 00:38:11 2024, Joel Carnat wrote:
> > >
> > > > On Aug 23, 2024, at 17:12, Peter N. M. Hansteen wrote:
> > > >
> > > > On Fri, Aug 23, 2024 at 12:54:20PM +0200, Joel Carnat wrote:
> > > >> I have a server which gets flooded with unsolicited HTTP requests. So
> > > >> far, I use relayd filters to identify those requests and block them,
> > > >> at relayd level. It works as they never reach the web server but
> > > >> relayd is still working to block them.
> > > >>
> > > >> I thought of parsing relayd logs to get those IPs and add them to a pf
> > > >> block table, using an automated script.
> > > >
> > > > If the problem is that there are a lot of requests from the same hosts
> > > > coming in rapid-fire, it is possible that state tracking rules with
> > > > overloading could be the thing to try.
> > > >
> > > > The other thing that comes to mind is to put together something that
> > > > parses the logs and adds offenders to a table of addresses that PF
> > > > will block.
> > > >
> > > > Something along the lines of what is described in
> > > > https://nxdomain.no/~peter/forcing_the_password_gropers_through_a_smaller_hole.html
> > > > (also prettified but tracked at
> > > > https://bsdly.blogspot.com/2017/04/forcing-password-gropers-through.html)
> > > > could be what you need (some assembly required, obviously).
> > > >
> > > > - Peter
> > >
> > > Unfortunately, those are not single IP spamming. It looks more like
> > > infected computers and/or computer farms sending individual requests at
> > > "normal" rate. There are just thousands of them.
> > >
> > > The only way to identify them is by looking at User-Agent and/or HTTP
> > > request body. So pf only won't be enough there.
> > >
> > > I thought I could use some matching relayd rules that would tag the
> > > connections so that pf blocks them. But it seems pftag is not made for
> > > this.
> > >
> > > Writing a script and feeding it using syslog is doable. But I hoped I
> > > could use only relayd and pf.
> >
> > --
> > Best regards
> > Maksim Rodin
>
> We're straying from the original problem, but have you considered sshguard?

--
Best regards
Maksim Rodin
Re: Options to have relayd add IP to pf?
Hello,
Here is my ugly script in testing which uses a postgres table to track bad guys in authlog and pf to lock them forever.
---
#! /bin/ksh
MAX_RETRIES=2

function finish_serving {
    echo "Finish serving";
    exit 0;
}

function add_entry {
    psql -U ecounter -d ecounter_db -q -c "merge into entry_counter \
        as ec using (select '$1' as e) on ec.entry = \
        e when matched and ec.count < $MAX_RETRIES then \
        update set count = count + 1 when not matched then \
        insert (entry, count) values ('$1', 1);";
    RESULT=$(psql -U ecounter -d ecounter_db -t -c "select entry from \
        entry_counter where entry = '$1' and count >= $MAX_RETRIES;");
    if [[ -n $RESULT ]]; then
        echo "pfctl add to table $RESULT";
        /sbin/pfctl -vvt bad_ips -T add $RESULT;
        /sbin/pfctl -vvk $RESULT;
        NET=$(echo $RESULT | awk -F. '{print $1 "." $2 ".0.0/16"}');
        echo "pfctl add to table $NET";
        /sbin/pfctl -vvt bad_ips -T add $NET;
        /sbin/pfctl -vvk $NET;
        RESULT="";
        NET="";
    fi
}

trap finish_serving SIGINT
echo Start serving...
while read line; do
    add_entry $line;
done
---

And an ugly oneliner to make it do the job in real time:
---
tail -fn0 /var/log/authlog | grep -E \
    --line-buffered 'Failed password' | grep -Eo \
    --line-buffered '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \
    | ksh script.sh
---

On Sat Aug 24 00:38:11 2024, Joel Carnat wrote:
>
> > On Aug 23, 2024, at 17:12, Peter N. M. Hansteen wrote:
> >
> > On Fri, Aug 23, 2024 at 12:54:20PM +0200, Joel Carnat wrote:
> >> I have a server which gets flooded with unsolicited HTTP requests. So far,
> >> I use relayd filters to identify those requests and block them, at relayd
> >> level. It works as they never reach the web server but relayd is still
> >> working to block them.
> >>
> >> I thought of parsing relayd logs to get those IPs and add them to a pf
> >> block table, using an automated script.
> >
> > If the problem is that there are a lot of requests from the same hosts
> > coming in rapid-fire, it is possible that state tracking rules with
> > overloading could be the thing to try.
> >
> > The other thing that comes to mind is to put together something that
> > parses the logs and adds offenders to a table of addresses that PF will
> > block.
> >
> > Something along the lines of what is described in
> > https://nxdomain.no/~peter/forcing_the_password_gropers_through_a_smaller_hole.html
> > (also prettified but tracked at
> > https://bsdly.blogspot.com/2017/04/forcing-password-gropers-through.html)
> > could be what you need (some assembly required, obviously).
> >
> > - Peter
>
> Unfortunately, those are not single IP spamming. It looks more like infected
> computers and/or computer farms sending individual requests at "normal" rate.
> There are just thousands of them.
>
> The only way to identify them is by looking at User-Agent and/or HTTP
> request body. So pf only won't be enough there.
>
> I thought I could use some matching relayd rules that would tag the
> connections so that pf blocks them. But it seems pftag is not made for this.
>
> Writing a script and feeding it using syslog is doable. But I hoped I could
> use only relayd and pf.

--
Best regards
Maksim Rodin
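Peter's suggestion of state-tracking rules with overload can be sketched roughly as below. The table name and thresholds are hypothetical, not taken from the thread; and as Joel notes, this only catches rapid-fire sources, not thousands of hosts each sending at a "normal" rate:

```
table <http_abusers> persist
block in quick on egress from <http_abusers>
pass in on egress inet proto tcp to (egress) port { www https } \
    keep state (max-src-conn 50, max-src-conn-rate 100/60, \
        overload <http_abusers> flush global)
```

Once a source trips either limit, pf inserts it into the table and flushes its existing states; the table can be inspected with pfctl -t http_abusers -T show.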
doveadm index segfaults after upgrade to 7.5
Hello,
After upgrading the machine to 7.5 amd64, the doveadm command used for indexing mailboxes does not work anymore:

# doveadm -Dvv index -u somemail...@somedom.com '*'
... some usual diagnostic messages ...
May 31 06:33:12 doveadm(somemail...@somedom.com): \
    Debug: Mailbox INBOX: UID 1048: Opened mail because: fts indexing
Segmentation fault

There is also an entry in dovecot.log from when mail indexing was to be done automatically:

May 31 01:30:07 mail dovecot: indexer-worker(somemail...@somedom.com)<18492>: \
    Fatal: master: service(indexer-worker): child 18492 killed \
    with signal 11 (core not dumped - \
    https://dovecot.org/bugreport.html#coredumps - set service \
    indexer-worker { drop_priv_before_exec=yes })

# pkg_info -m | grep dovecot
dovecot-2.3.21v0            compact IMAP/POP3 server
dovecot-fts-xapian-1.7.0    full text search plugin for Dovecot using Xapian
dovecot-ldap-2.3.21v0       LDAP authentication / dictionary support for Dovecot
dovecot-pigeonhole-0.5.21v1 Sieve mail filtering for Dovecot

The last configuration changes in dovecot were made long before the upgrade, and I did not have problems with that configuration on 7.4.

--
Best regards
Maksim Rodin
Re: packet filter silently ignores a rule
Hello!
This was the first thing I checked. But I think there was a deadly combo of two factors:
1) the continuation character
2) the nuance described in man pf.conf: "Care should be taken when commenting out multi-line text: the comment is effective until the end of the entire block."

After continuous experimenting with the rules there are too many commented lines mixed with real config blocks in my pf.conf. I really have to do some cleaning. Thank you everybody for all your help!

On Tue May 21 16:49:00 2024, Steve Williams wrote:
> A lot of Unix configuration files have an issue with the continuation
> character "\" IF THERE IS A SPACE AFTER IT!!
>
> Make sure that the \ is the last character on the line!
>
> S.
>
> On 20/05/2024 11:01 p.m., Maksim Rodin wrote:
> > I solved the problem by copying the entire rule block right after
> > the old one and commenting out the old one.
> >
> > New:
> > pass in on egress inet proto tcp to (egress) port $mail_ports \
> >     keep state (max-src-conn 20, \
> >     max-src-conn-rate 35/300, overload \
> >     flush global) \
> >     rdr-to $mail_server
> >
> > Old:
> > pass in on egress inet proto tcp to (egress) \
> >     port $mail_ports \
> >     keep state (max-src-conn 20, \
> >     max-src-conn-rate 35/300, overload \
> >     flush global) rdr-to $mail_server
> >
> > I only split one line and merged two other lines into one,
> > but I think I did it correctly and I do not see any logical
> > changes in the block.
> >
> > I still cannot understand what happened because there were no
> > uncommented excess lines within the old block.
> >
> > Before copying the entire rule block I even deliberately made
> > a typo in the old rule and checked it with pfctl -nf /etc/pf.conf.
> > PF still acted as if there were no block with the typo at all:
> >
> > pass in on egress inet proto tcp to (egress) \
> >     ort $mail_ports \
> >     keep state (max-src-conn 20, \
> >     max-src-conn-rate 35/300, overload \
> >     flush global) rdr-to $mail_server
> >
> > On Mon May 20 11:43:21 2024, Maksim Rodin wrote:
> > > Hello,
> > > I use OpenBSD 7.5 stable amd64.
> > > I uncommented an old rule and the corresponding macro in pf.conf
> > > which definitely worked when the machine was on version 7.3 and
> > > possibly 7.4.
> > >
> > > After that:
> > > pfctl -nf /etc/pf.conf shows nothing
> > > pfctl -f /etc/pf.conf shows nothing
> > > So Packet Filter seems to be happy with the config as a whole.
> > >
> > > pfctl -vvsr shows the old rules WITHOUT the uncommented one.
> > > pfctl -vvnf /etc/pf.conf warns that the uncommented macro
> > > used in the uncommented rule is NOT used.
> > >
> > > The output of pfctl -vvnf /etc/pf.conf is appended as the pfctl_vvnf file.
> > > The output of pfctl -vvsr is appended as the pfctl_vvsr file.
> > >
> > > Did I miss something when changing the configuration?
> > >
> > > The uncommented section 1 is:
> > > mail_ports = "{ submission imaps }"
> > >
> > > The uncommented section 2 is:
> > > pass in on egress inet proto tcp to (egress) \
> > >     port $mail_ports \
> > >     keep state (max-src-conn 20, \
> > >     max-src-conn-rate 35/300, overload \
> > >     flush global) rdr-to $mail_server
> > >
> > > My whole pf.conf (all uncommented lines):
> > > int_if = "{ vether1 em1 em2 em3 }"
> > > table { 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 \
> > >     169.254.0.0/16 172.16.0.0/12 192.0.2.0/24 \
> > >     192.168.0.0/16 198.18.0.0/15 198.51.100.0/24 \
> > > }
> > > table persist
> > > table persist file "/etc/mail/nospamd"
> > > table persist file "/etc/pf/bad_ips"
> > >
> > > transmission_server = "192.168.1.65"
> > > mail_server = "192.168.1.171"
> > >
> > > mail_ports = "{ submission imaps }"
> > >
> > > block log all
> > > set limit table-entries 100
> > > set block-policy drop
> > > set syncookies adaptive (start 29%, end 15%)
> > > set skip on lo
> > >
> > > match in all scrub (no-df random-id max-mss 1440)
> > > match out on egress inet from (vether1:network) \
> > >     to any nat-to (egress:0)
> > >
> > > block in quick on egress
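The "deadly combo" described in this thread is easy to reproduce. pfctl joins continued lines before stripping comments, so a comment line that happens to end in a backslash silently absorbs the whole continued rule after it. A minimal illustration using the thread's own rule (the comment text is invented for the example):

```
# an old experiment left behind; note this comment ends in a backslash \
pass in on egress inet proto tcp to (egress) \
    port $mail_ports rdr-to $mail_server
```

Both lines of the pass rule become part of the comment, the rule never loads, pfctl -nf reports no error, and the only symptom is the "macro 'mail_ports' not used" warning seen in the thread.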
Re: packet filter silently ignores a rule
I solved the problem by copying the entire rule block right after the old one and commenting out the old one.

New:
pass in on egress inet proto tcp to (egress) port $mail_ports \
    keep state (max-src-conn 20, \
    max-src-conn-rate 35/300, overload \
    flush global) \
    rdr-to $mail_server

Old:
pass in on egress inet proto tcp to (egress) \
    port $mail_ports \
    keep state (max-src-conn 20, \
    max-src-conn-rate 35/300, overload \
    flush global) rdr-to $mail_server

I only split one line and merged two other lines into one, but I think I did it correctly and I do not see any logical changes in the block.

I still cannot understand what happened because there were no uncommented excess lines within the old block.

Before copying the entire rule block I even deliberately made a typo in the old rule and checked it with pfctl -nf /etc/pf.conf. PF still acted as if there were no block with the typo at all:

pass in on egress inet proto tcp to (egress) \
    ort $mail_ports \
    keep state (max-src-conn 20, \
    max-src-conn-rate 35/300, overload \
    flush global) rdr-to $mail_server

On Mon May 20 11:43:21 2024, Maksim Rodin wrote:
> Hello,
> I use OpenBSD 7.5 stable amd64.
> I uncommented an old rule and the corresponding macro in pf.conf
> which definitely worked when the machine was on version 7.3 and
> possibly 7.4.
>
> After that:
> pfctl -nf /etc/pf.conf shows nothing
> pfctl -f /etc/pf.conf shows nothing
> So Packet Filter seems to be happy with the config as a whole.
>
> pfctl -vvsr shows the old rules WITHOUT the uncommented one.
> pfctl -vvnf /etc/pf.conf warns that the uncommented macro
> used in the uncommented rule is NOT used.
>
> The output of pfctl -vvnf /etc/pf.conf is appended as the pfctl_vvnf file.
> The output of pfctl -vvsr is appended as the pfctl_vvsr file.
>
> Did I miss something when changing the configuration?
> > The uncommented section 1 is: > mail_ports = "{ submission imaps }" > > The uncommented section 2 is: > pass in on egress inet proto tcp to (egress) \ > port $mail_ports \ > keep state (max-src-conn 20, \ > max-src-conn-rate 35/300, overload \ > flush global) rdr-to $mail_server > > > My whole pf.conf (all uncommented lines): > int_if = "{ vether1 em1 em2 em3 }" > table { 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 \ >169.254.0.0/16 172.16.0.0/12 192.0.2.0/24 \ >192.168.0.0/16 198.18.0.0/15 198.51.100.0/24 \ > } > table persist > table persist file "/etc/mail/nospamd" > table persist file "/etc/pf/bad_ips" > > transmission_server = "192.168.1.65" > mail_server = "192.168.1.171" > > mail_ports = "{ submission imaps }" > > block log all > set limit table-entries 100 > set block-policy drop > set syncookies adaptive (start 29%, end 15%) > set skip on lo > > match in all scrub (no-df random-id max-mss 1440) > match out on egress inet from (vether1:network) \ > to any nat-to (egress:0) > > block in quick on egress from to any > block return out quick on egress from any to > block quick from > > pass out quick inet > pass in on $int_if inet > > pass in on egress inet proto tcp \ > to (egress) port 22 keep state \ > (max-src-conn 2, max-src-conn-rate 2/300, \ > overload flush global) > > pass in on egress inet proto { tcp udp } \ > to (egress) port domain keep state \ > (max-src-states 10) \ > rdr-to 127.0.0.1 port 8053 > > pass in on $int_if inet proto { tcp udp } from \ > (vether1:network) to (egress) port domain > > pass in on egress inet proto { tcp udp } \ > to (egress) port 5 \ > rdr-to $transmission_server > > pass in on egress inet proto tcp to (egress) \ > port $mail_ports \ > keep state (max-src-conn 20, \ > max-src-conn-rate 35/300, overload \ > flush global) rdr-to $mail_server > > pass in on egress proto tcp to (egress) \ > port smtp divert-to 127.0.0.1 port spamd > pass in on egress proto tcp from to (egress) \ > port smtp rdr-to $mail_server > pass in log on 
egress proto tcp from \ > to (egress) port smtp \ > rdr-to $mail_server > pass out on egress proto tcp to (egress) port smtp > > > -- > Best regards > Maksim Rodin > warning: macro 'mail_ports' not used > Loaded 714 passive OS fingerprints > int_if = "{ vether1 em1 em2 em3 }" > table { 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 169.254.0.0/16 > 172.16.0.0/12 192.0.2.0/24 192.168.0.0/16 198.18.0.0/15 198.51.100.0/24 } > table persist > table persist file "/e
packet filter silently ignores a rule
Hello,
I use OpenBSD 7.5 stable amd64.
I uncommented an old rule and the corresponding macro in pf.conf which definitely worked when the machine was on version 7.3 and possibly 7.4.

After that:
pfctl -nf /etc/pf.conf shows nothing
pfctl -f /etc/pf.conf shows nothing
So Packet Filter seems to be happy with the config as a whole.

pfctl -vvsr shows the old rules WITHOUT the uncommented one.
pfctl -vvnf /etc/pf.conf warns that the uncommented macro used in the uncommented rule is NOT used.

The output of pfctl -vvnf /etc/pf.conf is appended as the pfctl_vvnf file.
The output of pfctl -vvsr is appended as the pfctl_vvsr file.

Did I miss something when changing the configuration?

The uncommented section 1 is:
mail_ports = "{ submission imaps }"

The uncommented section 2 is:
pass in on egress inet proto tcp to (egress) \
    port $mail_ports \
    keep state (max-src-conn 20, \
    max-src-conn-rate 35/300, overload \
    flush global) rdr-to $mail_server

My whole pf.conf (all uncommented lines):
int_if = "{ vether1 em1 em2 em3 }"
table { 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 \
    169.254.0.0/16 172.16.0.0/12 192.0.2.0/24 \
    192.168.0.0/16 198.18.0.0/15 198.51.100.0/24 \
}
table persist
table persist file "/etc/mail/nospamd"
table persist file "/etc/pf/bad_ips"

transmission_server = "192.168.1.65"
mail_server = "192.168.1.171"

mail_ports = "{ submission imaps }"

block log all
set limit table-entries 100
set block-policy drop
set syncookies adaptive (start 29%, end 15%)
set skip on lo

match in all scrub (no-df random-id max-mss 1440)
match out on egress inet from (vether1:network) \
    to any nat-to (egress:0)

block in quick on egress from to any
block return out quick on egress from any to
block quick from

pass out quick inet
pass in on $int_if inet

pass in on egress inet proto tcp \
    to (egress) port 22 keep state \
    (max-src-conn 2, max-src-conn-rate 2/300, \
    overload flush global)

pass in on egress inet proto { tcp udp } \
    to (egress) port domain keep state \
    (max-src-states 10) \
    rdr-to 127.0.0.1 port 8053
pass in on $int_if inet proto { tcp udp } from \ (vether1:network) to (egress) port domain pass in on egress inet proto { tcp udp } \ to (egress) port 5 \ rdr-to $transmission_server pass in on egress inet proto tcp to (egress) \ port $mail_ports \ keep state (max-src-conn 20, \ max-src-conn-rate 35/300, overload \ flush global) rdr-to $mail_server pass in on egress proto tcp to (egress) \ port smtp divert-to 127.0.0.1 port spamd pass in on egress proto tcp from to (egress) \ port smtp rdr-to $mail_server pass in log on egress proto tcp from \ to (egress) port smtp \ rdr-to $mail_server pass out on egress proto tcp to (egress) port smtp -- Best regards Maksim Rodin warning: macro 'mail_ports' not used Loaded 714 passive OS fingerprints int_if = "{ vether1 em1 em2 em3 }" table { 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 169.254.0.0/16 172.16.0.0/12 192.0.2.0/24 192.168.0.0/16 198.18.0.0/15 198.51.100.0/24 } table persist table persist file "/etc/mail/nospamd" table persist file "/etc/pf/bad_ips" transmission_server = "192.168.1.65" mail_server = "192.168.1.171" mail_ports = "{ submission imaps }" set limit table-entries 100 set block-policy drop set syncookies adaptive (start 29%, end 15%) set skip on { lo } @0 block drop log all @1 match in all scrub (no-df random-id max-mss 1440) @2 match out on egress inet from (vether1:network:*) to any nat-to (egress:0:*) round-robin @3 block drop in quick on egress from to any @4 block return out quick on egress from any to @5 block drop quick from to any @6 pass out quick inet all flags S/SA @7 pass in on vether1 inet all flags S/SA @8 pass in on em1 inet all flags S/SA @9 pass in on em2 inet all flags S/SA @10 pass in on em3 inet all flags S/SA @11 pass in on egress inet proto tcp from any to (egress:*) port = 22 flags S/SA keep state (source-track rule, max-src-conn 2, max-src-conn-rate 2/300, overload flush global, src.track 300) @12 pass in on egress inet proto tcp from any to (egress:*) port = 53 flags S/SA keep state 
(source-track global, max-src-states 10) rdr-to 127.0.0.1 port 8053 @13 pass in on egress inet proto udp from any to (egress:*) port = 53 keep state (source-track global, max-src-states 10) rdr-to 127.0.0.1 port 8053 @14 pass in on vether1 inet proto tcp from (vether1:network:*) to (egress:*) port = 53 flags S/SA @15 pass in on em1 inet proto tcp from (vether1:network:*) to (egress:*) port = 53 flags S/SA @16 pass in on em2 inet proto tcp from (vether1:network:*) to (egress:*) port = 53 flags S/SA @17 pass in on em3 inet proto tcp from (vether1:network:*) to (egress:*) port = 53 flags S/SA @18 pass in on ve
unwind: entry is marked as invalid
Hello,
I often have to deal with unwind refusing to serve DNS queries. When it happens I see an entry like this in the daemon log:

"May 6 13:15:22 main unwind[42415]: validation failure : key for validation mangolassi.it. is marked as invalid because of a previous no DNSSEC records"

Reading misc archives I came to the following solution:
https://marc.info/?l=openbsd-misc&m=164534272713803&w=2
"force accept bogus forwarder { fritz.box }"

After I got tired of adding every single domain to my unwind.conf every time unwind refused to serve a query, I began to add entries like this:
...
force accept bogus autoconf { co }
force accept bogus autoconf { be }
force accept bogus autoconf { org }
...

This solution lasts longer... until I get to a site with a new TLD which is not yet listed in my config like the previous entries. One more interesting thing: a new site might work well for weeks and then suddenly stop working, with the same message in the log.

Is there a better way to deal with name resolution using unwind? The most irritating thing is that when a site is partially working (it might be fetching many additional resources from other hosts) I have a hard time understanding whether it is a problem with the site or a problem with name resolution on my desktop.

Here is my whole config.
The forwarders are used when I connect to VPN to query internal resources by their internal IP:

fwd1=10.24.2.11
fwd2=10.24.2.101
forwarder { $fwd1 $fwd2 }
preference { autoconf forwarder }
force accept bogus forwarder { internal_domain1 }
force accept bogus forwarder { internal_domain2 }
force accept bogus autoconf { co }
force accept bogus autoconf { be }
force accept bogus autoconf { org }
force accept bogus autoconf { by }
force accept bogus autoconf { ru }
force accept bogus autoconf { com }
force accept bogus autoconf { net }
force accept bogus autoconf { nu }
force accept bogus autoconf { io }
force accept bogus autoconf { no }
force accept bogus autoconf { cafe }
force accept bogus autoconf { cc }
force accept bogus autoconf { wiki }
force accept bogus autoconf { us }
force accept bogus autoconf { es }
force accept bogus autoconf { market }
force accept bogus autoconf { cloud }
force accept bogus autoconf { got }
force accept bogus autoconf { ca }
force accept bogus autoconf { club }
force accept bogus autoconf { site }
force accept bogus autoconf { fans }
force accept bogus autoconf { one }
force accept bogus autoconf { gift }
force accept bogus autoconf { xyz }
force accept bogus autoconf { dev }
force accept bogus autoconf { cz }
force accept bogus autoconf { eu }

--
Best regards
Maksim Rodin
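As a side note on the config itself: if I read the unwind.conf(5) grammar correctly, a force block accepts a list of domains, so the per-TLD entries above could likely be collapsed into a single block (verify against the man page before relying on this):

```
force accept bogus autoconf { co be org by ru com net nu io no cafe cc wiki us es market cloud got ca club site fans one gift xyz dev cz eu }
```

This does not change the underlying problem of new TLDs appearing, but it keeps the file shorter.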
Any tool in base which allows to get all IPs in prefix?
Hello,
Is there any tool in base which allows getting something like this?

$ nmap -sL -n IP_PREFIX
... a long list of ip addresses ...

--
Maksim
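There is no direct nmap -sL equivalent in base, but for a simple case such as a /24 the list can be generated with seq(1) (in base on recent OpenBSD) and printf. A minimal sketch, using the documentation prefix 192.0.2.0/24 as an example:

```shell
#!/bin/sh
# Print every address in 192.0.2.0/24 (example prefix, adjust as needed).
prefix=192.0.2
for i in $(seq 0 255); do
    printf '%s.%d\n' "$prefix" "$i"
done
```

For arbitrary prefix lengths some 32-bit arithmetic on the address is needed; a short awk or perl script (perl is also in base) could handle that.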
Re: NFS mounted but shows nothing even df -h has it
The /mnt/hdd partition on your NFS server might just be not mounted, which does not prevent the nfs service from successfully serving an empty directory. Or one of your two nfs clients might have deleted all your files and you did not notice.

On Wed May 31 09:27:04 2023, Maksim Rodin wrote:
> Hello,
> Silly question but...
> Are you sure that your NFS server still has any files on /mnt/hdd?
>
> On Wed May 31 09:07:15 2023, Jazzi Hong wrote:
> > Hello,
> >
> > I have OpenBSD 7.2 installed and the NFS service running on a Cubieboard2,
> > with one Linux client and one MacOS client; everything worked fine for the
> > last 6 months.
> >
> > Yesterday, as usual, I mounted the NFS share and it mounted successfully
> > (even `df -h` shows it), but `ls /Users/jazzi/nfs` showed nothing. Tried
> > on both Linux and MacOS.
> >
> > OpenBSD is running 24*7 and I didn't do anything to change the system;
> > maybe it got too hot, so I shut it down for the whole night and powered
> > it on the next day, but it didn't help.
> >
> > +++
> > Here is how I mount it on MacOS:
> >
> > > sudo mount -t nfs -o \
> > > resvport,async,nolocks,locallocks,soft,wsize=32768,rsize=32768 \
> > > 192.168.31.231:/mnt/hdd /Users/jazzi/nfs
> >
> > +++
> > Here are the settings on the OpenBSD NFS server:
> >
> > # $ cat /etc/exports
> >
> > # For Macbook Air
> > /mnt/hdd -alldirs -mapall=root 192.168.31.76
> >
> > # For Linux desktop
> > /mnt/hdd -alldirs -mapall=root 192.168.31.77
> >
> > Any help will be appreciated, thank you.
> >
> > --
> > jazzi
> >
> > Best Regard,
>
> --
> Best regards,
> Maksim Rodin

--
Best regards,
Maksim Rodin
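The "not mounted" hypothesis above is easy to check on the server itself. A small sketch (assuming the thread's /mnt/hdd path; substitute your own export):

```shell
#!/bin/sh
# Run on the NFS server: check whether /mnt/hdd is really a mounted
# filesystem, or just the empty mount-point directory being exported.
dir=/mnt/hdd
if mount | grep -q " $dir "; then
    echo "$dir is mounted"
else
    echo "$dir is NOT mounted: clients would see an empty directory"
fi
```

If it reports not mounted, mounting the disk (and checking /etc/fstab on the server) should bring the files back for the clients.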
Re: NFS mounted but shows nothing even df -h has it
Hello,
Silly question but...
Are you sure that your NFS server still has any files on /mnt/hdd?

On Wed May 31 09:07:15 2023, Jazzi Hong wrote:
> Hello,
>
> I have OpenBSD 7.2 installed and the NFS service running on a Cubieboard2,
> with one Linux client and one MacOS client; everything worked fine for the
> last 6 months.
>
> Yesterday, as usual, I mounted the NFS share and it mounted successfully
> (even `df -h` shows it), but `ls /Users/jazzi/nfs` showed nothing. Tried
> on both Linux and MacOS.
>
> OpenBSD is running 24*7 and I didn't do anything to change the system;
> maybe it got too hot, so I shut it down for the whole night and powered
> it on the next day, but it didn't help.
>
> +++
> Here is how I mount it on MacOS:
>
> > sudo mount -t nfs -o \
> > resvport,async,nolocks,locallocks,soft,wsize=32768,rsize=32768 \
> > 192.168.31.231:/mnt/hdd /Users/jazzi/nfs
>
> +++
> Here are the settings on the OpenBSD NFS server:
>
> # $ cat /etc/exports
>
> # For Macbook Air
> /mnt/hdd -alldirs -mapall=root 192.168.31.76
>
> # For Linux desktop
> /mnt/hdd -alldirs -mapall=root 192.168.31.77
>
> Any help will be appreciated, thank you.
>
> --
> jazzi
>
> Best Regard,

--
Best regards,
Maksim Rodin
vfs.nfs.iothreads - how much is safe?
Hello,
I found an option while reading man mount_nfs:

"Use sysctl(8) or modify sysctl.conf(5) to adjust the vfs.nfs.iothreads value, which is the number of kernel threads created to serve asynchronous NFS I/O requests."

I tried raising this value to its maximum of 20 and saw a decent speedup in file transfer without any visible issue. But when using dd to write a 1GB zero file to an NFS share just for fun, I saw my system become a little unresponsive while dd was running (my mouse moved jerkily):

# dd if=/dev/zero of=/nfs_share/testfile bs=1000M count=1

How safe is it to raise this value to its maximum of 20?

--
Maksim Rodin
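For reference, the mechanics of the change itself (the safety question the thread asks is left open): the value can be tried at runtime with sysctl(8) and made persistent across reboots via sysctl.conf(5):

```
# apply immediately:
#   sysctl vfs.nfs.iothreads=20
# make persistent in /etc/sysctl.conf:
vfs.nfs.iothreads=20
```

Since it is a plain sysctl, an experiment that makes the system too sluggish can be reverted on the spot by setting a lower value again.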
Re: PC Engines APU platform EOL
Hello,
Is there any problem with fanless x86_64 mini PCs with several NICs sold on AliExpress?

On Thu May 4 13:19:17 2023, Aaron Mason wrote:
> On Thu, May 4, 2023 at 1:17 PM Damian McGuckin wrote:
> >
> > > Happy apu2 & apu4 user here.
> >
> > Ditto.
> >
> > > Are there other OpenBSD friendly options?
> >
> > Same question, but qualifying that to add FANLESS and RACKMOUNT.
> >
> > I am thinking of trying an Intel Rugged NUC for some scenarios, but at
> > best they have dual RJ45 ethernets.
> >
> > Thanks - Damian
>
> The ZimaBoards are x86 based, again dual NICs, but they do have the
> PCIe slot to add extra.
>
> --
> Aaron Mason - Programmer, open source addict
> I've taken my software vows - for beta or for worse

--
Best regards,
Maksim Rodin
Re: old nslookup binary found?
Thank you very much! I must have missed it during that upgrade.

On Sat Apr 15 09:41:13 2023, Peter Hessler wrote:
> On 2023 Apr 15 (Sat) at 09:33:51 +0300 (+0300), Maksim Rodin wrote:
> :Hello,
> :I accidentally found a possibly old nslookup binary from 2019
> :in /usr/sbin when I ran nslookup as root:
> :root ~ # echo $PATH
> :/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin:/usr/local/sbin:/usr/local/bin
> :root ~ # which nslookup
> :/usr/sbin/nslookup
> :root ~ # nslookup openbsd.org
> :Bad system call (core dumped)
> :root ~ # ls -lA /usr/sbin/nslookup
> :-r-xr-xr-x 1 root bin 1499352 Oct 12 2019 /usr/sbin/nslookup
> :
> :But a working nslookup binary is there:
> :root ~ # ls -lA /usr/bin/nslookup
> :-r-xr-xr-x 3 root bin 403056 Mar 25 19:15 /usr/bin/nslookup
> :
> :Is it really just the old official binary which could remain after an
> :upgrade?
> :This is a 7 year old OpenBSD installation which is regularly upgraded.
> :
> :--
> :Maksim Rodin
>
> The upgrade from 6.6->6.7 guide did tell you to delete these files.
>
> https://www.openbsd.org/faq/upgrade67.html#RmFiles
>
> --
> Living on Earth may be expensive, but it includes an annual free trip
> around the Sun.

--
Best regards,
Maksim Rodin
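Following the upgrade guide linked above, the leftover can be removed by hand. /usr/sbin/nslookup is the file confirmed in this thread; the 6.7 upgrade FAQ lists the complete set of stale files to delete:

```shell
#!/bin/sh
# Remove the stale pre-6.7 binary so the current /usr/bin/nslookup is
# found via $PATH.  Run as root; -f keeps this idempotent if the file
# is already gone.
rm -f /usr/sbin/nslookup
```

Afterwards, `which nslookup` should report /usr/bin/nslookup.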
old nslookup binary found?
Hello,
I accidentally found a possibly old nslookup binary from 2019 in /usr/sbin when I ran nslookup as root:

root ~ # echo $PATH
/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin:/usr/local/sbin:/usr/local/bin
root ~ # which nslookup
/usr/sbin/nslookup
root ~ # nslookup openbsd.org
Bad system call (core dumped)
root ~ # ls -lA /usr/sbin/nslookup
-r-xr-xr-x 1 root bin 1499352 Oct 12 2019 /usr/sbin/nslookup

But a working nslookup binary is there:

root ~ # ls -lA /usr/bin/nslookup
-r-xr-xr-x 3 root bin 403056 Mar 25 19:15 /usr/bin/nslookup

Is it really just the old official binary which could remain after an upgrade? This is a 7-year-old OpenBSD installation which is regularly upgraded.

--
Maksim Rodin
Re: how tail waits for file to appear again?
Hello, Thank you for your advice! After I ran this tool and again read tail sources and `man kqueue` more carefully I hope I now have a better understanding how it is done it tail. While all the files opened with tail are in place kqueue is called with zero timeout which makes it just wait for chosen events forever. When one of the files being watched is deleted or renamed kqueue is repeatedly called with a 1 second timeout which will obviously return every time with 0 after which it is checked if the file which previously disappeared is back. If the file is back the timeout for kqueue is again set to zero, old file stats are replaced with new ones and normal data watching is resumed. On Fri Feb 17 15:17:17 2023, Stuart Henderson wrote: > On 2023-02-17, Maksim Rodin wrote: > >> > I was able to reproduce watching for new data and truncation of the > >> > file using "kqueue" but I do not quite understand how the original tail > >> > watches when the file appears again after deletion or renaming. > > I am sorry that I could not be clear enough in my words above. > > I meant I already understood a little how "kqueue" > > magic works. And I replicated some event watching logic in my learning > > task. > > I see how file truncating, deleting and renaming is watched by waiting > > NOTE_DELETE, NOTE_RENAME and NOTE_TRUNCATE events in EVFILT_VNODE > > filter. > > But I still do not see where and what event is watched for to make sure the > > file with initial file name is back (e.g. after deletion). After deleting a > > file there is no initial file > > descriptor, so there is nothing to watch using the old EVFILT_VNODE > > filter. > > I also see that after NOTE_DELETE | NOTE_RENAME events are caught only > > tfreopen function > > is called and I do not see any event watching actions in that function. > > It will probably be easier to grok if you see for yourself rather > than have it explained. > > I suggest running tail under ktrace, and watch the output live e.g. 
> with something like "kdump -l | ts %.T". In particular note the last > parameter to kevent(). > > > So my primary question might be: how can I monitor file creation (using > events) > > by only knowing its name? > > While events can be used as part of it, you do need a little more > than events to watch for file creation. > > -- Best regards, Maksim Rodin
Re: how tail waits for file to appear again?
> > I was able to reproduce watching for new data and truncation of the > > file using "kqueue" but I do not quite understand how the original tail > > watches when the file appears again after deletion or renaming. I am sorry that I could not be clear enough in my words above. I meant I already understood a little how "kqueue" magic works. And I replicated some event watching logic in my learning task. I see how file truncating, deleting and renaming is watched by waiting NOTE_DELETE, NOTE_RENAME and NOTE_TRUNCATE events in EVFILT_VNODE filter. But I still do not see where and what event is watched for to make sure the file with initial file name is back (e.g. after deletion). After deleting a file there is no initial file descriptor, so there is nothing to watch using the old EVFILT_VNODE filter. I also see that after NOTE_DELETE | NOTE_RENAME events are caught only tfreopen function is called and I do not see any event watching actions in that function. So my primary question might be: how can I monitor file creation (using events) by only knowing its name? On Fri Feb 17 11:47:03 2023, Mike Fischer wrote: > > > Am 17.02.2023 um 06:23 schrieb Maksim Rodin : > > > > Hello, > > Sorry if I chose the wrong place to ask such a question. > > I have been learning C for a couple of months and along with reading > > "C Primer Plus" by Stephen Prata and doing some exercises from it I took > > a hard (for me) task to replicate a tail program in its simplest form. > > I was able to reproduce watching for new data and truncation of the > > file using kqueue but I do not quite understand how the original tail > > watches when the file appears again after deletion or renaming. > > By reading the original tail sources downloaded from OpenBSD mirror I > > see that this is done by calling tfreopen function which seems to use a > > "for" loop to (continuously?) 
stat(2) the file name till stat(2) > > successfully > > returns and it does not seem to load a CPU as a simple continuous "for" > > loop would do. > > No, the for loop in line 362 of forward.c > (https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/tail/forward.c?annotate=1.33) > iterates over the files. Note that tail allows you to monitor more than one > file at a time, see tail(1). > > > > Can someone explain how it is done? > > tfreopen is called in line 224 of the same file inside a while(1) loop. At > the top of this loop kevent is called (L191). See kevent(2) for details on > how that works. That is the real _magic_ here ;-) > > tfqueue sets up the event mechanism for a single file so you may want to look > at that as well. > > > > May be there is a better way to watch for the file to appear correctly? > > The way tail(1) does this seems pretty optimal to me. > > > > Is inserting a sleep(3) in a loop an appropriate way? > > You could do this, but it’s less optimal than using kqueue/kevent because > sleep(3) will wait longer than necessary in some cases and wake up sooner > than required in others. It is basically a way to do polling which is always > worse than event driven code. > > > > > > Below is the function how it is done in tail: > > It would have been better to cite the file name and line numbers, very easy > with https://cvsweb.openbsd.org as I did above. There is also a mirror of the > repo on Github, which also makes this sort of thing very easy: > https://github.com/openbsd. E.g.: > https://github.com/openbsd/src/blob/master/usr.bin/tail/forward.c#L224 > > The links to the repositories are right on the https://www.openbsd.org home > page, so not hard to find at all. > > > HTH > Mike > > PS. Note that I am not an expert on kqueue/kevent programming. So followups > for details on these functions would probably need to be answered by someone > else. -- Maksim Rodin
how tail waits for file to appear again?
Hello, Sorry if I chose the wrong place to ask such a question. I have been learning C for a couple of months, and along with reading "C Primer Plus" by Stephen Prata and doing some exercises from it I took on a hard (for me) task: replicating the tail program in its simplest form. I was able to reproduce watching for new data and truncation of the file using kqueue, but I do not quite understand how the original tail watches for the file to appear again after deletion or renaming. By reading the original tail sources downloaded from an OpenBSD mirror I see that this is done by calling the tfreopen function, which seems to use a "for" loop to (continuously?) stat(2) the file name till stat(2) successfully returns, yet it does not seem to load the CPU the way a simple continuous "for" loop would. Can someone explain how it is done? Maybe there is a better way to watch for the file to appear? Is inserting a sleep(3) in a loop an appropriate way? Below is the function as it appears in tail:
"""
#define AFILESINCR 8

static const struct timespec *
tfreopen(struct tailfile *tf)
{
	static struct tailfile **reopen = NULL;
	static int nfiles = 0, afiles = 0;
	static const struct timespec ts = {1, 0};

	struct stat sb;
	struct tailfile **treopen, *ttf;
	int i;

	if (tf && !(tf->fp == stdin) &&
	    ((stat(tf->fname, &sb) != 0) || sb.st_ino != tf->sb.st_ino)) {
		if (afiles < ++nfiles) {
			afiles += AFILESINCR;
			treopen = reallocarray(reopen, afiles, sizeof(*reopen));
			if (treopen)
				reopen = treopen;
			else
				afiles -= AFILESINCR;
		}
		if (nfiles <= afiles) {
			for (i = 0; i < nfiles - 1; i++)
				if (strcmp(reopen[i]->fname, tf->fname) == 0)
					break;
			if (i < nfiles - 1)
				nfiles--;
			else
				reopen[nfiles-1] = tf;
		} else {
			warnx("Lost track of %s", tf->fname);
			nfiles--;
		}
	}

	for (i = 0; i < nfiles; i++) {
		ttf = reopen[i];
		if (stat(ttf->fname, &sb) == -1)
			continue;
		if (sb.st_ino != ttf->sb.st_ino) {
			(void) memcpy(&(ttf->sb), &sb, sizeof(ttf->sb));
			ttf->fp = freopen(ttf->fname, "r", ttf->fp);
			if (ttf->fp == NULL)
				ierr(ttf->fname);
			else {
				warnx("%s has been replaced, reopening.", ttf->fname);
				tfqueue(ttf);
			}
		}
		reopen[i] = reopen[--nfiles];
	}

	return nfiles ? &ts : NULL;
}
"""
-- Maksim Rodin
Re: iridium/chromium webcam access
What if you just run chromium without all this and try webcammictest? >ENABLE_WASM=1 chrome --incognito --user-data-dir=/tmp/chrome > /dev/video0 rw > added in both /etc/chromium/unveil.main and > /etc/chromium/unveil.utility_video At least webcammictest works for me without any magic or config editing. > /dev/video0 rw > added in both /etc/chromium/unveil.main and > /etc/chromium/unveil.utility_video I think you do not need it in /etc/chromium/unveil.main and /etc/chromium/unveil.utility_video: there is already "/dev/video rw" in there and that should be enough because /dev/video is a soft link to /dev/video0 I found the following in my shell history when I last tried (with success!) to run jitsi test call: AUDIOPLAYDEVICE=snd/0 AUDIORECDEVICE=snd/1 ENABLE_WASM=yes chrome I repeated this command right now and could successfully connect to jitsi test meeting. It showed my ugly face from my webcam and told me that my mic seems to work fine ;-) On Mon Dec 26 11:49:24 2022, Robert Alessi wrote: > On Mon, Dec 26, 2022 at 12:03:46PM +0300, Maksim Rodin wrote: > > Could you once again test your webcam on https://webcammictest.com/ ? > > I use Chromium for Microsoft Teams video and audio calls. > > IIRC, the only thing I had to do, was `doas chown myuser /dev/video0` > > Are you sure that after setting up /etc/fbtab you (and not root) are the > > owner of your /dev/video0 device? > > I just did once again like so: > > ENABLE_WASM=1 chrome --incognito --user-data-dir=/tmp/chrome > > with of course: > > /dev/video0 rw > > added in both /etc/chromium/unveil.main and > /etc/chromium/unveil.utility_video > > and ls -lh /dev/video* gives me this: > > lrwxr-xr-x 1 rootwheel 6 Nov 20 17:24 /dev/video -> video0 > crw--- 1 robert robert 44, 0 Nov 20 17:24 /dev/video0 > crw--- 1 rootwheel44, 1 Nov 20 17:24 /dev/video1 > > where robert is my username. 
> > Nevertheless, access to the camera is still denied and the console > returns the same message over and over: > > uvideo0: could not SET probe request: STALLED > > Needless to mention that firefox-esr works. > > I must say that the fact it works on your side gives me hope, thank > you! I must keep looking into this. > > thank you very much, > > -- R. > -- Best regards, Maksim Rodin
Re: iridium/chromium webcam access
Hi, Could you once again test your webcam on https://webcammictest.com/ ? I use Chromium for Microsoft Teams video and audio calls. IIRC, the only thing I had to do was `doas chown myuser /dev/video0` Are you sure that after setting up /etc/fbtab you (and not root) are the owner of your /dev/video0 device? On Mon Dec 26 09:12:45 2022, Robert Alessi wrote: > On Sun, Dec 25, 2022 at 11:28:19PM +0100, Stefan Hagen wrote: > > Try to start chrome with: > > --enable-features=RunVideoCaptureServiceInBrowserProcess > > Thank you for this information, I was unaware of this parameter. > > I just tried it, unfortunately without success. Here follows what I > did for the record: > - set both kern.audio.record and kern.video.record to 1 > - have the rights to /dev/video0 properly set in /etc/fbtab > - added '/dev/video0 rw' in /etc/chromium/unveil.main and > /etc/chromium/unveil.utility_video as advised in the FAQ > - used this command line to launch chromium: > chrome --incognito --user-data-dir=/tmp/chrome > --enable-features=RunVideoCaptureServiceInBrowserProcess > > But chromium still can't access the webcam and /var/log/messages still > prints: > > uvideo0: could not SET probe request: STALLED > > as soon as I tick the "allow chromium to use your webcam" box. > > I used https://webcamtests.com to do the test. Iridium/Chromium both > fail while firefox-esr succeeds. > > > -- Robert > -- Best regards, Maksim Rodin
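For reference, the /etc/fbtab mechanism mentioned above hands the device to whoever logs in on the console; a minimal sketch (the console and device names are the usual defaults, adjust to your setup):

```
# /etc/fbtab: give the console user the webcam at login
/dev/ttyC0	0600	/dev/video0
```

With this in place the chown to the logged-in user happens automatically at login instead of needing a manual doas chown.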
tap vm network interfaces are not added to bridge/veb host interface
Hello. Recently I tried to change my vmm network to use veb instead of bridge. I tried to do it as simply as possible and just renamed hostname.bridge0 to hostname.veb0, renamed hostname.vether0 to hostname.vport0, and changed hostname.veb0 to include the vport0 interface. Here is the current network configuration on the host machine:
$ tail -n3 /etc/hostname.*
==> /etc/hostname.alc0 <==
inet autoconf

==> /etc/hostname.veb0 <==
add vport0
up

==> /etc/hostname.vport0 <==
inet 172.25.0.1 255.255.255.0
up

And here is the vmm configuration:
$ cat /etc/vm.conf
switch "vmm_switch" {
	interface veb0
}
vm "addc" {
	memory 4G
	disk "/DISK1/vmm/addc/disk0.img"
	interface {
		switch "vm_switch"
		lladdr fe:e1:ba:d3:57:48
	}
	owner vmowner
	disable
}

The only change in pf.conf is this:
# match out on egress from vether0:network to any nat-to (egress)
match out on egress from vport0:network to any nat-to (egress)

After that I could not access my vm by its network address anymore, though it was alive and accessible through the console. After some investigation I found out that when I start the vm as the vm owner, its tap0 interface is not automatically added to the veb0 interface as a child interface. When I manually added the tap0 interface to veb0 as a child, network connectivity came back. I tried to revert all the changes back to working with bridge, but since then the tap interface of the vm still needs to be added to the bridge interface manually as well. After every such change I rebooted the host machine to make sure nothing from the previous configuration was left behind, but this behaviour did not change. 
Here is the ifconfig output when the vm is not started: veb0: flags=8843 description: switch1-vmm_switch index 4 llprio 3 groups: veb vport0 flags=3 port 5 ifpriority 0 ifcost 0 vport0: flags=8943 mtu 1500 lladdr fe:e1:ba:d0:aa:a8 index 5 priority 0 llprio 3 groups: vport inet 172.25.0.1 netmask 0xff00 broadcast 172.25.0.255 Here is the ifconfig output when the vm is running (no network access to the vm): veb0: flags=8843 description: switch1-vmm_switch index 4 llprio 3 groups: veb vport0 flags=3 port 5 ifpriority 0 ifcost 0 vport0: flags=8943 mtu 1500 lladdr fe:e1:ba:d0:aa:a8 index 5 priority 0 llprio 3 groups: vport inet 172.25.0.1 netmask 0xff00 broadcast 172.25.0.255 tap0: flags=8843 mtu 1500 lladdr fe:e1:ba:d3:3e:d8 description: vm1-if0-addc index 9 priority 0 llprio 3 groups: tap status: active Here is the ifconfig output when I add tap0 to veb0 (network access to the vm is ok): veb0: flags=8843 description: switch1-vmm_switch index 4 llprio 3 groups: veb vport0 flags=3 port 5 ifpriority 0 ifcost 0 tap0 flags=3 port 9 ifpriority 0 ifcost 0 vport0: flags=8943 mtu 1500 lladdr fe:e1:ba:d0:aa:a8 index 5 priority 0 llprio 3 groups: vport inet 172.25.0.1 netmask 0xff00 broadcast 172.25.0.255 tap0: flags=8943 mtu 1500 lladdr fe:e1:ba:d3:3e:d8 description: vm1-if0-addc index 9 priority 0 llprio 3 groups: tap status: active My vmd host is OpenBSD 7.2 amd64 (which is used as a workstation as well if that matters). Is there something I missed during changing network configuration from bridge to veb and back again? -- Maksim Rodin
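The manual workaround described above boils down to a single command on the host (interface names as in this thread):

```
$ doas ifconfig veb0 add tap0
```

This is the veb(4) equivalent of adding a member port to a bridge, and has to be repeated each time vmd recreates the tap interface unless the switch configuration does it automatically.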
Re: PC Engines APU alternative for OpenBSD - 2022h2
Hello, > > Seeing recent issues with buggy BIOSes I wanted to avoid mini pc hunting > on Aliexpress :/ > I would be happy to have more choices to hunt for. But there aren't many. Qotom mini PCs are not bad. I bought one 5 years ago and it is still OK. I ordered another one as a spare recently and am waiting for it to arrive: Qotom Mini PC 5* I225-V 2.5G Lan Celeron J4105 AES-NI Quad core Pfsense Firewall Router Mini PC Q730G5 -- Regards Maksim Rodin
Warning in .xsession-errors Actions not found: exec-formatted
Recently I found the following in the OpenBSD 7.0 Changelog https://www.openbsd.org/plus70.html
"""
Added unveil(2) calls to xterm in the case where there are no exec-formatted or exec-selected resources set.
"""
Do I understand it right that if I do have these types of resources set, they are expected to work? I have the following in my .Xresources:
"""
XTerm*VT100*translations: #override \n\
Ctrl Shift C: copy-selection(CLIPBOARD) \n\
Ctrl Shift V: insert-selection(CLIPBOARD) \n\
Shift : scroll-back(1, halfpage) \n\
Shift : scroll-forw(1, halfpage) \n\
: scroll-back(1, pixel) \n\
: scroll-forw(1, pixel) \n\
Shift : exec-formatted("/usr/local/bin/xdg-open '%s'", SELECT) \n\
Mod1 S: exec-formatted("/usr/local/bin/xdg-open https://google.com/search/?text=%s", SELECT)
"""
At some point I noticed that the hotkeys defined for the "exec-formatted" entries do not work anymore. I am sure that these hotkeys worked when I had OpenBSD 7.0, but after some sysupgrade (now it is 7.2 amd64) they do not work anymore. Did anything change since 7.0? -- Maksim Rodin
Cannot edit a command in history in vi-mode
Hello, My default shell is ksh and there is "set -o vi" in .profile.
1) When I type a command directly in the terminal window (xterm), I can press "Esc" and then "v", and after that my $EDITOR (nvim) opens and I am able to make modifications to this command. Then I press "Esc" and "ZZ" in the editor, it closes, and the final command is executed.
2) When I try to do the same with a command from the shell history, it looks different: I press "Esc" and then "/" and then something I want to find in the history and then "Enter". Using "n" I find the command in history and press "Esc" and "v" to edit this command. The $EDITOR opens, I make some modifications, then save and exit the $EDITOR, and the old command is executed without any of the changes I have just made.
Is case 2 the correct behaviour or am I doing something wrong? My current system is OpenBSD 7.2 amd64 -- Maksim Rodin
Re: sndiod and multiple audio devices
Hello, Try setting these environment variables before running your application (or in your .profile or in your .xsession file): export AUDIOPLAYDEVICE=snd/0 export AUDIORECDEVICE=snd/1 see "man 7 sndiod" This worked for me. On Sat Aug 13 18:24:45 2022, Isaac Meerwarth wrote: > Greetings all, > > I recently bought a logi webcam and I was able to configure it with help > from the FAQ. However, when I switched the default audio device from rsnd/0 > to rsnd/1 via sndiod(8) two things happen: > > 1) I am able to record audio through my webcam (rsnd/1) > > 2) I loose audio out in my headphones (rsnd/0??) > > My headphones are attached to my desktop through 3.5mm jack on the > motherboard. > > After consulting sndiod(8) I believe I need to set up a sub-device of some > sort but my comprehension is lacking. > > I think I understand the logic, what I don't understand is how to configure > sndiod to use both devices. > > > dmesg attached > > Isaac > > 0 dev 24 function 2 "AMD 17h Data Fabric" rev 0x00 > pchb10 at pci0 dev 24 function 3 "AMD 17h Data Fabric" rev 0x00 > pchb11 at pci0 dev 24 function 4 "AMD 17h Data Fabric" rev 0x00 > pchb12 at pci0 dev 24 function 5 "AMD 17h Data Fabric" rev 0x00 > pchb13 at pci0 dev 24 function 6 "AMD 17h Data Fabric" rev 0x00 > pchb14 at pci0 dev 24 function 7 "AMD 17h Data Fabric" rev 0x00 > isa0 at pcib0 > isadma0 at isa0 > pckbc0 at isa0 port 0x60/5 irq 1 irq 12 > pckbd0 at pckbc0 (kbd slot) > wskbd0 at pckbd0: console keyboard > pcppi0 at isa0 port 0x61 > spkr0 at pcppi0 > vmm0 at mainbus0: SVM/RVI > umass0 at uhub0 port 1 configuration 1 interface 0 "Hitachi-LG Data Storage > Inc Portable Super Multi Drive" rev 2.00/0.00 addr 2 > umass0: using ATAPI over Bulk-Only > scsibus5 at umass0: 2 targets, initiator 0 > cd0 at scsibus5 targ 1 lun 0: removable > ulpt0 at uhub0 port 4 configuration 1 interface 0 "Brother HL-L5100DN series" > rev 2.00/1.00 addr 3 > ulpt0: using bi-directional mode > ugen0 at uhub0 port 4 configuration 1 "Brother 
HL-L5100DN series" rev > 2.00/1.00 addr 3 > uvideo0 at uhub1 port 2 configuration 1 interface 0 "HD Webcam C270 HD Webcam > C270" rev 2.00/1.00 addr 2 > video0 at uvideo0 > uaudio0 at uhub1 port 2 configuration 1 interface 3 "HD Webcam C270 HD Webcam > C270" rev 2.00/1.00 addr 2 > uaudio0: class v1, high-speed, sync, channels: 0 play, 2 rec, 3 ctls > audio1 at uaudio0 > umass1 at uhub1 port 7 configuration 1 interface 0 "Seagate Expansion SW" rev > 3.20/18.01 addr 3 > umass1: using SCSI over Bulk-Only > scsibus6 at umass1: 2 targets, initiator 0 > sd6 at scsibus6 targ 1 lun 0: > serial.0bc2203bNAC819CJ > sd6: 3815447MB, 512 bytes/sector, 7814037167 sectors > uhidev0 at uhub2 port 1 configuration 1 interface 0 "Yubico YubiKey > OTP+FIDO+CCID" rev 2.00/5.43 addr 2 > uhidev0: iclass 3/1 > ukbd0 at uhidev0: 8 variable keys, 6 key codes > wskbd1 at ukbd0 mux 1 > uhidev1 at uhub2 port 1 configuration 1 interface 1 "Yubico YubiKey > OTP+FIDO+CCID" rev 2.00/5.43 addr 2 > uhidev1: iclass 3/0 > fido0 at uhidev1: input=64, output=64, feature=0 > ugen1 at uhub2 port 1 configuration 1 "Yubico YubiKey OTP+FIDO+CCID" rev > 2.00/5.43 addr 2 > uhub3 at uhub2 port 4 configuration 1 interface 0 "VIA Labs, Inc. 
USB2.0 Hub" > rev 2.10/6.34 addr 3 > uhidev2 at uhub3 port 2 configuration 1 interface 0 "Logitech Gaming Mouse > G900" rev 2.00/1.05 addr 4 > uhidev2: iclass 3/1 > ums0 at uhidev2: 16 buttons, Z and W dir > wsmouse0 at ums0 mux 0 > uhidev3 at uhub3 port 2 configuration 1 interface 1 "Logitech Gaming Mouse > G900" rev 2.00/1.05 addr 4 > uhidev3: iclass 3/0, 17 report ids > uhidpp0 at uhidev3 > ukbd1 at uhidev3 reportid 1: 8 variable keys, 6 key codes > wskbd2 at ukbd1 mux 1 > ucc0 at uhidev3 reportid 3: 652 usages, 18 keys, array > wskbd3 at ucc0 mux 1 > uhid0 at uhidev3 reportid 4: input=1, output=0, feature=0 > uhidev4 at uhub3 port 3 configuration 1 interface 0 "ZSA Technology Labs > Planck EZ Glow" rev 2.00/0.00 addr 5 > uhidev4: iclass 3/1 > ukbd2 at uhidev4: 8 variable keys, 6 key codes > wskbd4 at ukbd2 mux 1 > uhidev5 at uhub3 port 3 configuration 1 interface 1 "ZSA Technology Labs > Planck EZ Glow" rev 2.00/0.00 addr 5 > uhidev5: iclass 3/0, 5 report ids > uhid1 at uhidev5 reportid 3: input=2, output=0, feature=0 > ucc1 at uhidev5 reportid 4: 672 usages, 18 keys, array > wskbd5 at ucc1 mux 1 > ukbd3 at uhidev5 reportid 5: 128 variable keys, 0 key codes > wskbd6 at ukbd3 mux 1 > ugen2 at uhub3 port 5 "VIA Labs, Inc. USB Billboard Device" rev 2.01/0.01 > addr 6 > uhub4 at uhub2 port 8 configuration 1 interface 0 "VIA Labs, Inc. USB3.0 Hub" > rev 3.20/6.34 addr 7 > vscsi0 at root > scsibus7 at vscsi0: 256 targets > softraid0 at root > scsibus8 at softraid0: 256 targets > sd7 at scsibus8 targ 1 lun 0: > sd7: 3815447MB, 512 bytes/sector, 7814035553 sectors > root on sd0a (9819c30129901e02.a) swap on sd0b dump on sd0b > amdgpu0: NAVI10 40 CU rev 0x02 > amdgpu0: 2560x1440, 32bpp > wsdisplay0 at amdgpu0 mux 1: console (st
dabbrev-expand, action not found
Hello, I found an interesting option while reading "man xterm": ... dabbrev-expand() Expands the word before cursor by searching in the preceding text on the screen and in the scrollback buffer for words starting with that abbreviation. Repeating dabbrev-expand() several times in sequence searches for an alternative expansion by looking farther back... and tried to reproduce the example from man in my .Xresources file: ... XTerm*VT100*translations: #override \n\ Meta /:dabbrev-expand() \n\ Ctrl Shift C: copy-selection(CLIPBOARD) \n\ ... After "xrdb -load .Xresources" and trying to use the Meta + / in xterm it does not seem to work. And there is an error after using this key combination in ~/.xsession-errors: "Warning: Actions not found: dabbrev-expand" Should it work at all? -- Maksim Rodin
Re: doas and args matching
> $ /sbin/wsconsctl display.brightness=50 wsconsctl: /dev/ttyC0: Permission > denied > Did you forget to type "doas" before your command? On Fri 29 Jul 2022 15:38:37, Alexis wrote: > > Alexander Hall writes: > > > > There's a good chance i'm misunderstanding, but doesn't this run > > > into > > > the same issue? Namely, that (as far as i'm aware) it's not possible > > > to specify that a doas-permitted command be allowed to run with > > > arbitrary arguments (or range of arguments), rather than only the > > > arguments specified in doas.conf? > > > > Just leaving out the "args ..." from the config should accomplish that. > > Not on 7.1, unless i'm doing something wrong? > > /etc/doas.conf: > >permit nopass alexis as root cmd /sbin/wsconsctl > > Hence the OP's question, and my suggested kludge. > > > Alexis. > -- Best regards, Maksim Rodin
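For what it's worth, per my reading of doas.conf(5) the two forms under discussion can be sketched like this (the username is taken from the thread; the args line is only an illustration):

```
# /etc/doas.conf sketch
# no "args" keyword: the command may be run with any arguments
permit nopass alexis as root cmd /sbin/wsconsctl

# with "args": only this exact argument list is permitted
permit nopass alexis as root cmd /sbin/wsconsctl args display.brightness=50
```

Note that the keyword "args" alone (with no arguments after it) means the command must be run with no arguments at all, which is a third, distinct case.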
Re: make the mouse in cwm follow active window
Hello! Sorry for the confusion. I first noticed that on Ubuntu, where I use cwm as well. I will check if it is different on OpenBSD. On Sun 17 Jul 2022 11:48:20, Marcus MERIGHI wrote: > Hello! > > a23s4a2...@yandex.ru (Maksim Rodin), 2022.07.16 (Sat) 11:58 (CEST): > > When I have two windows on the screen (OpenBSD cwm) and > > move the active one using one of the build in cwm commands (window-snap-...) > > the window being moved looses focus when a mouse pointer is not above > > that window anymore and another window behind the first becomes active > > instead. > > Thank you for pointing me at the window-snap-* functions, never used > them before and they are handy! > > BUT I do not see what you see... when I, for instance, use > window-snap-up-right, then the mouse pointer gets moved, too. And, with > the mouse pointer, the focus as well. > > Am I doing something different than you? > > Marcus > > P.S.: I'm on -current as of the day before yesterday. > > > I suppose there is no way to make the active window being moved remain > > active > > because focus follows mouse but may be there is a way to make the > > mouse pointer follow the window being moved? > > > > -- > > Maksim Rodin > > -- Best regards, Maksim Rodin
make the mouse in cwm follow active window
Hello, When I have two windows on the screen (OpenBSD cwm) and move the active one using one of the built-in cwm commands (window-snap-...), the window being moved loses focus when the mouse pointer is no longer above it, and the window behind it becomes active instead. I suppose there is no way to keep the moved window active, because focus follows the mouse, but maybe there is a way to make the mouse pointer follow the window being moved? -- Maksim Rodin
Is there a way to build mod_auth_kerb?
Hello, I am trying to build mod_auth_kerb for apache2 on OpenBSD 6.9. I installed heimdal-libs-7.7.0p0 and downloaded the latest source for mod_auth_kerb from GitHub. After unpacking and configuring it the following way: ./configure --with-krb5=/usr/local/heimdal --with-krb4=no and running 'make', I get a bunch of warnings like these:
```
/usr/local/heimdal/include/krb5-protos.h:18:52: note: expanded from macro 'KRB5_DEPRECATED_FUNCTION'
#define KRB5_DEPRECATED_FUNCTION(x) __attribute__((__deprecated__(x)))
^
src/mod_auth_kerb.c:1547:47: warning: incompatible pointer types passing 'request_rec *' (aka 'struct request_rec *') to parameter of type 'const char *' [-Wincompatible-pointer-types]
log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
^
src/mod_auth_kerb.c:379:46: note: passing argument to parameter 'fmt' here
const request_rec *r, const char *fmt, ...)
^
src/mod_auth_kerb.c:1553:50: warning: incompatible pointer types passing 'request_rec *' (aka 'struct request_rec *') to parameter of type 'const char *' [-Wincompatible-pointer-types]
log_rerror(APLOG_MARK, APLOG_NOTICE, 0, r,
^
src/mod_auth_kerb.c:379:46: note: passing argument to parameter 'fmt' here
const request_rec *r, const char *fmt, ...)
^
src/mod_auth_kerb.c:1560:44: warning: incompatible pointer types passing 'request_rec *' (aka 'struct request_rec *') to parameter of type 'const char *' [-Wincompatible-pointer-types]
log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
^
src/mod_auth_kerb.c:379:46: note: passing argument to parameter 'fmt' here
const request_rec *r, const char *fmt, ...)
```
and the following error:
```
Error while executing cc -O2 -pipe -g -D_POSIX_THREADS -pthread -I/usr/local/include/apache2 -I/usr/local/include/apr-1/ -I/usr/local/include/apr-1/ -I/usr/local/include/db4 -I/usr/local/include -I. -Ispnegokrb5 -I/usr/local/heimdal/include -I/usr/local/include -c src/mod_auth_kerb.c -fPIC -DPIC -o src/.libs/mod_auth_kerb.o
apxs:Error: Command failed with rc=65536 .
*** Error 1 in /root/mod_auth_kerb-master (Makefile:16 'src/mod_auth_kerb.so')
```
Is it possible to compile that module on OpenBSD at all? -- Best regards Maksim Rodin
Re: Unwind does not seem to query forwarders it is pointed to
> So something is odd. When unwind starts or learns about new resolvers it > checks if they can do DNSSEC validation. It the equivalent of this: > > dig @192.168.1.150 +dnssec . NS > and > dig @192.168.1.1 +dnssec . NS > > and got a response it liked. 192.168.1.150 is a Samba 4 internal DNS server which I think is not capable of DNSSEC yet (and I do not need it now). It points to 192.168.1.1 as a forwarder. 192.168.1.1 is an unbound + nsd OpenBSD router which I did not set up to do DNSSEC. It points to my provider's DNS server as a forwarder. I do not quite understand how either of the two DNS servers could appear to provide DNSSEC information. On Mon 06 Dec 2021 17:20:28, Florian Obser wrote: > On 2021-12-06 13:49 +03, Maksim Rodin wrote: > > Hello > > I have the following unwind.conf: > > ``` > > cat /etc/unwind.conf > > fwd1=192.168.1.150 > > fwd2=192.168.1.1 > > forwarder { $fwd1 $fwd2 } > > preference forwarder > > ``` > > and an automatically generated resolv.conf: > > ``` > > cat /etc/resolv.conf > > nameserver 127.0.0.1 # resolvd: unwind > > lookup file bind > > ``` > > I may not understand the purpose of unwind correctly but I expect the > > unwind to respond to DNS queries using the forwarders it is pointed to > > in its config. > > That is one purpose, and you configured it do exactly that. > > > But when I do: > > ``` > > nslookup dc.mydomain.ru > > ``` > > It says: > > ``` > > Server: 127.0.0.1 > > Address:127.0.0.1#53 > > > > ** server can't find dc.mydomain.ru: SERVFAIL > > ``` > > > > And I see in the logs the following: > > ``` > > unwind[8550]: validation failure : no signatures from > > 192.168.1.150 for DS ru. 
while building chain of trust > > ``` > > The DNS server on 192.168.1.150 definitely knows about the host > > dc.mydomain.ru > > > > When I ask that DNS server directly: > > ``` > > nslookup dc.mydomain.ru 192.168.1.150 > > ``` > > It returns the correct answer > > > > So the unwind daemon seems to always query root name servers instead of my > > two > > servers. > > Is that the expected behavior? > > It does not do that. I talks to your two servers. But it tries to do > DNSSEC validation: "no signatures from 192.168.1.150 for DS ru." > > So something is odd. When unwind starts or learns about new resolvers it > checks if they can do DNSSEC validation. It the equivalent of this: > > dig @192.168.1.150 +dnssec . NS > and > dig @192.168.1.1 +dnssec . NS > > and got a response it liked. > > $ unwindctl status > > probably outputs something like > > 1. forwarder validating > > So it knows the root zone is signed and your forwarders hand out DNSSEC > information, but for some reason your forwarders do not answer to > > dig @192.168.1.150 +dnssec ru DS > > No idea why. > > > > > -- > > Maksim Rodin > > > > -- > I'm not entirely sure you are real. > -- Best regards, Maksim Rodin
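If the goal were to keep unwind's validation happy rather than to disable it, the unbound forwarder at 192.168.1.1 could be taught to validate and pass DNSSEC records through; a minimal sketch (the paths are the OpenBSD defaults, and this assumes the upstream provider resolver returns DNSSEC data at all):

```
# /var/unbound/etc/unbound.conf (sketch)
server:
	# enable DNSSEC validation with the root trust anchor
	auto-trust-anchor-file: "/var/unbound/db/root.key"
	# log validation failures to see what breaks the chain of trust
	val-log-level: 2
```

With validation working on the forwarder, unwind's probe queries for DS records (like the `dig +dnssec ru DS` above) should get usable answers.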
Unwind does not seem to query forwarders it is pointed to
Hello I have the following unwind.conf: ``` cat /etc/unwind.conf fwd1=192.168.1.150 fwd2=192.168.1.1 forwarder { $fwd1 $fwd2 } preference forwarder ``` and an automatically generated resolv.conf: ``` cat /etc/resolv.conf nameserver 127.0.0.1 # resolvd: unwind lookup file bind ``` I may not understand the purpose of unwind correctly but I expect the unwind to respond to DNS queries using the forwarders it is pointed to in its config. But when I do: ``` nslookup dc.mydomain.ru ``` It says: ``` Server: 127.0.0.1 Address:127.0.0.1#53 ** server can't find dc.mydomain.ru: SERVFAIL ``` And I see in the logs the following: ``` unwind[8550]: validation failure : no signatures from 192.168.1.150 for DS ru. while building chain of trust ``` The DNS server on 192.168.1.150 definitely knows about the host dc.mydomain.ru When I ask that DNS server directly: ``` nslookup dc.mydomain.ru 192.168.1.150 ``` It returns the correct answer So the unwind daemon seems to always query root name servers instead of my two servers. Is that the expected behavior? -- Maksim Rodin
Re: django-ldap-auth authentication lasts several minutes on OpenBSD
Thank you very much. It really seems to be a DNS issue. On Mon 06 Dec 2021 09:04:15, Michael Hekeler wrote: > > The only machine using another DNS server from my router is the Linux Mint > > development machine > > which holds the copy of my code and also runs django development > > server on 127.0.0.1:8080 and from where everything works without delays. >^^^ > Then remove the entry from /etc/hosts and clear dns cache. > Then try again on development machine and if you encounter same delays > then you have found the culprit... > -- Maksim Rodin
Re: django-ldap-auth authentication lasts several minutes on OpenBSD
This is very strange because all involved machines are using one and the same internal DNS server on the Samba ADDC as a resolver, which I made resolve all the names and addresses needed during the authentication process:
1) Samba ADDC (aka the LDAP server) resolves its name and its IP.
2) The Django OpenBSD machine resolves its name and its IP.
3) The client machine with the browser (this time I took Windows 10, which was joined to the AD domain) resolves its name and its IP.
4) And all of them resolve the hostnames and IPs of each other.
The only machine using a different DNS server from my router is the Linux Mint development machine, which holds the copy of my code and also runs the django development server on 127.0.0.1:8080, and from where everything works without delays. On that development machine I only added the Samba ADDC address to /etc/hosts to make the authentication run with TLS using the ADDC hostname and not complain about TLS errors. From that machine I browse to the Django development webserver on the same machine through http://127.0.0.1:8080 and authenticate to the application with my AD login and password without a delay, so I am not sure there is a problem with DNS. On Fri 03 Dec 2021 10:45:03, Stuart Henderson wrote: > On 2021-12-03, Maksim Rodin wrote: > > The AD DC machine is an Ubuntu 20 machine with samba 4. > > The test machine where I initially have all the code and from where I > > tested this application initially > > is a Linux Mint machine. > > I enabled some logging in Django to see what happens when I log > > in to the application > > When I run "python manage.py runserver 0.0.0.0:8080" on my Linux machine > > and try to authenticate to the application in my browser on the same > > machine I am logged in > > within a second. > > When I run "python manage.py runserver 0.0.0.0:8080" on the OpenBSD test > > server and try to authenticate to the application from my browser > > (using OpenBSD machine's IP or hostname) it lasts several minutes. 
> > There is no error in the application log. Just a big delay till I am
> > successfully authenticated.
>
> A delay of that sort of length strongly hints at a DNS or reverse DNS problem.

--
Maksim Rodin
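The reverse-DNS hypothesis from the reply above can be checked directly on the OpenBSD machine by timing forward and reverse lookups. A minimal sketch using only the Python standard library; `dc.domain.ru` and `10.0.0.5` are placeholders standing in for the real LDAP server name and client address, not values confirmed in the thread:

```python
import socket
import time

def timed(label, fn, *args):
    """Run a resolver call, print how long it took, return the elapsed seconds."""
    start = time.monotonic()
    try:
        result = fn(*args)
    except OSError as e:          # gaierror/herror for unresolvable names
        result = e
    elapsed = time.monotonic() - start
    print(f"{label}: {elapsed:.2f}s -> {result}")
    return elapsed

# Forward lookup of the LDAP server (placeholder hostname).
timed("forward dc.domain.ru", socket.gethostbyname, "dc.domain.ru")

# Reverse lookup of the connecting client (placeholder address).
# If this takes multiple seconds while the forward lookup is fast,
# missing PTR records are the likely cause of the login delay.
timed("reverse 10.0.0.5", socket.gethostbyaddr, "10.0.0.5")
```

Running the same two calls for the browser client's real address on both the Linux and the OpenBSD machine would show whether the resolvers actually behave identically, rather than assuming it.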
django-ldap-auth authentication lasts several minutes on OpenBSD
Hello,

I am not quite sure whether this question belongs here, but it seems to be related to the OS on which django-ldap-auth is used as an LDAP client. I have a working Django application which uses django-ldap-auth to authenticate Active Directory users to Django. The AD DC machine is an Ubuntu 20 machine with Samba 4. The test machine where I initially have all the code, and from which I initially tested this application, is a Linux Mint machine.

I enabled some logging in Django to see what happens when I log in to the application. When I run "python manage.py runserver 0.0.0.0:8080" on my Linux machine and try to authenticate to the application in my browser on the same machine, I am logged in within a second. When I run "python manage.py runserver 0.0.0.0:8080" on the OpenBSD test server and try to authenticate to the application from my browser (using the OpenBSD machine's IP or hostname), it takes several minutes. There is no error in the application log, just a big delay until I am successfully authenticated.
There is no difference in the log output from the manage.py process:

```
[02/Dec/2021 22:41:59] "GET /accounts/login/?next=/sdp/ HTTP/1.1" 200 1987
Initiating TLS
```

Here I have to wait several minutes on OpenBSD. Then it goes further:

```
search_s('dc=domain,dc=ru', 2, 'sAMAccountName=%(user)s') returned 1 objects: cn=Ivan Ivanov,ou=it,dc=domain,dc=ru
cn=Ivan Ivanov,ou=it,dc=domain,dc=ru is a member of cn=sd,ou=groups,dc=domain,dc=ru
Populating Django user i.ivanov
cn=Ivan Ivanov,ou=it,dc=domain,dc=ru is a member of cn=sd,ou=groups,dc=domain,dc=ru
cn=Ivan Ivanov,ou=it,dc=domain,dc=ru is a member of cn=sd,ou=groups,dc=domain,dc=ru
[02/Dec/2021 22:45:50] "POST /accounts/login/ HTTP/1.1" 302 0
```

By the way, I have an OpenLDAP client installed as a dependency on the same OpenBSD machine, and a .ldaprc file in my home directory with some parameters set:

```
BASE dc=domain,dc=ru
BINDDN cn=bind,ou=IT,dc=domain,dc=ru
URI ldap://dc.domain.ru
SIZELIMIT 12000
TIMELIMIT 15
TLS_CACERT /home/myuser/samba-ca.pem
TLS_REQCERT demand
```

With this file in my profile I can run ldapsearch like this:

```
ldapsearch -x -ZZ -W "(sAMAccountName=bind)"
```

After I enter my LDAP password it succeeds without any pause. Similar parameters are used in the LDAP-related part of the Django settings.py:

```
import ldap
from django_auth_ldap.config import LDAPSearch

AUTH_LDAP_SERVER_URI = 'ldap://dc.domain.ru'
AUTH_LDAP_BIND_DN = "CN=bind,OU=IT,DC=domain,DC=ru"
AUTH_LDAP_BIND_PASSWORD = "mypasswd"
AUTH_LDAP_AUTHORIZE_ALL_USERS = True
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "dc=domain,dc=ru",
    ldap.SCOPE_SUBTREE,
    "sAMAccountName=%(user)s"
)
...
...
AUTH_LDAP_START_TLS = True
AUTH_LDAP_GLOBAL_OPTIONS = {
    ldap.OPT_X_TLS_CACERTFILE: '/home/myuser/samba-ca.pem',
    ldap.OPT_X_TLS_REQUIRE_CERT: ldap.OPT_X_TLS_DEMAND,
}
```

What OpenBSD-specific option may I be missing in my configuration?

--
Maksim Rodin
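One thing worth trying while debugging a delay like this (a sketch, not a confirmed fix): django-auth-ldap can pass python-ldap connection options via its AUTH_LDAP_CONNECTION_OPTIONS setting, including a network timeout, so a stalled connect or lookup fails quickly and visibly instead of hanging for minutes. The timeout values below are illustrative:

```python
# Additional settings.py fragment -- illustrative values, assuming
# django-auth-ldap's AUTH_LDAP_CONNECTION_OPTIONS setting is available
# in the installed version.
import ldap

AUTH_LDAP_CONNECTION_OPTIONS = {
    # Fail the TCP connect / TLS handshake after 10 seconds
    # instead of waiting on the OS default.
    ldap.OPT_NETWORK_TIMEOUT: 10,
    # Bound individual LDAP operations as well.
    ldap.OPT_TIMEOUT: 15,
}
```

If the login then fails after ~10 seconds rather than succeeding after minutes, the time is being lost at the network/resolver layer rather than inside the LDAP search itself.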
Re: sysctl hw.sensors.lm1 shows only one fan
This works for me; now I see all of my fans. Some of them look funny, because they are not actually connected and show -2560 RPM, status critical. But my MB is very old, and I do not expect it to work perfectly. Thanks a lot!

On Fri, 24 Sep 2021 17:53:51, Stuart Henderson wrote:
> On 2021-09-24, Maksim Rodin wrote:
> > My system has several fans connected to the MB (Supermicro X8SIL-F).
> > "sysctl hw.sensors.lm1" shows only fan0.
> > Is there a way to make my system (OpenBSD 6.9 stable) show more fans in
> > that output?
>
> Try "boot -c" and "enable ipmi" "quit".
>
> Apparently some machines had a problem with this (IIRC there was some
> IBM server with a problem) but most systems don't have a problem, and
> some systems (including most Supermicro) do attach sensors there.
>
> --
> Please keep replies on the mailing list.

--
Best regards
Maksim
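The bogus negative readings from unconnected fan headers can be filtered in software when scripting around the sensor output. A small sketch (Python chosen only for illustration; the sample lines mimic `sysctl hw.sensors` output and are not real readings from this board):

```python
def parse_fan_sensors(sysctl_lines):
    """Split sysctl-style fan sensor lines into plausible readings
    and ones that look like unconnected headers."""
    ok, suspect = {}, {}
    for line in sysctl_lines:
        key, _, value = line.partition("=")
        if ".fan" not in key:
            continue                      # only fan sensors
        rpm = int(value.split()[0])       # e.g. "1205 RPM" -> 1205
        # Negative or zero RPM (like the -2560 RPM above) means the
        # header is almost certainly not connected.
        (ok if rpm > 0 else suspect)[key] = rpm
    return ok, suspect

sample = [
    "hw.sensors.lm1.fan0=1205 RPM",
    "hw.sensors.lm1.fan1=-2560 RPM",    # unconnected header
    "hw.sensors.lm1.temp0=42.00 degC",  # ignored, not a fan
]
good, bad = parse_fan_sensors(sample)
print(good)  # {'hw.sensors.lm1.fan0': 1205}
print(bad)   # {'hw.sensors.lm1.fan1': -2560}
```

Feeding it the real output of `sysctl hw.sensors.lm1` (one line per sensor) would let a monitoring script alarm only on fans that are genuinely present.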
sysctl hw.sensors.lm1 shows only one fan
My system has several fans connected to the MB (Supermicro X8SIL-F), but "sysctl hw.sensors.lm1" shows only fan0. Is there a way to make my system (OpenBSD 6.9 stable) show more fans in that output?

--
Regards
Maksim