Re: Options to have relayd add IP to pf?
> On 23 August 2024 at 17:12, Peter N. M. Hansteen wrote:
>
> On Fri, Aug 23, 2024 at 12:54:20PM +0200, Joel Carnat wrote:
>> I have a server which gets flooded with unsolicited HTTP requests. So far, I
>> use relayd filters to identify those requests and block them, at relayd
>> level. It works, as they never reach the web server, but relayd is still
>> doing the work of blocking them.
>>
>> I thought of parsing relayd logs to get those IPs and add them to a pf block
>> table, using an automated script.
>
> If the problem is that there are a lot of requests from the same hosts coming
> in rapid-fire, it is possible that state tracking rules with overloading
> could be the thing to try.
>
> The other thing that comes to mind is to put together something that parses
> the logs and adds offenders to a table of addresses that PF will block.
>
> Something along the lines of what is described in
> https://nxdomain.no/~peter/forcing_the_password_gropers_through_a_smaller_hole.html
> (also prettified but tracked at
> https://bsdly.blogspot.com/2017/04/forcing-password-gropers-through.html)
> could be what you need (some assembly required, obviously).
>
> - Peter

Unfortunately, those are not single IPs spamming. It looks more like infected
computers and/or computer farms sending individual requests at a "normal"
rate. There are just thousands of them. The only way to identify them is by
looking at the User-Agent and/or the HTTP request body. So pf alone won't be
enough here.

I thought I could use some matching relayd rules that would tag the
connections so that pf blocks them. But it seems pftag is not made for this.

Writing a script and feeding it using syslog is doable. But I hoped I could
use only relayd and pf.
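The "script fed from the logs" idea could be sketched roughly as below. This is a hypothetical sketch, not relayd's actual log format: the line layout, the bad User-Agent patterns, and the pf table name "badhosts" are all assumptions to be adapted.

```python
# Hypothetical sketch: pick offending client IPs out of relayd log lines
# by User-Agent, then feed them to a pf table with pfctl(8).
# The log line format and the UA patterns below are assumptions.
import re
import subprocess

BAD_UA = re.compile(r'User-Agent:\s*(?:curl|python-requests|Go-http-client)',
                    re.I)
IPV4 = re.compile(r'\b(\d{1,3}(?:\.\d{1,3}){3})\b')

def offenders(lines):
    """Return the first IPv4 address of every line matching a bad UA."""
    hits = set()
    for line in lines:
        if BAD_UA.search(line):
            m = IPV4.search(line)
            if m:
                hits.add(m.group(1))
    return hits

def block(ips, table="badhosts"):
    """Add the collected IPs to a pf table. The table must exist in
    pf.conf (e.g. 'table <badhosts> persist') and this needs root."""
    if ips:
        subprocess.run(["pfctl", "-t", table, "-T", "add", *sorted(ips)],
                       check=True)
```

Fed continuously from the relayd log file (e.g. via `tail -f`), this would approximate the script approach, at the cost of running one extra process.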
Options to have relayd add IP to pf?
Hi,

I have a server which gets flooded with unsolicited HTTP requests. So far, I
use relayd filters to identify those requests and block them, at relayd
level. It works, as they never reach the web server, but relayd is still
doing the work of blocking them.

I thought of parsing relayd logs to get those IPs and add them to a pf block
table, using an automated script. I also thought of using tags to forward
the connections to a program that would add the IP to the pf block table.

Would there be a simpler / smarter way to have relayd add an IP matching a
block rule into a pf table?

Thanks,
Joel C.
Re: pf.conf syntax highlighting in your favourite editor
I think vim already has it: share/vim/${P}/syntax/pf.vim

> On 23 July 2024 at 16:49, Tom Smyth wrote:
>
> Folks,
> I was wondering whether anyone had tried to make syntax highlighting for
> pf.conf syntax, to help folks new to pf.conf in the editor of their choice.
>
> I was thinking that this approach might be lower-hanging fruit than trying
> to write a rule editor in nsh (for now at least), and it might be more
> generally useful for the community, or those in the community who like
> syntax highlighting.
>
> I saw some pf.conf syntax highlighting for the Sublime Text editor. I was
> just wondering if anyone had done highlighting for vim, emacs, etc., or at
> least something I can get started with.
>
> Thanks again,
> Tom Smyth
Re: Configure User-Agent in relayd HTTP check?
On 29/05/2024 at 14:45, Kirill A. Korinsky wrote:
> On Wed, 29 May 2024 12:19:15 +0100, Joel Carnat wrote:
>> Is there a way to specify a User-Agent value for the check http or shall
>> I rather tell relayd to validate on "code 418"?
>
> Here are two possible ways to work around it.
>
> 1. Use `check script /some/script`, which uses curl, wget, ftp or any
>    other way to make an HTTP call that is accepted by that server.
>
> 2. Use `check send "HEAD /health HTTP/1.1\r\nHost: host\r\nUser-Agent:
>    dummy\r\n\r\n" expect "200 OK HTTP/1.1"` (I haven't tested it, it may
>    contain typos, but it should give the idea).

Thank you! I went for solution 2 but it seems the string is not sent
properly by relayd.

Using the console, I get a proper reply from the server:

  # echo -n "HEAD /health HTTP/1.1\r\nHost: test.home.arpa\r\nUser-Agent: relayd/7.5\r\n\r\n" | nc 192.168.0.125 8080 | head
  HTTP/1.1 200 OK
  Cache-Control: no-store

A tcpdump looks like:

  00:00:00.000396 IP 192.168.0.201.42445 > 192.168.0.125.8080: Flags [P.], seq 1:70, ack 1, win 256, options [nop,nop,TS val 3095794693 ecr 1], length 69: HTTP: HEAD /health HTTP/1.1
  HEAD /health HTTP/1.1
  Host: test.home.arpa
  User-Agent: relayd/7.5
  00:00:00.001924 IP 192.168.0.125.8080 > 192.168.0.201.42445: Flags [P.], seq 1:379, ack 70, win 4197, options [nop,nop,TS val 1 ecr 3095794693], length 378: HTTP: HTTP/1.1 200 OK
  HTTP/1.1 200 OK
  Cache-Control: no-store

Copy/pasting the HTTP request into relayd this way, the backend server does
not accept it:

  check send "HEAD /health HTTP/1.1\r\nHost: test.home.arpa\r\nUser-Agent: relayd/7.5\r\n\r\n" expect "HTTP/1.1 200 OK"

A tcpdump looks like:

  00:00:00.003641 IP 192.168.0.201.6097 > 192.168.0.125.8080: Flags [P.], seq 1:78, ack 1, win 256, options [nop,nop,TS val 3359675463 ecr 1], length 77: HTTP: HEAD /health HTTP/1.1\r\nHost: test.home.arpa\r\nUser-Agent: relayd/7.5\r\n\r\n[!http]
  00:00:00.196735 IP 192.168.0.125.8080 > 192.168.0.201.6097: Flags [.], ack 78, win 4197, options [nop,nop,TS val 2 ecr 3359675463], length 0
  00:00:00.008892 IP 192.168.0.201.6097 > 192.168.0.125.8080: Flags [F.], seq 78, ack 1, win 256, options [nop,nop,TS val 3359675673 ecr 2], length 0
  00:00:00.66 IP 192.168.0.125.8080 > 192.168.0.201.6097: Flags [.], ack 79, win 4197, options [nop,nop,TS val 2 ecr 3359675673], length 0
  00:00:00.000411 IP 192.168.0.125.8080 > 192.168.0.201.6097: Flags [P.], seq 1:104, ack 79, win 4197, options [nop,nop,TS val 2 ecr 3359675673], length 103: HTTP: HTTP/1.1 400 Bad Request
  HTTP/1.1 400 Bad Request
  Content-Type: text/plain; charset=utf-8
  Connection: close

Note the [!http] marker and the length (77 instead of 69): the \r\n
sequences appear to be sent as literal backslash characters, so the backend
sees one malformed request line.

Using single quotes or real CRLF in the relayd configuration does not solve
this issue. I'd rather not use the script option, as I fear this would lead
to lots of forks :-/

Regards,
Joel C.
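When debugging a check like this, a tiny client that is guaranteed to send real CRLFs can help isolate whether the backend or relayd's escaping is at fault. A hypothetical sketch; the path, host name and header values are the thread's example values:

```python
# Hypothetical debugging sketch: send the same HEAD request as the nc(1)
# test in this thread, with guaranteed real CR LF separators, and return
# the backend's status line.
import socket

def build_request(vhost, ua):
    """Build the HEAD request bytes with literal CR LF pairs."""
    return ("HEAD /health HTTP/1.1\r\n"
            f"Host: {vhost}\r\n"
            f"User-Agent: {ua}\r\n"
            "\r\n").encode("ascii")

def probe(host, port, vhost="test.home.arpa", ua="relayd/7.5"):
    """Connect, send the request, and return the first response line."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_request(vhost, ua))
        return s.recv(4096).decode("ascii", "replace").splitlines()[0]
```

If probe() gets a 200 while relayd's check keeps failing, the handling of \r\n escapes in relayd.conf is the prime suspect.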
Configure User-Agent in relayd HTTP check?
Hi,

Some web applications don't like it when relayd connects to them for
health-checks without providing a User-Agent HTTP header. They return an
HTTP 418. So something like

  relay "ipv4" {
      listen on www.example.com port 443 tls
      protocol "https"
      forward to port 8080 check http "/health" code 200
  }

will not work.

Is there a way to specify a User-Agent value for the check http, or shall I
rather tell relayd to validate on "code 418"?

Thank you,
Joel C.
Re: How to use random outgoing network aliases?
On 3/12/24 at 15:40, Stuart Henderson wrote:
> On 2024-03-12, Joel Carnat wrote:
>> Hi,
>> I have a server with a single NIC but several IPs configured:
>>
>>   # cat /etc/hostname.vio0
>>   inet 192.0.2.10 255.255.255.0
>>   inet alias 192.0.2.11 255.255.255.0
>>   inet alias 192.0.2.12 255.255.255.0
>>
>> The default gateway is set to 192.0.2.1 in /etc/mygate. I would like
>> outgoing network traffic to randomly appear to come from any of those
>> IPs.
>
> Can be done with PF nat-to: either one rule with an address pool, or
> multiple rules with probabilities (e.g. for three: 33%, 50%, plus one with
> no probability to catch the rest).

Thank you both. I have it working.
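Stuart's first option (one rule with an address pool) might look like the untested sketch below, using the thread's example addresses; pf.conf(5) documents `random` as a pool option for `nat-to`:

```
# pf.conf sketch (untested): source-NAT outgoing traffic on vio0 to one
# of the three configured addresses, chosen at random per state.
match out on vio0 inet \
    nat-to { 192.0.2.10, 192.0.2.11, 192.0.2.12 } random
```

The multiple-rules-with-probability variant trades the pool option for explicit `probability 33%` / `probability 50%` rules plus a catch-all, as described in the reply above.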
How to use random outgoing network aliases?
Hi,

I have a server with a single NIC but several IPs configured:

  # cat /etc/hostname.vio0
  inet 192.0.2.10 255.255.255.0
  inet alias 192.0.2.11 255.255.255.0
  inet alias 192.0.2.12 255.255.255.0

The default gateway is set to 192.0.2.1 in /etc/mygate.

I would like outgoing network traffic to randomly appear to come from any of
those IPs. I've read faq/pf/pools.html and the pf.conf and route manpages,
but I don't get which directive would be the right one to use.

Can this be achieved with pf and/or route? Or do I have to look at setting
up routing domains attached to the interface aliases and have several
daemon instances run in those domains?

Thanks,
Joel C.
Re: relayd fallback when using tag/tagged
On 2/15/24 at 10:33, Michael Hekeler wrote:
>> Hello,
>> I'm trying to configure relayd(8) to use tags, to allow legit host names
>> only, modify HTTP headers, and fall back. But I can't get it working
>> properly.
>
> I don't understand exactly what you want to achieve. Do you want:
>
> A. Requests with http header "www.example" going to primary. And going to
>    fallback if primary is down. And block all other requests.
>
> or:
>
> B. Requests with http header "www.example" going to primary. And all
>    others going to fallback. And block nothing (= all requests are served
>    either by primary or by fallback).

It looks more like A. I want to relay to the primary by default. If the
primary fails, then I want to relay to a secondary, which is just a static
webpage saying "work in progress, be back soon".

> If A) then put both servers in the table and let the HCE decide which host
> is up. Something like that (header check ignored in the example):
>
>   table {192.0.2.4 192.0.2.7}
>   redirect www {
>       listen on 192.0.2.30 port 80
>       forward to check http "/" code 200
>   }

This implies "mode roundrobin", which is not what I want. The secondary
should only be displayed when the primary is down.

> If B) then you need an additional pass rule in your protocol. Something
> like that (to be honest I don't know why you need the tag here, so I
> ignored it in the example):
>
>   http protocol www {
>       pass request quick header "Host" value "www.example" \
>           forward to
>       pass request forward to
>       block
>   }

I need tags because relayd(8) exposes several FQDNs and sets various headers
depending on them.

Using such a configuration:

#-8<---
table { 192.0.2.4 }
table { 192.0.2.7 }

http protocol www {
    block
    match request header "Host" value "www.example" tag "example"
    pass request tagged "example" forward to
}

relay www {
    listen on 192.0.2.30 port 80
    protocol www
    forward to port 80 check http "/" code 200
    forward to port 80
}
#-8<---

forwards all tagged HTTP traffic to the primary server. But if it is turned
off, relayd(8) only replies with an error rather than sending the traffic
to the fallback server.

Removing tags and using a simple "pass" directive in the protocol (as
described in the man page) does work as expected regarding the fallback
server.

Is there a way to use both tags and fallback with relayd(8) to mimic
Apache's Failover[1] configuration with "ProxyPass" and "BalancerMember
(...) status=+H"?

Thank you,
Joel C.

[1] https://httpd.apache.org/docs/trunk/howto/reverse_proxy.html#failover

--
Bonne journée,
Joel C.
Tél: +33 663541230
Re: relayd fallback when using tag/tagged
On 13/02/2024 at 10:07, Manuel Giraud wrote:
> Joel Carnat writes:
>> Hello,
>> I'm trying to configure relayd(8) to use tags, to allow legit host names
>> only, modify HTTP headers, and fall back. But I can't get it working
>> properly. Using such a configuration:
>>
>> #-8<---
>> table { 192.0.2.4 }
>> table { 192.0.2.7 }
>>
>> http protocol www {
>>     block
>>     match request header "Host" value "www.example" tag "example"
>>     pass request tagged "example" forward to
>> }
>
> I've not tested it but maybe you're missing this last rule in the previous
> block:
>
>   pass request forward to

That doesn't work either. If I add it, with or without a tagged directive,
it becomes the only effective rule (last matching rule?) and it never
reaches the primary server.
Re: relayd fallback when using tag/tagged
The proposed rules don't seem to change relayd(8)'s behaviour. It keeps
sending HTTP traffic to the primary server and fails when it is down. The
secondary / fallback server is never used.

Starting status:

  relayd[26195]: host 192.0.2.7, check http code (14ms,http code ok), state unknown -> up, availability 100.00%
  relayd[26195]: host 192.0.2.4, check http code (44ms,http code ok), state unknown -> up, availability 100.00%

Stopping the backend server:

  relayd[21988]: host 192.0.2.4, check http code (3ms,http code malformed), state up -> down, availability 95.65%
  relayd[54506]: host 192.0.2.4, check http code (1ms,tcp connect failed), state up -> down, availability 99.44%

HTTP request while the primary host is down:

  relayd[63036]: relay www4tls, session 6 (1 active), example, 1.2.3.4 -> :0, session failed, [ww.example/] [Host: www.example] [User-Agent: curl/7.81.0] GET

On 13/02/2024 at 04:29, l...@trungnguyen.me wrote:
> Hi
>
> On February 13, 2024 12:20:26 AM UTC, Joel Carnat wrote:
>> Hello,
>> I'm trying to configure relayd(8) to use tags, to allow legit host names
>> only, modify HTTP headers, and fall back. But I can't get it working
>> properly. Using such a configuration:
>>
>> #-8<---
>> table { 192.0.2.4 }
>> table { 192.0.2.7 }
>>
>> http protocol www {
>>     block
>>     match request header "Host" value "www.example" tag "example"
>>     pass request tagged "example" forward to
>
> Try:
>
>   match request header "Host" value "www.example" tag example
>   pass forward to tagged example
>
>> }
>>
>> relay www {
>>     listen on 192.0.2.30 port 80
>>     protocol www
>>     forward to port 80 check http "/" code 200
>>     forward to port 80
>> }
>> #-8<---
>>
>> forwards all tagged HTTP traffic to the primary server. But if it is
>> turned off, relayd(8) only replies with an error rather than sending the
>> traffic to the fallback server.
>
> What errors are you having?
>
>> Removing tags and using a simple "pass" directive in the protocol (as
>> described in the man page) does work as expected regarding the fallback
>> server.
>>
>> Is there a way to use both tags and fallback with relayd(8) to mimic
>> Apache's Failover[1] configuration with "ProxyPass" and "BalancerMember
>> (...) status=+H"?
>>
>> Thank you,
>> Joel C.
>>
>> [1] https://httpd.apache.org/docs/trunk/howto/reverse_proxy.html#failover
>
> https://man.openbsd.org/relayd.conf.5#tag
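Putting the suggestion above together, the whole configuration might look like the untested sketch below. The table names <primary> and <fallback> are placeholders (the original names were lost in the thread's formatting), and whether relayd's rule-matching then still prefers the primary over the fallback is exactly the open question of this thread:

```
table <primary> { 192.0.2.4 }
table <fallback> { 192.0.2.7 }

http protocol www {
    # tag legit Host headers instead of blocking inline
    match request header "Host" value "www.example" tag "example"
    # tagged traffic goes to the primary pool
    pass request tagged "example" forward to <primary>
    # remaining passed traffic goes to the fallback pool
    pass request forward to <fallback>
    block
}

relay www {
    listen on 192.0.2.30 port 80
    protocol www
    forward to <primary> port 80 check http "/" code 200
    forward to <fallback> port 80
}
```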
relayd fallback when using tag/tagged
Hello,

I'm trying to configure relayd(8) to use tags, to allow legit host names
only, modify HTTP headers, and fall back. But I can't get it working
properly.

Using such a configuration:

#-8<---
table { 192.0.2.4 }
table { 192.0.2.7 }

http protocol www {
    block
    match request header "Host" value "www.example" tag "example"
    pass request tagged "example" forward to
}

relay www {
    listen on 192.0.2.30 port 80
    protocol www
    forward to port 80 check http "/" code 200
    forward to port 80
}
#-8<---

forwards all tagged HTTP traffic to the primary server. But if it is turned
off, relayd(8) only replies with an error rather than sending the traffic
to the fallback server.

Removing tags and using a simple "pass" directive in the protocol (as
described in the man page) does work as expected regarding the fallback
server.

Is there a way to use both tags and fallback with relayd(8) to mimic
Apache's Failover[1] configuration with "ProxyPass" and "BalancerMember
(...) status=+H"?

Thank you,
Joel C.

[1] https://httpd.apache.org/docs/trunk/howto/reverse_proxy.html#failover
Re: Donations
> On 26 October 2023 at 16:38, Ingo Schwarze wrote:
>
> The advice is extremely simple:
>
> If you can, donate directly to the OpenBSD project because that means
> 1. the donation can be used for any purpose, including all purposes
>    that can be funded by the foundation and some that can't
> 2. it causes less overhead, less paperwork, less bookkeeping effort,
>    hence fewer distractions of developers from actual development
>
> Use the Foundation only if *you* have a specific reason why your
> specific donation can only be made through the Foundation and not
> directly. If you don't know, then it seems to me you have no specific
> reason, so donating directly is better.
>
> Yours,
> Ingo

Maybe it should be written this way on the donations.html page of the web
site. Having the reference to "PayPal recurring" for the Foundation coming
first made me assume this was the preferred way to donate. But if I
understand you properly, the other PayPal links should rather be used,
right?

Thanks,
Joel C.
Require host-name from DHCP clients
Hi,

Because of Apple's Private Address feature, my static IP allocations based
on MAC address (hardware ethernet) don't work anymore. Looking at
dhcpd.leases, some devices provide a client-hostname value, but not all of
them.

Is there a dhcpd.conf configuration parameter that forces DHCP clients to
send a client-hostname in their DHCP request? And if so, can this
information be used by dhcpd(8) to apply a fixed-address to those devices?

Thank you,
Joel C.
Usage of pf(4) with tap(4) and veb(4)
Hi,

I'd like to confirm I understood how pf works in a mixed veb/vport/tap
environment. I'm using OpenBSD 7.3/amd64 (if that matters).

I have a physical host that runs services (relayd, httpd...) the
"classical" way and also provides VMs using vmd. I have a couple of public
IPs that are either assigned to the host (via vportN) or to some VMs (via
tapN). I'm doing all the IP filtering on the host's pf (because some VMs
are Linux and I don't know iptables).

Here's a summary of my configuration:

  # cat /etc/hostname.em0
  up

  # cat /etc/hostname.vport0
  rdomain 0
  inet aa.bb.cc.5 255.255.255.0
  !route -n add -inet default aa.bb.cc.1
  up

  # cat /etc/hostname.vport1
  rdomain 1
  inet aa.bb.cc.6 255.255.255.0
  !route -T 1 -n add -inet default aa.bb.cc.1
  up

  # cat /etc/hostname.tap2
  rdomain 2
  up

  # cat /etc/hostname.veb0
  add em0
  add vport0
  add vport1
  add tap2
  up

  # cat /etc/vm.conf
  (...)
  switch "wan" {
      interface veb0
  }
  (...)
  vm linux {
      (...)
      interface tap2 {
          rdomain 2
          switch "wan" # configure enp0s2 with aa.bb.cc.7/24
      }
      (...)

My initial pf configuration looked like:

  block return log
  pass on lo
  pass in on vport0 proto tcp to vport0 port ssh
  pass in on vport1 proto tcp to vport1 port { http, https }
  pass in on tap2 proto tcp to aa.bb.cc.7 port ssh
  pass out

This filters properly on vport0 and vport1. But nothing is filtered on
tap2: the http service running in the VM is accessible via aa.bb.cc.7.

First question: is it expected that pf doesn't filter inbound traffic on a
tap interface by default? Or is it specific to the fact that tap2 belongs
to veb0?

After re-reading veb(4), I ran `ifconfig veb0 link1` and could achieve the
wished-for filtering by modifying my pf configuration as such:

  block return log
  pass on lo
  pass on em0
  pass in on vport0 proto tcp to vport0 port ssh
  pass in on vport1 proto tcp to vport1 port { http, https }
  pass out on tap2 proto tcp to aa.bb.cc.7 port ssh
  pass out on vport0
  pass out on vport1
  pass in on tap2

Second question: is this the proper way to configure veb0 and pf, or is
there a "better" way of doing the filtering?

Thanks for feedback,
Joel C.
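Since `ifconfig veb0 link1` is what enabled the filtering, it presumably needs to survive a reboot as well. An untested hostname.if(5) sketch (per veb(4), link1 enables pf filtering of traffic on the bridge's member ports):

```
# /etc/hostname.veb0 (untested sketch): same members as above,
# plus link1 so pf filters the bridged traffic persistently
add em0
add vport0
add vport1
add tap2
link1
up
```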
Re: access rdomain0 localhost from rdomainN
On Mon, May 15, 2023 at 10:21:55AM -0000, Stuart Henderson wrote:
>
> I think your options are 1) run a second copy (I suggest symlinking
> rc.d/unbound -> e.g. rc.d/unbound1, and setting unbound1_rtable=1),
> or 2) leak the traffic between tables using a PF rule, I have this
> on my laptop:
>
>   pass out quick on rdomain 2 to 127.0.0.1 nat-to 127.0.0.1 rtable 0

This works great, thank you!

For the record, there was a "set skip on lo" directive that came with the
pf.conf example file. I had to remove it for the NAT rule to work.

> (in my case I have a wg tunnel in rdomain 2 for certain traffic
> but would like to use unwind on the main table for DNS lookups).
Re: access rdomain0 localhost from rdomainN
On Sun, May 14, 2023 at 10:32:15PM -0600, Zack Newman wrote:
> On 2023-05-14, Joel Carnat wrote:
>> I have unbound listening on lo0 (127.0.0.1, rdomain 0) and resolv.conf
>> configured with "nameserver 127.0.0.1".
>
> You can also have unbound(8) listen on lo1.

I have tried that but this seems to cause trouble with IPv6.

  # grep 'interface:' /var/unbound/etc/unbound.conf
  #interface: 127.0.0.1
  interface: lo0
  interface: lo1
  #interface: ::1

  # ifconfig lo0
  lo0: flags=8049 mtu 32768
          index 6 priority 0 llprio 3
          groups: lo
          inet6 ::1 prefixlen 128
          inet6 fe80::1%lo0 prefixlen 64 scopeid 0x6
          inet 127.0.0.1 netmask 0xff00

  # ifconfig lo1
  lo1: flags=8049 rdomain 1 mtu 32768
          index 13 priority 0 llprio 3
          groups: lo
          inet6 ::1 prefixlen 128
          inet6 fe80::1%lo1 prefixlen 64 scopeid 0xd
          inet 127.0.0.1 netmask 0xff00

  # unbound-checkconf
  [1684134988] unbound-checkconf[16790:0] fatal error: ::1 present twice, cannot bind the same ports twice. The first entry is address ::1 from interface: lo0 and the second is address ::1 from interface: lo1

> Without more information (for example, showing what pf.conf(5) contains)
> there is no way we can help you.

As of now, I have nothing in pf.conf. I have tried things but they didn't
work at all:

  #pass on rdomain 1
  #match out on rdomain 1 to 127.0.0.1 nat-to (lo0) rtable 0

> I have two rdomain(4)s, and I have no issue pinging both lo(4) interfaces
> (both interfaces have the IPv6 and IPv4 loopback addresses assigned to
> them):

Using ping with '-V' works here too.

>> Is it possible to access lo0 from other rdomains?
>
> There shouldn't be anything you have to do to access the loopback
> interface within its own rdomain; however, if you want to access an
> interface that is part of a separate rdomain, you will likely need to
> instruct pf to use a separate rtable(4).

That's what I suspected. What would the pf rule look like to implement
"from lo1 on rdomain 1, I want to access lo0 on rdomain 0"?

Thanks,
Joel C.
access rdomain0 localhost from rdomainN
Hi,

I have configured rdomain 1 and bound daemons (httpd and relayd) to it.
They work as expected but I still have issues with DNS resolution on
localhost.

I have unbound listening on lo0 (127.0.0.1, rdomain 0) and resolv.conf
configured with "nameserver 127.0.0.1". When I try to use it from my other
rdomain, the connection (from nslookup or dig) is not possible. Even
pinging the IP is not working:

  # route -T 0 exec ping -n -c 1 127.0.0.1
  PING 127.0.0.1 (127.0.0.1): 56 data bytes
  64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=0.138 ms

  # route -T 1 exec ping -n -c 1 127.0.0.1
  PING 127.0.0.1 (127.0.0.1): 56 data bytes
  --- 127.0.0.1 ping statistics ---
  1 packets transmitted, 0 packets received, 100.0% packet loss

I can still ping external IPs though:

  # route -T 1 exec ping -n -c 1 9.9.9.9
  PING 9.9.9.9 (9.9.9.9): 56 data bytes
  64 bytes from 9.9.9.9: icmp_seq=0 ttl=58 time=0.453 ms

I have tried using the "reject" and "pf" examples from the rdomain manpage
but they don't solve my issue. I'm not even sure I understood what they
were supposed to do :)

Is it possible to access lo0 from other rdomains?

Thanks,
Joel C.
Re: hardware
> On 18 April 2023 at 11:30, Stuart Henderson wrote:
>
> On 2023-04-18, Mischa wrote:
>>> On 2023-04-17 23:37, Mike Larkin wrote:
>>>> On Mon, Apr 17, 2023 at 02:21:14PM -0600, Theo de Raadt wrote:
>>>>> Gustavo Rios wrote:
>>>>>> What is the best supported servers by OpenBSD ?
>>>>>
>>>>> The silver ones work a little bit better than the black ones.
>>>>
>>>> disagree. All my long running servers are the black ones.
>>
>> I concur. The black ones are the best!
>> They also need to have blue blinkenlights.
>
> No love for the blue ones?

If SunFire V100s count as blue, I do.
Re: Using gzip-static with httpd location
On 23/03/2023 at 22:22, Jared Harper wrote:
> On Thursday, March 23rd, 2023 at 2:15 PM, Jordan Geoghegan wrote:
>> On 3/9/23 17:31, Joel Carnat wrote:
>>> Hi,
>>> I just tried applying gzip compression on a simple test web site using
>>> httpd and the gzip-static option, on OpenBSD 7.2/amd64.
>>>
>>> As I understood the man page, gzip-static is supposed to be used inside
>>> the server block, like listen, errdocs or tls. But doing so does not
>>> seem to enable gzip compression for files defined in a location block.
>>>
>>> What fails:
>>>
>>>   server "default" {
>>>       listen on 127.0.0.1 port 80
>>>       gzip-static
>>>       block drop
>>>       location "/.well-known/acme-challenge/*" {
>>>           root "/acme"
>>>           request strip 2
>>>           pass
>>>       }
>>>       location "/www/*" {
>>>           root "/test"
>>>           request strip 1
>>>           pass
>>>       }
>>>   }
>>>
>>> What works:
>>>
>>>   server "default" {
>>>       listen on 127.0.0.1 port 80
>>>       block drop
>>>       location "/.well-known/acme-challenge/*" {
>>>           root "/acme"
>>>           request strip 2
>>>           pass
>>>       }
>>>       location "/www/*" {
>>>           gzip-static
>>>           root "/test"
>>>           request strip 1
>>>           pass
>>>       }
>>>   }
>>>
>>> As you may see, what works is using gzip-static inside a location block
>>> and not outside. I've tested it using Firefox, curl and
>>> https://gtmetrix.com. All confirm gzip-static must be inside the
>>> location block to provide compressed resources.
>>>
>>> Here's an example of the curl command I used:
>>>
>>>   # curl -I --compressed http://localhost:80/www/index.html
>>>   HTTP/1.1 200 OK
>>>   Connection: keep-alive
>>>   Content-Encoding: gzip
>>>   Content-Length: 1083
>>>   Content-Type: text/html
>>>   Date: Fri, 10 Mar 2023 01:27:53 GMT
>>>   Last-Modified: Fri, 10 Mar 2023 00:53:26 GMT
>>>   Server: OpenBSD httpd
>>>
>>> Is this an expected behaviour?
>>>
>>> Regards, Joel C.
>>
>> Can confirm - I recently stumbled over this confusing behaviour as well.
>> Curious if this is a bug or a man page issue.
>>
>> Regards, Jordan
>
> On my server (7.2 amd64) I have gzip-static set in the server block as
> documented, and it appears to work as expected. I am sorry that it
> probably doesn't help your situation, but maybe the differences in
> configuration can help point you in the right direction?
>
> -- config --
>
>   server "hrpr.us" {
>       listen on * port 80
>       location "/.well-known/acme-challenge/*" {
>           root "/acme"
>           request strip 2
>       }
>       location * {
>           block return 302 "https://$HTTP_HOST$REQUEST_URI"
>       }
>   }
>
>   server "hrpr.us" {
>       listen on * tls port 443
>       alias www.hrpr.us
>       gzip-static
>       tls {
>           certificate "/etc/ssl/hrpr.us.fullchain.pem"
>           key "/etc/ssl/private/hrpr.us.key"
>       }
>       location "/pub/*" {
>           directory auto index
>       }
>       location "/.well-known/acme-challenge/*" {
>           root "/acme"
>           request strip 2
>       }
>       root "/htdocs/hrpr.us/"
>   }
>
> -- curl results --
>
>   ~% curl -I --compressed https://hrpr.us
>   HTTP/1.1 200 OK
>   Connection: keep-alive
>   Content-Encoding: gzip
>   Content-Length: 749
>   Content-Type: text/html
>   Date: Thu, 23 Mar 2023 21:18:20 GMT
>   Last-Modified: Thu, 23 Mar 2023 20:49:21 GMT
>   Server: OpenBSD httpd
>
>   ~% curl -I https://hrpr.us
>   HTTP/1.1 200 OK
>   Connection: keep-alive
>   Content-Length: 1531
>   Content-Type: text/html
>   Date: Thu, 23 Mar 2023 21:18:23 GMT
>   Last-Modified: Thu, 23 Mar 2023 20:49:21 GMT
>   Server: OpenBSD httpd

The only big difference I see between your configuration and mine is that I
have a default block, then pass each location. I wonder if (and how) this
can be a clue.
Re: Using gzip-static with httpd location
On 10/03/2023 at 16:41, Marcus MERIGHI wrote:
> Hello,
>
> j...@carnat.net (Joel Carnat), 2023.03.10 (Fri) 02:31 (CET):
>> I just tried applying gzip compression on a simple test web site using
>> httpd and the gzip-static option, on OpenBSD 7.2/amd64. As I understood
>> the man page, gzip-static is supposed to be used inside the server
>> block, like listen, errdocs or tls. But doing so does not seem to enable
>> gzip compression for files defined in a location block.
>
> You have to provide the .gz file manually. httpd(8) does not create the
> gzip file content on the fly.
>
> This thread: https://marc.info/?t=16360323104 from when the feature was
> added, starts with the OP saying:
>
>   In other words, if a client supports gzip compression, when "file" is
>   requested, httpd will check if "file.gz" is available to serve.

Well, the .gz file does exist. And I can switch from working state to
non-working state by just moving the gzip-static option from inside the
location section to outside of it (still inside the server section).

> Also, from httpd.conf(5):
>
>   Enable static gzip compression to save bandwidth. If gzip encoding is
>   accepted and if the requested file exists with an additional .gz
>   suffix, use the compressed file instead and deliver it with content
>   encoding gzip.
>
> Marcus
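So, for gzip-static to have anything to serve, the compressed twin has to be created beforehand. A minimal demonstration in a scratch directory; the real files would live under httpd's chroot (e.g. /var/www/...):

```shell
# Create a file and its pre-compressed twin. With gzip-static enabled,
# httpd serves index.html.gz with "Content-Encoding: gzip" whenever the
# client sends Accept-Encoding: gzip and the .gz file exists.
cd "$(mktemp -d)"
printf '<html>hello</html>\n' > index.html
gzip -c index.html > index.html.gz   # keep the original next to the .gz
ls -l index.html index.html.gz
```

Regenerating the .gz files after every content change (e.g. from the same script that deploys the site) keeps the compressed copies in sync with the originals.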
Using gzip-static with httpd location
Hi,

I just tried applying gzip compression on a simple test web site using
httpd and the gzip-static option, on OpenBSD 7.2/amd64.

As I understood the man page, gzip-static is supposed to be used inside the
server block, like listen, errdocs or tls. But doing so does not seem to
enable gzip compression for files defined in a location block.

What fails:

  server "default" {
      listen on 127.0.0.1 port 80
      gzip-static
      block drop
      location "/.well-known/acme-challenge/*" {
          root "/acme"
          request strip 2
          pass
      }
      location "/www/*" {
          root "/test"
          request strip 1
          pass
      }
  }

What works:

  server "default" {
      listen on 127.0.0.1 port 80
      block drop
      location "/.well-known/acme-challenge/*" {
          root "/acme"
          request strip 2
          pass
      }
      location "/www/*" {
          gzip-static
          root "/test"
          request strip 1
          pass
      }
  }

As you may see, what works is using gzip-static inside a location block and
not outside. I've tested it using Firefox, curl and https://gtmetrix.com.
All confirm gzip-static must be inside the location block to provide
compressed resources.

Here's an example of the curl command I used:

  # curl -I --compressed http://localhost:80/www/index.html
  HTTP/1.1 200 OK
  Connection: keep-alive
  Content-Encoding: gzip
  Content-Length: 1083
  Content-Type: text/html
  Date: Fri, 10 Mar 2023 01:27:53 GMT
  Last-Modified: Fri, 10 Mar 2023 00:53:26 GMT
  Server: OpenBSD httpd

Is this an expected behaviour?

Regards,
Joel C.
devel/bamf bamfdaemon segmentation fault
Hi,

On OpenBSD 7.2-current (snapshot from Jan 8th) with bamf-0.5.4p0 installed,
bamfdaemon dies right away after launching. I start it from an xterm in an
XFCE session. My user is in class staff and in groups wheel and operator.
Everything works perfectly, except bamfdaemon.

  # ktrace /usr/local/libexec/bamf/bamfdaemon
  DEBUG (8758): glibtop_open_p ()
  LibGTop-Server(c=8758): [WARNING] kvm_nlist (i4bisppp_softc)
  LibGTop-Server(c=8758): [WARNING] pid 8758 received eof.
  Segmentation fault

I don't understand what it means, but the last few lines of kdump are:

  25878 bamfdaemon CALL  futex(0x79089d9a570,0x82,2147483647,0,0)
  25878 bamfdaemon RET   futex 0
  25878 bamfdaemon CALL  kbind(0x7f7bfc98,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7bfcb8,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7bfc38,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7bfc68,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7bfbe8,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7bfbe8,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7c01e8,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7c0178,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7c0178,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon CALL  kbind(0x7f7c0198,24,0xe0d755d6d621d909)
  25878 bamfdaemon RET   kbind 0
  25878 bamfdaemon PSIG  SIGSEGV SIG_DFL code=SEGV_MAPERR addr=0x0 trapno=6
  25878 bamfdaemon STRU  struct pollfd { fd=14, events=0x1, revents=0<> }
  25878 bamfdaemon STRU  struct pollfd [2] { fd=9, events=0x1, revents=0<> } { fd=17, events=0x1, revents=0<> }
  25878 bamfdaemon STRU  struct pollfd [2] { fd=11, events=0x1, revents=0<> } { fd=12, events=0x1, revents=0<> }

I can provide the 4MB full ktrace.out if needed.

Kernel is:

  OpenBSD 7.2-current (GENERIC.MP) #925: Sun Jan 8 09:12:38 MST 2023

Any idea what happens / how to solve this?

Thanks,
Joel C.
Re: Xorg freeze with ThinkPad A485 / ATI Radeon Vega
On 03/12/2022 at 21:51, Adriano Barbosa wrote:
> On Sat, Dec 03, 2022 at 06:01:38PM +0100, Joel Carnat wrote:
>> On 02/12/2022 at 10:21, Bodie wrote:
>>> On Fri Dec 2, 2022 at 12:14 AM CET, Joel Carnat wrote:
>>>> Hi,
>>>> About once a week, Xorg freezes while I'm using my ThinkPad A485 with
>>>> OpenBSD 7.2. I've tried switching the window manager (XFCE, Gnome,
>>>> WindowMaker, cwm) but it still happens. I only have a few apps opened
>>>> (Firefox ESR, a terminal, a file manager).
>>>>
>>>> Tonight, I had just rebooted the system (because of syspatch and
>>>> fw_update) and uptime was 1h30. The other times, I could
>>>> suspend/resume a few times until the freeze happened.
>>>>
>>>> Xorg is frozen in the sense that the cursor can move but can't
>>>> interact with windows. Same for the keyboard: no shortcuts work. I
>>>> can't even switch to the console with Ctrl+Alt+F1. I'm stuck with a
>>>> screenshot-like image of what I was doing.
>>>>
>>>> Note that sshd does work. I can remotely connect to the laptop. If I
>>>> restart xenodm/gdm, it just fails. So I have to `reboot`.
>>>>
>>>> dmesg only outputs:
>>>>
>>>>   [drm] *ERROR* ring sdma0 timeout, signaled seq=19512, emitted seq=19512
>>>>   [drm] *ERROR* Process information: process pid 0 thread pid
>>>>
>>>> I've attached the full dmesg and Xorg logs. Is there something I can
>>>> do to debug further?
>>>
>>> What is happening in other parts? Like top(1), systat(1), vmstat(8)?
>>> Any modifications in /etc/sysctl.conf for more files, connections, ...?
>>> What login class are you in?
>>
>> Oh, it seems nextcloud (Nextcloud Client), taking a bunch of resources
>> (about 80% CPU in top), has to be killed from ssh. Then Xorg starts
>> responding again... What is weird is that issuing various commands over
>> SSH does not suffer from this freeze / slowness. Only the X environment.
>
> Did you experience this on older versions? Last update I added a build
> dependency (x11/gnome/libcloudproviders) as a significant difference
> besides the upgrade itself, but I have no reason to think that is the
> cause. Could you test without this dependency?

The version I'm using is 3.6.1p0.

Cheers,
Joel C.
Re: Xorg freeze with ThinkPad A485 / ATI Radeon Vega
On 02/12/2022 at 10:21, Bodie wrote: On Fri Dec 2, 2022 at 12:14 AM CET, Joel Carnat wrote: Hi, About once a week, Xorg freezes while I'm using my ThinkPad A485 with OpenBSD 7.2. I've tried switching the window manager (XFCE, Gnome, WindowMaker, cwm) but it still happens. I only have a few apps open (Firefox ESR, a terminal, a file manager). Tonight, I had just rebooted the system (because of syspatch and fw_update) and uptime was 1h30. The other times, I could suspend/resume a few times until the freeze happened. Xorg is frozen in the sense that the cursor can still move but can't interact with windows. Same for the keyboard: no shortcuts work. I can't even switch to the console with Ctrl+Alt+F1. I'm stuck with a screenshot-like view of what I was doing. Note that sshd does work; I can remotely connect to the laptop. If I restart xenodm/gdm, it just fails, so I have to `reboot`. dmesg only outputs: [drm] *ERROR* ring sdma0 timeout, signaled seq=19512, emitted seq=19512 [drm] *ERROR* Process information: process pid 0 thread pid I've attached the full dmesg and Xorg logs. Is there something I can do to debug further? What is happening in other parts? Like top(1), systat(1), vmstat(8)? Any modifications in /etc/sysctl.conf for more files, connections,..? What login class are you in? Oh, it seems nextcloud (Nextcloud Client), taking a bunch of resources (about 80% CPU in top), has to be killed from ssh. Then Xorg starts responding again... What is weird is that issuing various commands over SSH does not suffer from these freeze/slowdown effects. Only the X environment does.
Xorg freeze with ThinkPad A485 / ATI Radeon Vega
Hi, About once a week, Xorg freezes while I'm using my ThinkPad A485 with OpenBSD 7.2. I've tried switching the window manager (XFCE, Gnome, WindowMaker, cwm) but it still happens. I only have a few apps open (Firefox ESR, a terminal, a file manager). Tonight, I had just rebooted the system (because of syspatch and fw_update) and uptime was 1h30. The other times, I could suspend/resume a few times until the freeze happened. Xorg is frozen in the sense that the cursor can still move but can't interact with windows. Same for the keyboard: no shortcuts work. I can't even switch to the console with Ctrl+Alt+F1. I'm stuck with a screenshot-like view of what I was doing. Note that sshd does work; I can remotely connect to the laptop. If I restart xenodm/gdm, it just fails, so I have to `reboot`. dmesg only outputs: [drm] *ERROR* ring sdma0 timeout, signaled seq=19512, emitted seq=19512 [drm] *ERROR* Process information: process pid 0 thread pid I've attached the full dmesg and Xorg logs. Is there something I can do to debug further? Thanks, Joel C. OpenBSD 7.2 (GENERIC.MP) #0: Wed Oct 26 12:01:47 MDT 2022 r...@syspatch-72-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP real mem = 24544055296 (23407MB) avail mem = 23782797312 (22681MB) random: good seed from bootblocks mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 
3.1 @ 0x6a572000 (62 entries) bios0: vendor LENOVO version "R0WET67W (1.35 )" date 03/22/2022 bios0: LENOVO 20MVS14301 acpi0 at bios0: ACPI 5.0 acpi0: sleep states S0 S3 S4 S5 acpi0: tables DSDT FACP SSDT SSDT CRAT CDIT SSDT TPM2 UEFI MSDM SLIC BATB HPET APIC MCFG SBST WSMT VFCT IVRS FPDT SSDT SSDT SSDT BGRT UEFI SSDT acpi0: wakeup devices GPP0(S3) GPP1(S3) GPP2(S3) GPP3(S4) GPP4(S3) L850(S3) GPP5(S4) GPP6(S3) GP17(S3) XHC0(S3) XHC1(S3) GP18(S3) LID_(S3) SLPB(S3) acpitimer0 at acpi0: 3579545 Hz, 32 bits acpihpet0 at acpi0: 14318180 Hz acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: AMD Ryzen 5 PRO 2500U w/ Radeon Vega Mobile Gfx, 1996.32 MHz, 17-11-00 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA,IBPB,XSAVEOPT,XSAVEC,XGETBV1,XSAVES cpu0: 32KB 64b/line 8-way D-cache, 64KB 64b/line 4-way I-cache, 512KB 64b/line 8-way L2 cache, 4MB 64b/line 16-way L3 cache cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges cpu0: apic clock running at 24MHz cpu0: mwait min=64, max=64, C-substates=1.1, IBE cpu1 at mainbus0: apid 1 (application processor) cpu1: AMD Ryzen 5 PRO 2500U w/ Radeon Vega Mobile Gfx, 1996.23 MHz, 17-11-00 cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA,IBPB,XSAVEOPT,XSAVEC,XGETBV1,XSAVES cpu1: 
32KB 64b/line 8-way D-cache, 64KB 64b/line 4-way I-cache, 512KB 64b/line 8-way L2 cache, 4MB 64b/line 16-way L3 cache cpu1: smt 1, core 0, package 0 cpu2 at mainbus0: apid 2 (application processor) cpu2: AMD Ryzen 5 PRO 2500U w/ Radeon Vega Mobile Gfx, 1996.22 MHz, 17-11-00 cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA,IBPB,XSAVEOPT,XSAVEC,XGETBV1,XSAVES cpu2: 32KB 64b/line 8-way D-cache, 64KB 64b/line 4-way I-cache, 512KB 64b/line 8-way L2 cache, 4MB 64b/line 16-way L3 cache cpu2: smt 0, core 1, package 0 cpu3 at mainbus0: apid 3 (application processor) cpu3: AMD Ryzen 5 PRO 2500U w/ Radeon Vega Mobile Gfx, 1996.23 MHz, 17-11-00 cpu3: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA,IBPB,XSAVEOPT,XSAVEC,XGETBV1,XSAVES cpu3: 32KB 64b/line 8-way D-cache, 64KB 64b/line 4-way I-cache, 512KB 64b/line 8-way L2 cache, 4MB 64b/line 16-way L3 cache cpu3: smt 1, core 1, package 0 cpu4 at
Re: rrdtool fails to install on 7.2 due to freetype.30.2 not found for cairo
Did you install x* packages? > Le 24 oct. 2022 à 05:12, Jim Anderson a écrit : > > Installed 7.2 and rrdtool will not install due to an error > installing freetype for cairo. > > # pkg_add rrdtool > quirks-6.42 signed on 2022-10-23T09:59:17Z > rrdtool-1.7.2p1:pcre-8.44: ok > rrdtool-1.7.2p1:libffi-3.4.2: ok > rrdtool-1.7.2p1:sqlite3-3.39.3: ok > rrdtool-1.7.2p1:xz-5.2.5p2: ok > rrdtool-1.7.2p1:bzip2-1.0.8p0: ok > rrdtool-1.7.2p1:libiconv-1.17: ok > rrdtool-1.7.2p1:gettext-runtime-0.21p1: ok > rrdtool-1.7.2p1:python-3.9.15p0: ok > rrdtool-1.7.2p1:glib2-2.72.4: ok > rrdtool-1.7.2p1:png-1.6.37p0: ok > rrdtool-1.7.2p1:lzo2-2.10p2: ok > Can't install cairo-1.17.6 because of libraries > |library freetype.30.2 not found > | /usr/X11R6/lib/libfreetype.so.30.0 (system): minor is too small > | /usr/X11R6/lib/libfreetype.so.30.1 (system): minor is too small > Direct dependencies for cairo-1.17.6 resolve to png-1.6.37p0 > glib2-2.72.4 lzo2-2.10p2 > > Full dependency tree is png-1.6.37p0 xz-5.2.5p2 gettext-runtime-0.21p1 > pcre-8.44 bzip2-1.0.8p0 libiconv-1.17 sqlite3-3.39.3 glib2-2.72.4 > python-3.9.15p0 lzo2-2.10p2 libffi-3.4.2 > > rrdtool-1.7.2p1:graphite2-1.3.14: ok > Can't install harfbuzz-5.2.0: can't resolve cairo-1.17.6 > rrdtool-1.7.2p1:fribidi-1.0.12: ok > Can't install pango-1.50.10: can't resolve harfbuzz-5.2.0 > rrdtool-1.7.2p1:rrdupdate-1.7.2p1: ok > rrdtool-1.7.2p1:libxml-2.10.3: ok > Can't install rrdtool-1.7.2p1: can't resolve pango-1.50.10 > Running tags: ok > New and changed readme(s): >/usr/local/share/doc/pkg-readmes/glib2 > Couldn't install cairo-1.17.6 harfbuzz-5.2.0 pango-1.50.10 > rrdtool-1.7.2p1 > > # pkg_info > base64-1.5p0 > curl-7.85.0 > intel-firmware-20220809v0 > kcgi-0.13.0 > nghttp2-1.49.0 > quirks-6.42 > rsync-3.2.5pl0 >
Question on using !!prog with syslogd(8)
Hello, I want to take actions when specific logs appear but still want to log them in a file (for further inspection). But "!!prog" does not work as I would expect. I've tested on 7.1 and 7.2/snapshots. When using '!!', only the first action is applied. I configured syslog.conf this way: !!sshd *.* /var/log/sshd *.* |/home/jca/Téléchargements/sshd_alert !* In this configuration, only logging to the file works. If I configure : !!sshd #*.*/var/log/sshd *.* |/home/jca/Téléchargements/sshd_alert !* then the pipe works and the script runs ok. The manpage says: "!!prog causes the subsequent block to abort evaluation when a message matches, ensuring that only a single set of actions is taken." As it says "set of actions", I expected my configuration to work. Is it possible to take several actions inside a !!prog block? Thank you, Joel C.
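To make the intent concrete, here is the layout I am trying to get working (the alert-script path is just an example):

```
# /etc/syslog.conf sketch -- intended behaviour: log sshd messages
# to a file AND pipe them to an alert script. In practice, with
# "!!sshd" only the first action line below ever fires.
!!sshd
*.*     /var/log/sshd
*.*     |/usr/local/bin/sshd_alert
!*
```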
Issue with FDE and bootblocks on 7.2 snapshots ?
Hi, I've been trying to install my T460s from scratch (using FDE with UEFI boot and a GPT disk configuration) using the 2022-09-13 snapshot. At the end of the installation process, I keep getting "Failed to install bootblocks". I tried several times. I also tried a non-FDE installation (using UEFI GPT) and it also failed. I've just run the FDE installation using install71.img and everything went ok. Then I "sysupgrade -s" and everything went ok too. Just saying in case it is a bug in install72.img. Regards, Joel
Re: Trouble using keepassxc-proxy with iridium/chromium
On Sun, 22 May 2022 19:27:19 +0200 Antoine Jacoutot wrote: > On Sun, May 22, 2022 at 07:17:49PM +0200, Joel Carnat wrote: > > Hello, > > > > From a brand new 7.1/amd64 installation, I'm trying to use > > keepassxc-proxy with Iridium. As I did for Firefox-ESR, I added > > "/usr/local/bin/keepassxc-proxy rx" to /etc/iridium/unveil.main. > > But it never connects to the database. > > I think you also need: > /usr/local/bin r > Yep, it solves both Iridium & Chromium issues. Thanks a lot. > > > > Using Firefox-ESR, it works ok. > > Using "iridium --disable-unveil", it works ok. > > Using Chromium, it doesn't work either. When disabling unveil, it > > works. > > > > I've tried looking at ktrace/kdump but I don't really know what to > > look for. When searching for "NAMI.*keepassxc", I could only find > > calls that seem to end properly (RET 0). > > > > Anyone knows what to add to unveil.main to have keepassxc-proxy? > > What should I be looking for in kdump to identify what fails? > > > > ktrace.out is a bit more 100MB but I can make it available online > > if it helps. > > > > Thanks, > > Joel C. > > >
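For the archives, the resulting unveil additions end up looking like this (a sketch; the same idea applies to /etc/chromium/unveil.main):

```
# /etc/iridium/unveil.main additions (sketch)
/usr/local/bin/keepassxc-proxy rx
/usr/local/bin r
```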
Trouble using keepassxc-proxy with iridium/chromium
Hello, >From a brand new 7.1/amd64 installation, I'm trying to use keepassxc-proxy with Iridium. As I did for Firefox-ESR, I added "/usr/local/bin/keepassxc-proxy rx" to /etc/iridium/unveil.main. But it never connects to the database. Using Firefox-ESR, it works ok. Using "iridium --disable-unveil", it works ok. Using Chromium, it doesn't work either. When disabling unveil, it works. I've tried looking at ktrace/kdump but I don't really know what to look for. When searching for "NAMI.*keepassxc", I could only find calls that seem to end properly (RET 0). Anyone knows what to add to unveil.main to have keepassxc-proxy? What should I be looking for in kdump to identify what fails? ktrace.out is a bit more 100MB but I can make it available online if it helps. Thanks, Joel C.
Strange relayd(8) logs meaning?
Hello, I have relayd(8) in front of nginx(8) to serve a local Nextcloud instance. From time to time, the Nextcloud client fails saying "Host not found", which makes no sense: the whole workstation still accesses the network and can resolve anything. relayd(8) listens on the em1 IP and uses the http protocol. nginx(8) listens on localhost. I have noted that relayd(8) writes the following entries in syslog when the Nextcloud client fails: May 18 16:45:54 server relayd[39533]: relay https_lan_relay, session 2816 (1 active), nextcloud, 192.168.0.48 -> :9081, done, PROPFIND -> 127.0.0.1:9081; PROPFIND; PROPFIND; PROPFIND; PROPFIND; PROPFIND; PROPFIND; REPORT; May 18 16:46:42 server relayd[97191]: relay https_lan_relay, session 3091 (1 active), nextcloud, 10.15.5.76 -> :9081, done, GET -> 127.0.0.1:9081; GET; PROPFIND; GET; GET; PROPFIND; GET; Do such logs (the HTTP commands pipelined with ;) indicate something weird happening in relayd, or are they not a clue at all and I must dig somewhere else? Thank you, Joel C.
Values from wsconsctl(8) and xbacklight(1) may differ
Hello, I have just noticed that, depending on how I change the display brightness on my ThinkPad, reported values may differ between wsconsctl(8) and xbacklight(1). Here's what I have observed: # doas wsconsctl display.brightness ; xbacklight -get display.brightness=25.11% 25.00 # xbacklight -set 50 # doas wsconsctl display.brightness ; xbacklight -get display.brightness=50.00% 50.00 # doas wsconsctl display.brightness=75 display.brightness -> 75.11% # doas wsconsctl display.brightness ; xbacklight -get display.brightness=75.11% 50.00 I also tried using the (ThinkPad T460s) keyboard brightness buttons and checked which software shows the proper value. Starting from the previous step and hitting the button until I reached the maximum brightness I could get, I end up with: # doas wsconsctl display.brightness ; xbacklight -get display.brightness=100.00% 50.00 Then, with as little light as I could get: # doas wsconsctl display.brightness ; xbacklight -get display.brightness=2.69% 50.00 In case it matters, this is "7.0 GENERIC.MP#335 amd64" with "Intel HD Graphics 520" attached as inteldrm0 using "modeset(0): glamor X acceleration enabled on Mesa Intel(R) HD Graphics 520 (SKL GT2)" Regards, Joel C.
Re: Either intel nor glamor drivers do not work for Samsung NC215S
Hi, This GMA 3150 may be the same one I had on my Dell Inspiron Mini 10. It's been ages since it was last booted, but I recall this card being really specific; that is, not compatible with the standard intel driver. Looking at my archives (https://www.tumfatig.net/2010/openbsd-on-dell-inspiron-10/) it seems I used the vesa driver to have X running. You may give it a try. Regards, Joel C. > On 3 Feb 2022 at 22:01, Sven Wolf wrote: > > > >> On 2/3/22 21:33, Sergey Andrianov wrote: >> Yes, it did not help. Just noticed that in README for xf86-video-intel >> it's written: >>> PineView-M (Atom N400 series) >> but mine is N570. > The N455 and N570 should have the same 3150 GFX, see > https://news.samsung.com/global/samsung-electronics-launches-eco-friendly-solar-powered-rechargeable-netbook-nc-215s > > https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=49491,55637 > >
Re: Edimax EW-7612UAN V2 appears as "generic" Realtek WLAN Adapter
On 25/01/2022 at 01:32, Jonathan Gray wrote: On Tue, Jan 25, 2022 at 01:08:16AM +0100, Joel Carnat wrote: Hello, Because my Internet box has just died, I plugged a spare Edimax EW-7612UAN V2 into my OpenBSD 7.0 router and connected it to my iPhone's WiFi connection sharing. I've tested it for a few hours with video-on-demand, email, etc. and it works without issues. The thing is, it seems to be recognized as a generic Realtek device: # dmesg (...) urtwn0 at uhub0 port 4 configuration 1 interface 0 "Realtek 802.11n WLAN Adapter" rev 2.00/2.00 addr 2 urtwn0: MAC/BB RTL8192CU, RF 6052 2T2R, address 08:be:ac:1d:40:96 # usbdevs -v (...) addr 02: 7392:7822 Realtek, 802.11n WLAN Adapter high speed, power 500 mA, config 1, rev 2.00, iSerial 00e04c01 driver: urtwn0 It is also not referenced in the man page as a supported device. In src/sys/dev/usb/usbdevs, on line 1722, I can find product EDIMAX RTL8192CU 0x7822 RTL8192CU but I have no real clue about what to modify to propose a diff. Sorry. for usb, strings from the device are preferred with usbdevs as fallback see usbd_cache_devinfo() in sys/dev/usb/usb_subr.c Hum, ok. Does this mean that this particular device does not announce itself with the proper brand/model but with a generic identifier?
Edimax EW-7612UAN V2 appears as "generic" Realtek WLAN Adapter
Hello, Because my Internet box has just died, I plugged a spare Edimax EW-7612UAN V2 into my OpenBSD 7.0 router and connected it to my iPhone's WiFi connection sharing. I've tested it for a few hours with video-on-demand, email, etc. and it works without issues. The thing is, it seems to be recognized as a generic Realtek device: # dmesg (...) urtwn0 at uhub0 port 4 configuration 1 interface 0 "Realtek 802.11n WLAN Adapter" rev 2.00/2.00 addr 2 urtwn0: MAC/BB RTL8192CU, RF 6052 2T2R, address 08:be:ac:1d:40:96 # usbdevs -v (...) addr 02: 7392:7822 Realtek, 802.11n WLAN Adapter high speed, power 500 mA, config 1, rev 2.00, iSerial 00e04c01 driver: urtwn0 It is also not referenced in the man page as a supported device. In src/sys/dev/usb/usbdevs, on line 1722, I can find product EDIMAX RTL8192CU 0x7822 RTL8192CU but I have no real clue about what to modify to propose a diff. Sorry. Regards, Joel C.
Re: Using Connection:keep-alive with relayd
Hi, Unfortunately, I already tried using those header settings during my testing. And those don't solve my problem. What 'match header set "Keep-Alive" value "$TIMEOUT"' does is force relayd(8) to send a Keep-Alive header to httpd(8). But httpd(8) is already replying with a "Connection: keep-alive" header. And that does not prevent relayd(8) to reply to client with two Connection headers, 'Connection: keep-alive' and 'Connection: close\r\n'. Which is still what makes the client close the connection. I've attached a wireshark capture of the whole session. Le Tue, Nov 16, 2021 at 06:25:52AM -0800, Paul Pace a écrit : > I meant to reply earlier, since no one else did but I am brand-new to > figuring out how to use relays. > > I think what you are looking for is in the relayd.conf(5)[1] examples > section. Here is one example: > > The following configuration would add a relay to forward secure HTTPS > connections to a pool of HTTP webservers using the loadbalance mode (TLS > acceleration and layer 7 load balancing). The HTTP protocol definition will > add two HTTP headers containing address information of the client and the > server, set the “Keep-Alive” header value to the configured session timeout, > and include the “sessid” variable in the hash to calculate the target host: > > http protocol "https" { > match header set "X-Forwarded-For" \ > value "$REMOTE_ADDR" > match header set "X-Forwarded-By" \ > value "$SERVER_ADDR:$SERVER_PORT" > match header set "Keep-Alive" value "$TIMEOUT" > > match query hash "sessid" > > pass > block path "/cgi-bin/index.cgi" value "*command=*" > > tls { no tlsv1.0, ciphers "HIGH" } > } > > relay "tlsaccel" { > listen on www.example.com port 443 tls > protocol "https" > forward to port 8080 mode loadbalance check tcp > } > > > And here is an excerpt from Relayd and Httpd Mastery: > > > Set > > The set option sets an item’s value. 
Use this to change the value of a > > HTTP > > header, a query string, a URL, or anything else relayd can filter on. If > > the thing > > doesn’t exist, it gets added. The set option is most commonly used with > > the > > match operation. > > > > Here I change the Connection header. This header controls if the TCP/IP > > connection should stay open once the request is granted, or if it should > > terminate. > > Many applications set this to keep-alive even if they don’t need it. > > Here, we tell > > relayd to rewrite the incoming client request and to make this header > > always say > > close. > >match request header set "Connection" value "close" > > And another: > > >http protocol https { > > match request header append "X-Forwarded-For" value "$REMOTE_ADDR" > > match request header append "X-Forwarded-By" \ > >value "$SERVER_ADDR:$SERVER_PORT" > > match request header set "Connection" value "close" > > # Various TCP performance options > > tcp { nodelay, sack, socket buffer 65536, backlog 128 } > >} > > No matter what, we append our relay host’s information to the > > X-Forwarded- > > For and X-Forwarded-By headers. If the application doesn’t need these > > headers, > > their presence won’t hurt anything. > > > > The sample relayd.conf always changes the Connection header to close. > > This > > tells the server to answer a single HTTP request per TCP connection. The > > alternative, keep-alive, tells the server to answer several HTTP > > requests in a > > single TCP connection. Putting everything in a single TCP connection > > decreases > > the networking overhead, but puts all the load on a single back-end > > server. > > Closing the connection with every request increases the networking > > overhead but > > spreads it between all of the servers in the farm. Test your application > > with and > > without close. 
> > Note the book covers OpenBSD 6.1 and some things have changed, but at least > for myself I have learned basically how to use the tool, and with the man > page I am able to figure out configurations for myself better than I ever > did with nginx or Ubuntu. > > [1] https://man.openbsd.org/OpenBSD-7.0/relayd.conf#EXAMPLES > > I hope this helps. > Paul > > On 2021-11-12 16:37, Jo
Using Connection:keep-alive with relayd
Hi, I have noticed that relayd(8) sends a "Connection: close" HTTP header even if the backend server has sent a "Connection: keep-alive" HTTP header. Here's my configuration: # cat /etc/httpd.conf server "default" { listen on * port 80 location * { root "/htdocs/hugo" } } # cat /etc/relayd.conf ext_addr="127.0.0.1" table { 127.0.0.1 } http protocol https { match request header append "X-Forwarded-For" value "$REMOTE_ADDR" match request header append "X-Forwarded-By" \ value "$SERVER_ADDR:$SERVER_PORT" tcp { sack, backlog 128 } } relay wwwtls { listen on $ext_addr port 81 protocol https forward to port http } If I used curl(1) to get resources from httpd(8), it uses only one HTTP connection: # curl -Ivs http://localhost:80/ http://localhost:80/css/all.min.css * Trying 127.0.0.1:80... * Connected to localhost (127.0.0.1) port 80 (#0) HEAD / HTTP/1.1 Host: localhost User-Agent: curl/7.79.0 Accept: */* * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK HTTP/1.1 200 OK < Connection: keep-alive Connection: keep-alive < Content-Length: 7729 Content-Length: 7729 < Content-Type: text/html Content-Type: text/html < Date: Sat, 13 Nov 2021 00:20:07 GMT Date: Sat, 13 Nov 2021 00:20:07 GMT < Last-Modified: Wed, 27 Oct 2021 07:27:51 GMT Last-Modified: Wed, 27 Oct 2021 07:27:51 GMT < Server: OpenBSD httpd Server: OpenBSD httpd < * Connection #0 to host localhost left intact * Found bundle for host localhost: 0xcdeb98aae80 [serially] * Can not multiplex, even if we wanted to! * Re-using existing connection! 
(#0) with host localhost * Connected to localhost (127.0.0.1) port 80 (#0) HEAD /css/all.min.css HTTP/1.1 Host: localhost User-Agent: curl/7.79.0 Accept: */* * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK HTTP/1.1 200 OK < Connection: keep-alive Connection: keep-alive < Content-Length: 59344 Content-Length: 59344 < Content-Type: text/css Content-Type: text/css < Date: Sat, 13 Nov 2021 00:20:07 GMT Date: Sat, 13 Nov 2021 00:20:07 GMT < Last-Modified: Wed, 24 Mar 2021 22:34:18 GMT Last-Modified: Wed, 24 Mar 2021 22:34:18 GMT < Server: OpenBSD httpd Server: OpenBSD httpd < * Connection #0 to host localhost left intact But if I use curl(1) to get the same resources via relayd(8), the connection is closed for each resource: # curl -Ivs http://localhost:81/ http://localhost:81/css/all.min.css * Trying 127.0.0.1:81... * Connected to localhost (127.0.0.1) port 81 (#0) HEAD / HTTP/1.1 Host: localhost:81 User-Agent: curl/7.79.0 Accept: */* * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK HTTP/1.1 200 OK < Connection: keep-alive Connection: keep-alive < Connection: close Connection: close < Content-Length: 7729 Content-Length: 7729 < Content-Type: text/html Content-Type: text/html < Date: Sat, 13 Nov 2021 00:22:24 GMT Date: Sat, 13 Nov 2021 00:22:24 GMT < Last-Modified: Wed, 27 Oct 2021 07:27:51 GMT Last-Modified: Wed, 27 Oct 2021 07:27:51 GMT < Server: OpenBSD httpd Server: OpenBSD httpd < * Closing connection 0 * Hostname localhost was found in DNS cache * Trying 127.0.0.1:81... 
* Connected to localhost (127.0.0.1) port 81 (#1) HEAD /css/all.min.css HTTP/1.1 Host: localhost:81 User-Agent: curl/7.79.0 Accept: */* * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK HTTP/1.1 200 OK < Connection: keep-alive Connection: keep-alive < Connection: close Connection: close < Content-Length: 59344 Content-Length: 59344 < Content-Type: text/css Content-Type: text/css < Date: Sat, 13 Nov 2021 00:22:24 GMT Date: Sat, 13 Nov 2021 00:22:24 GMT < Last-Modified: Wed, 24 Mar 2021 22:34:18 GMT Last-Modified: Wed, 24 Mar 2021 22:34:18 GMT < Server: OpenBSD httpd Server: OpenBSD httpd < * Closing connection 1 If I use telnet(1) and send the HTTP commands "by hand", I can see that the HTTP connection is left up and that I can grab several resources; so the connection is not really closed by relayd(8). Is there a way to tell relayd(8) not to send that extra "Connection: close" header? Thank you, Joel C.
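One workaround idea I am considering (an untested sketch; I have not confirmed relayd behaves this way for this purpose) is to overwrite the response header in the relay protocol so the client only ever sees one Connection value:

```
http protocol https {
        match request header append "X-Forwarded-For" value "$REMOTE_ADDR"
        match request header append "X-Forwarded-By" \
                value "$SERVER_ADDR:$SERVER_PORT"
        # sketch: overwrite the response header so the client only
        # sees a single Connection value (untested workaround idea;
        # relayd may still append its own "close")
        match response header set "Connection" value "keep-alive"
        tcp { sack, backlog 128 }
}
```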
Re: relayd and snmp agentx
On Sat, Nov 06, 2021 at 09:24:47AM +0100, Martijn van Duren wrote: > On Fri, 2021-11-05 at 15:59 +, Stuart Henderson wrote: > > On 2021-11-05, Joel Carnat wrote: > > > Hello, > > > > > > I read in relayd.conf(5) that there is an SNMP agentx feature. And > > > there is an OPENBSD-RELAYD-MIB.txt file in 7.0 /usr/share/snmp/mibs > > > directory. > > > > > > But in snmpd.conf(5), I couldn't find any reference for subagent or > > > agentx. Reading the source logs, I understood that agentx was removed > > > from snmpd(8) around Jun 30, 2020. > > > > btw, martijn@ is working on this, see "snmpd(8): New application layer - > > step towards agentx support" on tech@ which would benefit from test > > reports/feedback > > ++ > I do want to emphasise that that diff doesn't include the agentx bits. > But having more (any) test-reports will greatly help speed agentx > support in snmpd(8) up. > > > > > Is there a way to query relayd MIB on OpenBSD 7.0? > > > Either by using snmpd(8) or ports/net/net-snmpd. > > > > Worth a try via net-snmp, or build snmpd from an old checkout.. > > > I developed libagentx with net-snmpd, so that one should work just fine. > Setting "master agentx" in should work, since both daemons default to > /var/agentx/master (as specified by the RFC), but you might need to > tweak agentXPerms a little. > > I don't recommend using the old snmpd agentx code. There's quite a few > fixes since then and the reason I removed the code is because it allowed > anyone with access to crash the daemon by use after free. > Thanks for the details. So I configured netsnmpd to access agentx from relayd. And it seems to work. As for now, I'm using "Joel Knight's Net-SNMP and snmpd coexistence" configuration. I already did so when I wanted to get disk & I/O details via SNMP. So that's OK for me if that's the way to go ; until agentx support is back in snmpd(8). I'll subscribe to tech@ so that I'll get notified when snmp diffs pop up and will test and report things.
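For completeness, the net-snmp side of my setup boils down to a few lines in net-snmp's snmpd.conf (not the base snmpd(8) one). A sketch; the agentXPerms values and the _relayd user/group here are assumptions and may need tweaking for your setup:

```
# net-snmp snmpd.conf sketch: act as AgentX master so relayd can
# register its MIB over /var/agentx/master (the RFC default path)
master          agentx
agentXSocket    /var/agentx/master
# socket perms, dir perms, owner, group -- adjust as needed
agentXPerms     0660 0550 root _relayd
```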
relayd and snmp agentx
Hello, I read in relayd.conf(5) that there is an SNMP agentx feature. And there is an OPENBSD-RELAYD-MIB.txt file in the 7.0 /usr/share/snmp/mibs directory. But in snmpd.conf(5), I couldn't find any reference to subagent or agentx. Reading the source logs, I understood that agentx was removed from snmpd(8) around Jun 30, 2020. Is there a way to query the relayd MIB on OpenBSD 7.0? Either by using snmpd(8) or ports/net/net-snmpd. Thank you, Joel C.
Multiple SSID when operating in Host AP mode?
Hello, Is it possible (as of OpenBSD 7.0/amd64) to configure a bwfm (Broadcom BCM4356) device in hostap mode and publish several nwids from that single device? The idea would be to have several SSIDs with different configurations, as some IoT devices don't support networks "greater" than 11g, while computers and phones support up to 11n/11ac. Thank you, Joel C.
start_timeout not found on sysupgrade
Hi, I have just upgraded from 7.0-beta Sep 5 snapshot to Sep 7. During the process, I noticed the following error message: Welcome to the OpenBSD/amd64 7.0 installation program. /autoinstall[2697]: start_timeout: not found Performing non-interactive upgrade. The upgrade process went ok though. Regards, Joel
Run a command on "last day of month"
Hello, I would like to run a command on "the last day of each month". From what I understood reading the crontab(5) manpage, the simplest way would be setting day-of-month to "28-31". But this would mean running the command 4 times for months that have 31 days. Is there a simpler/better way to configure crontab(1) to run a command on "the last day of month" only ? Thank you, Joel C.
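The best I could come up with is to keep the 28-31 schedule but guard the command with a date check, so it only really runs when tomorrow is the 1st. A sketch (assuming date(1)'s -r seconds flag; the command path is a placeholder):

```
# crontab(5) sketch: fire on days 28-31 at 23:59, but only act
# when tomorrow is the 1st, i.e. today is the last day of the month
# (note: % must be escaped as \% inside crontab entries)
59 23 28-31 * * [ "$(date -r $(($(date +\%s) + 86400)) +\%d)" = "01" ] && /path/to/command
```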
Re: Using relayd as a reverse proxy for multiple local servers
Hi, In my testing, using "listen on * port https tls" doesn't work either. What I did is replace the "*" with the IP address I want relayd to listen on. And as my gateway has several interfaces, I created a relay section for each interface I wanted relayd to bind to. Regards, Joel > On 27 May 2021 at 11:03, Philip Kaludercic wrote: > listen on * port https tls
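Concretely, the layout I ended up with looks roughly like this (a sketch; the addresses, table and protocol names are examples):

```
# relayd.conf sketch: one relay block per interface address
lan_addr="192.168.0.1"
dmz_addr="10.0.0.1"

table <webhosts> { 127.0.0.1 }

relay "www_lan" {
        listen on $lan_addr port https tls
        protocol "https"
        forward to <webhosts> port 8080
}

relay "www_dmz" {
        listen on $dmz_addr port https tls
        protocol "https"
        forward to <webhosts> port 8080
}
```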
Bugs running 6.9-CURRENT on MacBook Pro Touchbar 2017
Hi, I went back to testing OpenBSD on my MacBookPro14,3. I just installed 6.9-CURRENT and here's a list of non-working stuff.
- keyboard and touchpad don't work; I have to use a USB keyboard/mouse. The internal keyboard does work in the boot loader but stops working after the kernel is loaded.
- Xorg works, but only in wsfb mode.
- when I install the amdgpu-firmware, the console and Xorg go black, and I have to force a shutdown using the power button.
- the wireless card can scan the SSIDs around me, but I can't connect to mine. It references the SSID in ifconfig but the status is "no network".
- sleep, using zzz, drops the system to the debugger.
If you have ideas, I'll be glad to test patches on it. Regards, Jo OpenBSD 6.9-current (RAMDISK_CD) #28: Fri May 21 13:27:21 MDT 2021 dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD real mem = 17055719424 (16265MB) avail mem = 16534794240 (15768MB) random: good seed from bootblocks mainbus0 at root bios0 at mainbus0: SMBIOS rev. 3.0 @ 0x7aedf000 (37 entries) bios0: vendor Apple Inc. version "429.100.7.0.0" date 02/26/2021 bios0: Apple Inc. 
MacBookPro14,3 acpi0 at bios0: ACPI 5.0 acpi0: tables DSDT FACP UEFI ECDT HPET APIC MCFG SBST SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT DMAR VFCT acpiec0 at acpi0 acpihpet0 at acpi0: 2399 Hz acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, 3792.96 MHz, 06-9e-09 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,OSXSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,MPX,RDSEED,ADX,SMAP,CLFLUSHOPT,PT,SRBDS_CTRL,MD_CLEAR,TSXFA,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN cpu0: 256KB 64b/line 8-way L2 cache cpu0: apic clock running at 24MHz cpu0: mwait min=64, max=64, C-substates=0.2.1.2.4.1.1.1, IBE cpu at mainbus0: not configured cpu at mainbus0: not configured cpu at mainbus0: not configured cpu at mainbus0: not configured cpu at mainbus0: not configured cpu at mainbus0: not configured cpu at mainbus0: not configured ioapic0 at mainbus0: apid 2 pa 0xfec0, version 20, 24 pins acpiprt0 at acpi0: bus 0 (PCI0) acpiprt1 at acpi0: bus 1 (PEG0) acpiprt2 at acpi0: bus 4 (PEG1) acpiprt3 at acpi0: bus 122 (PEG2) acpiprt4 at acpi0: bus 3 (RP01) acpiprt5 at acpi0: bus 2 (RP17) acpipci0 at acpi0 PCI0: 0x0004 0x0011 0x0001 acpicmos0 at acpi0 "APP0001" at acpi0 not configured "APP0003" at acpi0 not configured "ACPI0001" at acpi0 not configured "ACPI0002" at acpi0 not configured "APP000B" at acpi0 not configured "APP000D" at acpi0 not configured "BCM2E7C" at acpi0 not configured "APP" at acpi0 not configured "ACPI0003" at acpi0 not configured "PNP0C0D" at acpi0 not configured "PNP0C0C" at acpi0 not configured "APP0002" at acpi0 not configured "PNP0C0E" at acpi0 
not configured acpicpu at acpi0 not configured cpu0: using VERW MDS workaround pci0 at mainbus0 bus 0 pchb0 at pci0 dev 0 function 0 "Intel Xeon E3-1200 v6/7 Host" rev 0x05 ppb0 at pci0 dev 1 function 0 "Intel Core 6G PCIE" rev 0x05: msi pci1 at ppb0 bus 1 "ATI Polaris 11" rev 0xc7 at pci1 dev 0 function 0 not configured "ATI Radeon Pro Audio" rev 0x00 at pci1 dev 0 function 1 not configured ppb1 at pci0 dev 1 function 1 "Intel Core 6G PCIE" rev 0x05: msi pci2 at ppb1 bus 4 ppb2 at pci2 dev 0 function 0 vendor "Intel", unknown product 0x1578 rev 0x02 pci3 at ppb2 bus 5 ppb3 at pci3 dev 0 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci4 at ppb3 bus 6 "Intel JHL6540 Thunderbolt" rev 0x02 at pci4 dev 0 function 0 not configured ppb4 at pci3 dev 1 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci5 at ppb4 bus 8 ppb5 at pci3 dev 2 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci6 at ppb5 bus 7 xhci0 at pci6 dev 0 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi, xHCI 1.10 usb0 at xhci0: USB revision 3.0 uhub0 at usb0 configuration 1 interface 0 "Intel xHCI root hub" rev 3.00/1.00 addr 1 ppb6 at pci3 dev 4 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci7 at ppb6 bus 65 ppb7 at pci0 dev 1 function 2 "Intel Core 6G PCIE" rev 0x05: msi pci8 at ppb7 bus 122 ppb8 at pci8 dev 0 function 0 vendor "Intel", unknown product 0x1578 rev 0x02 pci9 at ppb8 bus 123 ppb9 at pci9 dev 0 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci10 at ppb9 bus 124 "Intel JHL6540 Thunderbolt" rev 0x02 at pci10 dev 0 function 0 not configured ppb10 at pci9 dev 1 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci11 at ppb10 bus 126 ppb11 at pci9 dev 2 function 0 "Intel JHL6540 Thunderbolt" rev 0x02: msi pci12 at ppb11 bus 125 xhci1 at pci12 dev 0 function 0 "Intel JHL6540 Thunderbolt" rev 0x02:
Re: periodic network access failure when accessing nextcloud via relayd
On Thu, Apr 01, 2021 at 01:47:11PM -0600, Ashlen wrote: > On 21/03/31 23:50, Joel Carnat wrote: > > Hello, > > > > I have Nextcloud 21 running with php-7.4, httpd(8) and relayd(8). > > On my laptop, a script regularly runs nextcloudcmd to synchronize the files > > with the nextcloud instance. And quite often, nextcloudcmd returns such > > error: > > 03-31 23:28:56:089 [ info nextcloud.sync.networkjob.lscol ]:LSCOL of > > > > QUrl("https://nextcloud.tumfatig.net/remote.php/dav/files/user85419/Uploads";) > > FINISHED > > WITH STATUS "UnknownNetworkError Network access is disabled." > > I did some reading on the issue.[1][2][3] It appears to affect some > users on other platforms if the 'Use system proxy' setting on the desktop > client is enabled (though some reported that the presence/absence of the > option didn't seem to affect anything). > Thanks. I found those links and tried to set parameters on nextcloudcmd. But I couldn't find how to say "don't try to use a proxy". So I'm not sure if it tries to do something with that setting or not. I also tried passing the credentials via .netrc or via parameters. But that didn't change anything. > As an experiment, you could temporarily disable keep-alive in relayd.conf(5). > It probably won't fix anything (in which case you can revert it), but it's > worth trying imo. I have tried it and it doesn't change the erroneous behaviour. I also tried to set a tcp protocol forward rule (based on the SSH example from the manpage) but the failures still happen. Finally I tried using a hostname in the table definition (rather than using 127.0.0.1) but had no luck. I wrote a script that would run the GET and PROPFIND commands found in the logs, using curl. And those never fail. So it would look like nextcloudcmd has something buggy. But using nextcloudcmd to connect directly to httpd (via ssh tunnel) also makes the failure disappear.
The only work-around I can see now is to modify my crontab to ensure consecutive syncs don't happen too frequently... Regards, Jo
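One way to make the crontab work-around robust is to run the sync through a small lock wrapper, so a new run is simply skipped while the previous one is still going. This is only a sketch: the lock directory location and the commented-out nextcloudcmd invocation are assumptions, not the actual setup.

```shell
#!/bin/sh
# run_locked: run the given command only if no previous instance
# still holds the lock directory. mkdir is atomic, so two cron jobs
# cannot both acquire the lock.
run_locked() {
	lockdir="${TMPDIR:-/tmp}/nextcloud-sync.lock"	# assumed location
	if mkdir "$lockdir" 2>/dev/null; then
		"$@"; status=$?
		rmdir "$lockdir"
		return $status
	else
		echo "previous sync still running, skipping" >&2
		return 1
	fi
}

# A crontab entry could then call something like (hypothetical arguments):
# run_locked nextcloudcmd --non-interactive "$HOME/Nextcloud" https://nextcloud.example.net
```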
periodic network access failure when accessing nextcloud via relayd
Hello, I have Nextcloud 21 running with php-7.4, httpd(8) and relayd(8). On my laptop, a script regularly runs nextcloudcmd to synchronize the files with the nextcloud instance. And quite often, nextcloudcmd returns an error such as: 03-31 23:28:56:089 [ info nextcloud.sync.networkjob.lscol ]:LSCOL of QUrl("https://nextcloud.tumfatig.net/remote.php/dav/files/user85419/Uploads";) FINISHED WITH STATUS "UnknownNetworkError Network access is disabled." Both run OpenBSD 6.8/amd64. It seems that it only happens when I access nextcloud via relayd. If I access nextcloud straight via httpd, the error never pops up. Running relayd in debug mode, I saw the following difference: * when traffic works ok relay https_lan, session 2 (1 active), 0, 192.168.1.76 -> :8083, done, [Host: nextcloud.tumfatig.net] [User-Agent: Mozilla/5.0 (OpenBSD) mirall/3.0.1git (Nextcloud)] [nextcloud.tumfatig.net/ocs/v1.php/cloud/capabilities: format=json] GET -> 127.0.0.1:8083; [Host: nextcloud.tumfatig.net] [User-Agent: Mozilla/5.0 (OpenBSD) mirall/3.0.1git (Nextcloud)] [nextcloud.tumfatig.net/remote.php/dav/files/user85419/Uploads] PROPFIND; * when the error occurs relay https_lan, session 1 (1 active), 0, 192.168.1.76 -> 127.0.0.1:8083, done, [Host: nextcloud.tumfatig.net] [User-Agent: Mozilla/5.0 (OpenBSD) mirall/3.0.1git (Nextcloud)] [nextcloud.tumfatig.net/ocs/v1.php/cloud/capabilities: format=json] GET -> 127.0.0.1:8083; As you may notice, we can see "192.168.1.76 -> :8083" when it's working and "192.168.1.76 -> 127.0.0.1:8083" when it fails. But I can't see the reason for it in my relayd configuration. I've attached it to this mail. Any thoughts on what I'm doing wrong?
Thank you, Jo

# vim: ft=pf syntax=pf
lan_ip="192.168.1.1"
table { 127.0.0.1 }
table { 127.0.0.1 }
table { 127.0.0.1 }

log state changes
log connection

# HTTP ###
http protocol "http" {
    match header log "Host"
    match header log "X-Forwarded-For"
    match header log "User-Agent"
    match header log "Referer"
    match url log
    match header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
    match header set "Keep-Alive" value "$TIMEOUT"
    match response header set "X-Powered-By" value "Powered by OpenBSD"
    match request path "/.well-known/acme-challenge/*" forward to
    tcp { nodelay, socket buffer 65536, backlog 100 }
}
relay "http" {
    listen on $lan_ip port 80
    protocol "http"
    forward to port 8080 check tcp # HTTP to HTTPS redirection
    forward to port 8081 check tcp # Let's Encrypt renewal
}

# HTTPS ##
http protocol "https" {
    match header log "Host"
    match header log "X-Forwarded-For"
    match header log "User-Agent"
    match header log "Referer"
    match url log
    match header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
    match header set "Keep-Alive" value "$TIMEOUT"
    match response header set "X-Powered-by" value "OpenBSD"
    tcp { nodelay, socket buffer 65536, backlog 100 }
    tls keypair nextcloud.tumfatig.net
    # Default block
    block request path "/*"
    # Allow Let's Encrypt operations
    pass request path "/.well-known/acme-challenge/*" forward to
    # Nextcloud
    pass request forward to
}
relay "https_lan" {
    listen on $lan_ip port 443 tls
    protocol "https"
    forward to port 8081 check tcp # Let's Encrypt renewal
    forward to port 8083 check tcp # Nextcloud
}
Huawei E3372 loops detaching
Hi, I got a Huawei E3372 LTE USB stick and plugged it into my T460s running OpenBSD 6.8-stable/amd64. I tried all 3 USB ports and they all act the same way: the stick loops attaching/detaching forever. I also tried current (OpenBSD 6.8-current (GENERIC.MP) #308: Wed Feb 3 20:49:28 MST 2021) but it behaves the same.
Feb 4 18:35:33 ThinkBSD /bsd: umsm0 at uhub0 port 3 configuration 1 interface 0 "HUAWEI_MOBILE HUAWEI_MOBILE" rev 2.00/1.02 addr 5
Feb 4 18:35:33 ThinkBSD /bsd: umsm0 detached
Feb 4 18:35:34 ThinkBSD /bsd: cdce0 at uhub0 port 3 configuration 1 interface 0 "HUAWEI_MOBILE HUAWEI_MOBILE" rev 2.00/1.02 addr 5
Feb 4 18:35:34 ThinkBSD /bsd: cdce0: address 00:1e:10:1f:00:00
Feb 4 18:35:36 ThinkBSD /bsd: cdce0 detached
Feb 4 18:35:37 ThinkBSD /bsd: umsm0 at uhub0 port 3 configuration 1 interface 0 "HUAWEI_MOBILE HUAWEI_MOBILE" rev 2.00/1.02 addr 5
Feb 4 18:35:37 ThinkBSD /bsd: umsm0 detached
Feb 4 18:35:38 ThinkBSD /bsd: cdce0 at uhub0 port 3 configuration 1 interface 0 "HUAWEI_MOBILE HUAWEI_MOBILE" rev 2.00/1.02 addr 5
Feb 4 18:35:38 ThinkBSD /bsd: cdce0: address 00:1e:10:1f:00:00
Feb 4 18:35:40 ThinkBSD /bsd: cdce0 detached
Feb 4 18:35:41 ThinkBSD /bsd: umsm0 at uhub0 port 3 configuration 1 interface 0 "HUAWEI_MOBILE HUAWEI_MOBILE" rev 2.00/1.02 addr 5
Feb 4 18:35:41 ThinkBSD /bsd: umsm0 detached
A regular USB storage stick works perfectly, so I don't suspect faulty USB ports. I expected the stick to attach to umsm(4), as it is referenced in the man page; not to cdce0.
Here's some more USB information: # usbdevs - Controller /dev/usb0: addr 01: 8086: Intel, xHCI root hub super speed, self powered, config 1, rev 1.00 driver: uhub0 port 01: 0001.02a0 power Rx.detect port 02: .02a0 power Rx.detect port 03: 0001.02a0 power Rx.detect port 04: .0503 connect enabled recovery port 05: .02a0 power Rx.detect port 06: .02a0 power Rx.detect port 07: .0103 connect enabled recovery port 08: .0503 connect enabled recovery port 09: .02a0 power Rx.detect port 10: .0103 connect enabled recovery port 11: .02a0 power Rx.detect port 12: .02a0 power Rx.detect port 13: .02a0 power Rx.detect port 14: .02a0 power Rx.detect port 15: .02a0 power Rx.detect port 16: .02a0 power Rx.detect addr 02: 8087:0a2b Intel, Bluetooth full speed, self powered, config 1, rev 0.01 driver: ugen0 addr 03: 04f2:b52c Chicony Electronics Co.,Ltd., Integrated Camera high speed, power 500 mA, config 1, rev 0.29, iSerial 0001 driver: uvideo0 addr 04: 1fd2:6007 Melfas, LGDisplay Incell Touch full speed, power 100 mA, config 1, rev 1.00 driver: uhidev0 driver: uhidev1 addr 05: 12d1:14db HUAWEI_MOBILE, HUAWEI_MOBILE high speed, power 2 mA, config 1, rev 1.02 driver: cdce0 Any ideas? Thanks, Jo
Re: Issues with Teclast F7 Plus
On Fri, 2020-12-25 at 00:34 -0500, James Hastings wrote: > On 13 Dec 2020, 13:27:48 +0000, Joel Carnat wrote: > > Hello, > > > > I just got a Teclast F7 Plus laptop and installed OpenBSD 6.8- > > current on > > it. Most things works except apm and touchpad > > > > Using zzz or ZZZ, it seems suspend/hibernation start but are never > > achieved. The backlight keyboard and power led are still on. On > > Linux, > > keyboard goes black and power led slowly blinks. > > Plus, there is no way to resume the laptop. I have to force a > > poweroff > > using the power button. > > > > Regarding the touchpad, it doesn't work ; neither with wsmoused(8) > > nor > > in Xorg. It seems to be identified and attached to pms0. Looking at > > a > > Linux dmesg, it references I2C: > > [ 5.462957] kernel: input: HTIX5288:00 0911:5288 Touchpad as > > /devices/pci:00/:00:17.3/i2c_designware.7/i2c-8/i2c- > > HTIX5288:00/0018:0911:5288.0001/input/input11 > > So I guess OpenBSD should rather attach it to imt(4)? > > > This diff should activate I2C touchpad on your laptop: > > Index: dev/pci/dwiic_pci.c > === > RCS file: /cvs/src/sys/dev/pci/dwiic_pci.c,v > retrieving revision 1.14 > diff -u -p -u -r1.14 dwiic_pci.c > --- dev/pci/dwiic_pci.c 7 Oct 2020 11:17:59 - 1.14 > +++ dev/pci/dwiic_pci.c 23 Dec 2020 00:02:50 - > @@ -117,6 +117,14 @@ const struct pci_matchid dwiic_pci_ids[] > { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_APOLLOLAKE_I2C_6 }, > { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_APOLLOLAKE_I2C_7 }, > { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_APOLLOLAKE_I2C_8 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_1 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_2 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_3 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_4 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_5 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_6 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_7 }, > + { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_GLK_I2C_8 }, > }; > > 
int > Thanks a lot! It does activate the touchpad properly. It seems multitouch does not work. But single tap and properties can be managed via wsconsctl. Not sure why but the patch didn't apply on my fresh copy of the sources. Here's the diff applied to sources as of today. I've also attached the new dmesg if of any interest. OpenBSD 6.8-current (TECLAST) #0: Fri Dec 25 17:52:48 CET 2020 r...@teclast.tumfatig.lan:/usr/src/sys/arch/amd64/compile/TECLAST real mem = 8385544192 (7997MB) avail mem = 8116199424 (7740MB) random: good seed from bootblocks mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 3.2 @ 0x79ce9000 (74 entries) bios0: vendor American Megatrends Inc. version "S8K1_A1 tPAD 3.01" date 11/02/2020 acpi0 at bios0: ACPI 6.1 acpi0: sleep states S0 S3 S4 S5 acpi0: tables DSDT FACP FPDT FIDT MSDM MCFG DBG2 DBGP HPET LPIT APIC NPKT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT UEFI TPM2 DMAR WDAT WSMT acpi0: wakeup devices LID0(S3) HDAS(S3) XHC_(S3) XDCI(S4) RP01(S4) PXSX(S4) RP02(S4) PXSX(S4) RP03(S4) PXSX(S4) RP04(S4) PXSX(S4) RP05(S4) PXSX(S4) RP06(S4) PXSX(S4) acpitimer0 at acpi0: 3579545 Hz, 32 bits acpimcfg0 at acpi0 acpimcfg0: addr 0xe000, bus 0-255 acpihpet0 at acpi0: 1920 Hz acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: Intel(R) Celeron(R) N4100 CPU @ 1.10GHz, 4499.94 MHz, 06-7a-01 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,CX16,xTPR,PDCM,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,SMEP,ERMS,MPX,RDSEED,SMAP,CLFLUSHOPT,PT,SHA,UMIP,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN cpu0: 4MB 64b/line 16-way L2 cache cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges cpu0: 
apic clock running at 19MHz cpu0: mwait min=64, max=64, C-substates=0.2.0.2.4.2.1.1, IBE cpu at mainbus0: not configured cpu at mainbus0: not configured cpu at mainbus0: not configured ioapic0 at mainbus0: apid 1 pa 0xfec0, version 20, 120 pins acpiprt0 at acpi0: bus 0 (PCI0) acpiprt1 at acpi0: bus -1 (RP01) acpiprt2 at acpi0: bus -1 (RP02) acpiprt3 at acpi0: bus 1 (RP03) acpiprt4 at acpi0: bus 2 (RP04) acpiprt5 at acpi0: bus 3 (RP05) acpiprt6 at acpi0: bus -1 (RP06) acpiec0 at acpi0 acpi0: GPE 0x26 already enabled acpipci0 at acpi0 PCI0:
Issues with Teclast F7 Plus
Hello, I just got a Teclast F7 Plus laptop and installed OpenBSD 6.8-current on it. Most things work except apm and the touchpad. Using zzz or ZZZ, it seems suspend/hibernation starts but is never completed. The keyboard backlight and power led are still on. On Linux, the keyboard goes black and the power led slowly blinks. Plus, there is no way to resume the laptop. I have to force a poweroff using the power button. Regarding the touchpad, it doesn't work; neither with wsmoused(8) nor in Xorg. It seems to be identified and attached to pms0. Looking at a Linux dmesg, it references I2C: [5.462957] kernel: input: HTIX5288:00 0911:5288 Touchpad as /devices/pci:00/:00:17.3/i2c_designware.7/i2c-8/i2c-HTIX5288:00/0018:0911:5288.0001/input/input11 So I guess OpenBSD should rather attach it to imt(4)? Find OpenBSD & Linux dmesg attached. I also added output from OpenBSD pcidump, sysctl, usbdevs in case it helps. Thanks for help, Joel OpenBSD 6.8-current (GENERIC.MP) #222: Sat Dec 12 10:30:51 MST 2020 dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP real mem = 8385544192 (7997MB) avail mem = 8116105216 (7740MB) random: good seed from bootblocks mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 3.2 @ 0x79ce9000 (74 entries) bios0: vendor American Megatrends Inc.
version "S8K1_A1 tPAD 3.01" date 11/02/2020 acpi0 at bios0: ACPI 6.1 acpi0: sleep states S0 S3 S4 S5 acpi0: tables DSDT FACP FPDT FIDT MSDM MCFG DBG2 DBGP HPET LPIT APIC NPKT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT SSDT UEFI TPM2 DMAR WDAT WSMT acpi0: wakeup devices LID0(S3) HDAS(S3) XHC_(S3) XDCI(S4) RP01(S4) PXSX(S4) RP02(S4) PXSX(S4) RP03(S4) PXSX(S4) RP04(S4) PXSX(S4) RP05(S4) PXSX(S4) RP06(S4) PXSX(S4) acpitimer0 at acpi0: 3579545 Hz, 32 bits acpimcfg0 at acpi0 acpimcfg0: addr 0xe000, bus 0-255 acpihpet0 at acpi0: 1920 Hz acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: Intel(R) Celeron(R) N4100 CPU @ 1.10GHz, 1097.40 MHz, 06-7a-01 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,CX16,xTPR,PDCM,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,SMEP,ERMS,MPX,RDSEED,SMAP,CLFLUSHOPT,PT,SHA,UMIP,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN cpu0: 4MB 64b/line 16-way L2 cache cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges cpu0: apic clock running at 19MHz cpu0: mwait min=64, max=64, C-substates=0.2.0.2.4.2.1.1, IBE cpu1 at mainbus0: apid 2 (application processor) cpu1: Intel(R) Celeron(R) N4100 CPU @ 1.10GHz, 1096.64 MHz, 06-7a-01 cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,CX16,xTPR,PDCM,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,SMEP,ERMS,MPX,RDSEED,SMAP,CLFLUSHOPT,PT,SHA,UMIP,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN cpu1: 4MB 64b/line 16-way L2 cache cpu1: smt 0, core 1, 
package 0 cpu2 at mainbus0: apid 4 (application processor) cpu2: Intel(R) Celeron(R) N4100 CPU @ 1.10GHz, 1096.97 MHz, 06-7a-01 cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,CX16,xTPR,PDCM,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,SMEP,ERMS,MPX,RDSEED,SMAP,CLFLUSHOPT,PT,SHA,UMIP,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN cpu2: 4MB 64b/line 16-way L2 cache cpu2: smt 0, core 2, package 0 cpu3 at mainbus0: apid 6 (application processor) cpu3: Intel(R) Celeron(R) N4100 CPU @ 1.10GHz, 1096.29 MHz, 06-7a-01 cpu3: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,CX16,xTPR,PDCM,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,SMEP,ERMS,MPX,RDSEED,SMAP,CLFLUSHOPT,PT,SHA,UMIP,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN cpu3: 4MB 64b/line 16-way L2 cache cpu3: smt 0, core 3, package 0 ioapic0 at mainbus0: apid 1 pa 0xfec0, version 20, 120 pins acpiprt0 at acpi0: bus 0 (PCI0) acpiprt1 at acpi0: bus -1 (RP01) acpiprt2 at acpi0: bus -1 (RP02) acpiprt3 at acpi0: bus 1 (RP03) acpiprt4 at acpi0: bus 2 (RP04) acpiprt5 at acpi0: bus 3 (RP05) acpiprt6 at acpi0: bus -1 (RP06) acpiec0 at acpi0 acpi0: GPE 0x26 already
dhcpd and pf table with fixed-address
Hello, I have linked dhcpd(8) and pf(4) using the -A, -C and -L dhcpd flags. It seems dhcpd only adds IPs for dynamic leases, and not for leases configured using fixed-address. Is this expected or is there something I misconfigured? Thanks, Jo

PS: configuration extracts

rc.conf.local:
dhcpd_flags="-A abandoned_ip_table -C changed_ip_table -L leased_ip_table em1"

pf.conf:
table <abandoned_ip_table> persist
table <changed_ip_table> persist
table <leased_ip_table> persist

dhcpd.conf:
(...)
subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.20 192.168.2.50;
    (...)
    host Raspberry-Pi-4 {
        hardware ethernet xx:xx:xx:xx:xx:xx;
        fixed-address 192.168.2.250;
    }
    (...)
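If dhcpd(8) really only maintains dynamic leases in the -L table, one workaround is to keep the fixed-address hosts in a separate persist table that pf loads from a file, so rules can cover both. A sketch; the table name, file path, and pass rule are made up for illustration:

```
# pf.conf sketch: <leased_ip_table> is filled by dhcpd at runtime (-L flag);
# fixed-address hosts go in a hand-maintained file instead.
table <leased_ip_table> persist
table <fixed_ip_table> persist file "/etc/dhcpd-fixed-hosts"

pass in on em1 from { <leased_ip_table>, <fixed_ip_table> }
```

/etc/dhcpd-fixed-hosts would then list one address per line, e.g. 192.168.2.250, and must be kept in sync with the fixed-address entries in dhcpd.conf by hand.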
Re: Issues with TP-Link UE300
Hi, This seems to work much better! Transferring files via NFS, I could sustain from 118 to 148Mbps.
Kernel says:
ure0 at uhub0 port 15 configuration 1 interface 0 "TP-LINK USB 10/100/1000 LAN" rev 3.00/30.00 addr 5
ure0: RTL8153 (0x5c20), address d0:37:45:xx:xx:xx
rgephy0 at ure0 phy 0: RTL8251 PHY, rev. 0
ifconfig says:
ure0: flags=808843 mtu 1500
usbdevs says:
addr 05: 2357:0601 TP-LINK, USB 10/100/1000 LAN super speed, power 64 mA, config 1, rev 30.00, iSerial 0100 driver: ure0
iperf3 says:
[ 5] 0.00-10.00 sec 618 MBytes 518 Mbits/sec sender
[ 5] 0.00-10.13 sec 618 MBytes 512 Mbits/sec receiver
Thank you very much.
On Mon, Sep 28, 2020 at 10:30:16AM +0800, Kevin Lo wrote: > On Sun, Sep 27, 2020 at 11:43:13PM +0200, Joel Carnat wrote: > > > > Hi, > > > > I have plugged a TP-Link UE300 on my ThinkPad X260 running OpenBSD -snapshot > > and it seems I can't get more than 100Mbps. > > > > The dongle attaches and get an IP address. But the speed seems limited. > > Same behaviour when attached to the USB3 port of my APU4D4 (running 6.7). > > When plugged in a MacBook Pro (running macos), it gets Gbps speed. > > > > I have noticed that it gets attached to cdce0; > > I thought the RTL8153 chipset would give me an ure0 device. > > > > Is this expected? > > Is there something I can do to get Gbps out of this device? > > Please try this diff, thanks.
> > Index: sys/dev/usb/if_ure.c > === > RCS file: /cvs/src/sys/dev/usb/if_ure.c,v > retrieving revision 1.18 > diff -u -p -u -p -r1.18 if_ure.c > --- sys/dev/usb/if_ure.c 4 Aug 2020 14:45:46 - 1.18 > +++ sys/dev/usb/if_ure.c 28 Sep 2020 02:24:40 - > @@ -76,7 +76,8 @@ const struct usb_devno ure_devs[] = { > { USB_VENDOR_LENOVO, USB_PRODUCT_LENOVO_DOCK_ETHERNET }, > { USB_VENDOR_REALTEK, USB_PRODUCT_REALTEK_RTL8152 }, > { USB_VENDOR_REALTEK, USB_PRODUCT_REALTEK_RTL8153 }, > - { USB_VENDOR_REALTEK, USB_PRODUCT_REALTEK_RTL8156 } > + { USB_VENDOR_REALTEK, USB_PRODUCT_REALTEK_RTL8156 }, > + { USB_VENDOR_TPLINK, USB_PRODUCT_TPLINK_UE300 } > }; > > int ure_match(struct device *, void *, void *); > Index: sys/dev/usb/usbdevs > === > RCS file: /cvs/src/sys/dev/usb/usbdevs,v > retrieving revision 1.720 > diff -u -p -u -p -r1.720 usbdevs > --- sys/dev/usb/usbdevs 3 Aug 2020 14:25:44 - 1.720 > +++ sys/dev/usb/usbdevs 28 Sep 2020 02:24:40 - > @@ -4317,6 +4317,7 @@ product TPLINK RTL8192EU0x0107 RTL8192E > product TPLINK RTL8192EU_2 0x0108 RTL8192EU > product TPLINK RTL8192EU_3 0x0109 RTL8192EU > product TPLINK RTL8188EUS0x010c RTL8188EUS > +product TPLINK UE300 0x0601 UE300 Ethernet > > /* Trek Technology products */ > product TREK THUMBDRIVE 0x ThumbDrive > Index: sys/dev/usb/usbdevs.h > === > RCS file: /cvs/src/sys/dev/usb/usbdevs.h,v > retrieving revision 1.732 > diff -u -p -u -p -r1.732 usbdevs.h > --- sys/dev/usb/usbdevs.h 3 Aug 2020 14:25:56 - 1.732 > +++ sys/dev/usb/usbdevs.h 28 Sep 2020 02:24:40 - > @@ -1,4 +1,4 @@ > -/* $OpenBSD: usbdevs.h,v 1.732 2020/08/03 14:25:56 deraadt Exp $ */ > +/* $OpenBSD$ */ > > /* > * THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT. 
> @@ -4324,6 +4324,7 @@ > #define USB_PRODUCT_TPLINK_RTL8192EU_2 0x0108 /* RTL8192EU */ > #define USB_PRODUCT_TPLINK_RTL8192EU_3 0x0109 /* RTL8192EU */ > #define USB_PRODUCT_TPLINK_RTL8188EUS 0x010c /* RTL8188EUS */ > +#define USB_PRODUCT_TPLINK_UE3000x0601 /* UE300 > Ethernet */ > > /* Trek Technology products */ > #define USB_PRODUCT_TREK_THUMBDRIVE 0x /* ThumbDrive */ > Index: sys/dev/usb/usbdevs_data.h > === > RCS file: /cvs/src/sys/dev/usb/usbdevs_data.h,v > retrieving revision 1.726 > diff -u -p -u -p -r1.726 usbdevs_data.h > --- sys/dev/usb/usbdevs_data.h3 Aug 2020 14:25:56 - 1.726 > +++ sys/dev/usb/usbdevs_data.h28 Sep 2020 02:24:40 - > @@ -1,4 +1,4 @@ > -/* $OpenBSD: usbdevs_data.h,v 1.726 2020/08/03 14:25:56 deraadt Exp $ > */ > +/* $OpenBSD$ */ > > /* > * THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT. > @@ -11068,6 +11068,10 @@ const struct usb_known_product usb_known > { > USB_VENDOR_TPLINK, USB_PRODUCT_TPLINK_RTL8188EUS, > "RTL8188EUS", > + }, > + { > + USB_VENDOR_TPLINK, USB_PRODUCT_TPLINK_UE300, > + "UE300 Ethernet", > }, > { > USB_VENDOR_TREK, USB_PRODUCT_TREK_THUMBDRIVE, >
Re: Issues with TP-Link UE300
Well, this is not a wifi device; it is an Ethernet dongle. That particular one: https://www.tp-link.com/en/home-networking/computer-accessory/ue300/ Sent from my iPad > Le 28 sept. 2020 à 00:55, Torsten a écrit : > > HI > As far as I can tell, WiFi is nominal speed, not designated speed > Another dominating factors for that would be USB connection type, hardware > bus connections, motherboard design, direct processor lanes to where > > Wifi is what it is, never as good as hard wired 100mb/1000mb or even 10gb > connections > > Best > T > > -Original Message- > From: owner-m...@openbsd.org On Behalf Of Joel Carnat > Sent: 27 September 2020 22:43 > To: misc@openbsd.org > Subject: Issues with TP-Link UE300 > > Hi, > > I have plugged a TP-Link UE300 on my ThinkPad X260 running OpenBSD -snapshot > and it seems I can't get more than 100Mbps. > > The dongle attaches and get an IP address. But the speed seems limited. > Same behaviour when attached to the USB3 port of my APU4D4 (running 6.7). > When plugged in a MacBook Pro (running macos), it gets Gbps speed. > > I have noticed that it gets attached to cdce0; I thought the RTL8153 chipset > would give me an ure0 device. > > Is this expected? > Is there something I can do to get Gbps out of this device? > > Thanks for help, > Jo > > -- > OpenBSD 6.8 (GENERIC.MP) #85: Sun Sep 27 13:39:51 MDT 2020 > > cdce0 at uhub0 port 15 configuration 2 interface 0 "TP-LINK USB 10/100/1000 > LAN" rev 3.00/30.00 addr 4 > > # doas usbdevs -v > > Controller /dev/usb0: > addr 01: 8086: Intel, xHCI root hub > super speed, self powered, config 1, rev 1.00 > driver: uhub0 > addr 02: 8087:0a2b Intel, Bluetooth > full speed, self powered, config 1, rev 0.01 > driver: ugen0 > addr 03: 5986:0706 SunplusIT Inc, Integrated Camera > high speed, power 500 mA, config 1, rev 0.12 > driver: uvideo0 > addr 04: 2357:0601 TP-LINK, USB 10/100/1000 LAN > super speed, power 64 mA, config 2, rev 30.00, iSerial 0100 > driver: cdce0 > >
Issues with TP-Link UE300
Hi, I have plugged a TP-Link UE300 on my ThinkPad X260 running OpenBSD -snapshot and it seems I can't get more than 100Mbps. The dongle attaches and gets an IP address. But the speed seems limited. Same behaviour when attached to the USB3 port of my APU4D4 (running 6.7). When plugged in a MacBook Pro (running macos), it gets Gbps speed. I have noticed that it gets attached to cdce0; I thought the RTL8153 chipset would give me an ure0 device. Is this expected? Is there something I can do to get Gbps out of this device? Thanks for help, Jo
--
OpenBSD 6.8 (GENERIC.MP) #85: Sun Sep 27 13:39:51 MDT 2020
cdce0 at uhub0 port 15 configuration 2 interface 0 "TP-LINK USB 10/100/1000 LAN" rev 3.00/30.00 addr 4
# doas usbdevs -v
Controller /dev/usb0:
addr 01: 8086: Intel, xHCI root hub
 super speed, self powered, config 1, rev 1.00
 driver: uhub0
addr 02: 8087:0a2b Intel, Bluetooth
 full speed, self powered, config 1, rev 0.01
 driver: ugen0
addr 03: 5986:0706 SunplusIT Inc, Integrated Camera
 high speed, power 500 mA, config 1, rev 0.12
 driver: uvideo0
addr 04: 2357:0601 TP-LINK, USB 10/100/1000 LAN
 super speed, power 64 mA, config 2, rev 30.00, iSerial 0100
 driver: cdce0
Re: match two conditions in relayd(8)
On Mon, Jan 27, 2020 at 09:22:40PM +0100, Sebastian Benoit wrote: > Joel Carnat(j...@carnat.net) on 2020.01.27 18:21:43 +0100: > > Hi, > > > > I'm setting up an HTTP(S) Reverse Proxy with relayd(8). > > > > I have one listener with multiple FQDN allowed. > > But I also have a common path that must be treated separately. > > > > As for now, I have: > > http protocol "https" { > > match request header "Host" value "one.domain.local" forward to > > match request header "Host" value "two.domain.local" forward to > > match request path "/common/*" forward to > > } > > relay "domain.local" { > > listen on egress port 443 tls > > protocol "https" > > forward toport 80 check tcp > > forward toport 80 check tcp > > forward to port 80 check tcp > > } > > > > With this configuration, both "/common/" are rendered by . > > But I want > > "one.domain.local/*" to be rendered by > > "one.domain.local/common/" to be rendered by > > "two.domain.local/*" to be rendered by > > "two.domain.local/common/" to be rendered by > > > > Is there some way to achieve this? > > Try using "quick" or possibly "tag" and "tagged". > > /Benno > tag/tagged seem to do the trick. Thanks a lot!
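For archive readers, a rough sketch of what the tag/tagged approach can look like. The table names are placeholders, the exact keyword order is from memory rather than a tested config, and the rule ordering assumes pf-like last-match semantics (use "quick" to stop at a rule instead):

```
http protocol "https" {
    # first, classify the request by Host header
    match request header "Host" value "one.domain.local" tag "one"
    match request header "Host" value "two.domain.local" tag "two"

    # then route: generic rules first, the more specific /common/ rules last
    match request tagged "one" forward to <web_one>
    match request tagged "two" forward to <web_two>
    match request path "/common/*" tagged "one" forward to <common_one>
    match request path "/common/*" tagged "two" forward to <common_two>
}
relay "domain.local" {
    listen on egress port 443 tls
    protocol "https"
    forward to <web_one> port 80 check tcp
    forward to <web_two> port 80 check tcp
    forward to <common_one> port 80 check tcp
    forward to <common_two> port 80 check tcp
}
```

This gives each Host its own backend for "/*" and its own backend for "/common/*", matching the four-way split described in the original question.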
match two conditions in relayd(8)
Hi, I'm setting up an HTTP(S) reverse proxy with relayd(8). I have one listener with multiple FQDNs allowed. But I also have a common path that must be treated separately. As for now, I have:

http protocol "https" {
    match request header "Host" value "one.domain.local" forward to
    match request header "Host" value "two.domain.local" forward to
    match request path "/common/*" forward to
}
relay "domain.local" {
    listen on egress port 443 tls
    protocol "https"
    forward to port 80 check tcp
    forward to port 80 check tcp
    forward to port 80 check tcp
}

With this configuration, both "/common/" are rendered by . But I want
"one.domain.local/*" to be rendered by
"one.domain.local/common/" to be rendered by
"two.domain.local/*" to be rendered by
"two.domain.local/common/" to be rendered by

Is there some way to achieve this? Thank you.
snmpd(8) custom OID names
Hello, I have set custom OIDs in my snmpd.conf(5). When I walk or get those values, using snmp(1) or snmpget(1), the "name" parameter is not listed. I only get values described as OPENBSD-BASE-MIB::localTest.* Is there a straight way to get the configured names from snmp clients? Or do I have to write a MIB file for this particular localTest sub-MIB? TIA, Jo
How to specify "device" option in vm.conf to always boot PXE
Hi, I need a VM to always boot from the network. I could do it using vmctl(8):

# doas vmctl start test -c -B net -b /bsd -n vswitch0
(...)
PXE boot MAC address fe:e1:bb:d1:c5:d8, interface vio0
nfs_boot: using interface vio0, with revarp & bootparams

But I can't find the syntax to be used in vm.conf(5). Using "boot /bsd" would not use PXE. Using "boot /pxeboot" would not boot at all. Is it possible to set the boot device as net in vm.conf? Alternatively, is it possible to force lladdr from vmctl? Thank you, Jo
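For archive readers: later versions of vm.conf(5) grew a "boot device" keyword mirroring vmctl's -B flag, and the interface block accepts a fixed lladdr. If those keywords are available on your release, a definition could look like this (untested sketch; the VM name and sizes are placeholders):

```
vm "test" {
    boot device net
    memory 512M
    interface {
        switch "vswitch0"
        lladdr fe:e1:bb:d1:c5:d8
    }
}
```

Check your installed vm.conf(5) man page first; on releases that predate "boot device", the vmctl -B invocation shown above remains the only option.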
Re: relayd shows ssh sessions as idle
On Mon, Jun 17, 2019 at 11:56:08PM +0200, Sebastian Benoit wrote: > Joel Carnat(j...@carnat.net) on 2019.06.12 16:10:25 +0200: > > Hi, > > > > I have configured relayd(8) on my vmd(8) host so that I can connect to > > the running VMs using SSH. > > > > Using relayctl(8), I can see that those sessions have the same value for > > age and idle; even when something happens in the SSH sessions. > > > > Is this expected or an error in my relayd.conf? > > > > Thanks. > > > > > > # config snippet > > > > protocol sshtcp { > > tcp { nodelay, socket buffer 65536 } > > this uses the implicit "splice" option. > > If you add "no splice" to the tcp options, the idle time will be reset. > > The reason is this: After connection setup, relayd "splices" the socket > connecting to the ssh client to the socket connecting to the ssh server. > After that, the kernel takes care of transferring data between the client > connection and the forward connection. relayd does not see the traffic > anymore. > > It will only touch the connection again, when a maximum number of bytes are > transferred, or a timeout triggers. > > For tcp connections, the max number of bytes is unlimited, and the timeout > is set to your session timeout. > > (For http connections, the max number of bytes is smaller, because relayd > wants to look at the headers of the next http request). > > So relayd cannot know if the connection has been idle. It will only know > when it reaches "session timeout". If you don't like this, use "no splice". > However, that makes the connection slower and consume more cpu. > > /Benno > Thanks a lot for this detailed explanation. I'll check cpu consumption and connection speed to see if I'd rather stick with a long timeout configuration. Regards, Jo
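Based on that explanation, the protocol block with splicing disabled would simply be (a sketch, using the "no splice" tcp option named in relayd.conf(5)):

```
protocol sshtcp {
    tcp { no splice, nodelay, socket buffer 65536 }
}
```

With this, relayd keeps relaying the traffic in userland, so the idle counter in relayctl reflects actual activity, at the stated cost in throughput and CPU.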
relayd shows ssh sessions as idle
Hi, I have configured relayd(8) on my vmd(8) host so that I can connect to the running VMs using SSH. Using relayctl(8), I can see that those sessions have the same value for age and idle ; even when something happens in the SSH sessions. Is this expected or an error in my relayd.conf ? Thanks. # config snippet protocol sshtcp { tcp { nodelay, socket buffer 65536 } } relay ssh_vm1 { listen on $public_ip port 8022 protocol sshtcp transparent forward to $vm1 port 8022 session timeout 28800 } #
Re: productivity/khard (or python) seem slow
On Sat 18/05 19:15, Strahil wrote: > I run vanilla openBSD 6.5 on oVirt (KVM) with gluster as storage and it seems > OK for my needs but I never used khard. > What kind of slowness do you experience? > Maybe I can run some tests and see if the situation is the same on KVM. > Well, it takes several seconds to run. Nearly 3 seconds to list only 100 cards. From 3 to 5 seconds to issue a search from Mutt.
Re: productivity/khard (or python) seem slow
On Sat 18/05 11:39, David Mimms wrote: > On 2019.05.17 11:41, Paco Esteban wrote: > > On Thu, 16 May 2019, Joel Carnat wrote: > > > > > On Thu 16/05 08:55, Paco Esteban wrote: > > > > Can't say about your VM. On my desktop: > > > > > > > > $ time (khard list | wc -l) > > > >104 > > > > ( khard list | wc -l; ) 0.51s user 0.25s system 97% cpu 0.779 total > > > > > > > > > > Is this on OpenBSD ? The time output looks different. > > > > Of course it is ... (-current though) > > That should be zsh that uses an internal builtin instead of > > /usr/bin/time I guess (did not check). > > > > Here it is on ksh with base time: > > > > $ time (khard list | wc -l) > > 104 > > 0m00.81s real 0m00.59s user 0m00.21s system > > > > Interestingly a bit slower. > > What CPU and storage are you running? > The ThinkPad is: CPU: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz RAM: 8GB DISK: C300-CTFDDAC256M connected on "Intel 100 Series AHCI" @6.0Gb/s ROOT: The VM runs on Synology KVM: CPU: Intel(R) Celeron(R) CPU J3455 @ 1.50GHz RAM: 16GB DISK: Samsung SSD 850 EVO 1TB
Re: productivity/khard (or python) seem slow
On Thu 16/05 08:55, Paco Esteban wrote: > Hi Joel, > > On Wed, 15 May 2019, Joel Carnat wrote: > > > Hello, > > > > I've just set up vdirsyncer and khard to sync my addressbook from > > nextcloud. It works but querying the local vcf is damn slow. I also > > noticed that ranger felt a bit slow to start but thought it was the > > software ; so I switched to nnn. > > > > # time (khard list | wc -l) > > 112 > > 0m07.10s real 0m04.08s user 0m02.99s system > > > > Is this an issue with my VM (2 vCPU / 4GB RAM / 20GB SSD) or is Python > > software just slow? > > Can't say about your VM. On my desktop: > > $ time (khard list | wc -l) >104 > ( khard list | wc -l; ) 0.51s user 0.25s system 97% cpu 0.779 total > Is this on OpenBSD ? The time output looks different. Replaying the whole scenario on real hardware (ThinkPad X260), things are a little bit better. But not that fast. # time (khard list | wc -l) 114 0m02.49s real 0m01.35s user 0m01.06s system Feels as slow as Firefox to start. Really annoying for a "simple" console application. It requires seconds to look for a contact when queried from Mutt. > Ranger works just fine. It takes less than a second to start. Ranger is also a bit better but not that much. About 1 or 2 seconds to launch. Meanwhile, top or mutt start nearly instantaneously.
productivity/khard (or python) seem slow
Hello, I've just set up vdirsyncer and khard to sync my addressbook from nextcloud. It works but querying the local vcf is damn slow. I also noticed that ranger felt a bit slow to start but thought it was the software ; so I switched to nnn. # time (khard list | wc -l) 112 0m07.10s real 0m04.08s user 0m02.99s system Is this an issue with my VM (2 vCPU / 4GB RAM / 20GB SSD) or is Python software just slow? Thanks.
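Since khard is a Python program, a large share of those seconds is usually interpreter startup and imports rather than the vCard query itself. CPython 3.7+ can break that down directly; `import email` below is a stdlib stand-in, substitute `import khard` on a machine where it is installed:

```shell
# Print per-module import cost; the slowest imports show the largest
# "cumulative" values on stderr.
python3 -X importtime -c 'import email' 2>&1 | tail -n 5
```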
Re: Running php cli when php-fpm uses chroot
On Fri 12/04 15:37, Éric Jacquot wrote: > Hi, > > Le Friday 12 April 2019 à 11:53 +0200, Joel Carnat a écrit : > > Hi, > > > > Is there a better way to handle chroot environnement when running php > > scripts from the cli? > > > > According to pkg-readme (/usr/local/share/doc/pkg-readmes/nextcloud) > > A symlink can be created to workaround this issue: > # ln -sf /var/www/nextcloud /nextcloud > That doesn't seem to work as-is. But... I have installed NC manually and not using ports. Anyway, the pkg-readmes note confirms that it's the way to go. Thanks.
Running php cli when php-fpm uses chroot
Hi, When php-fpm is configured to use chroot, it seems the php(1) cli still tries to work unchrooted. So when running maintenance php scripts (like occ from Nextcloud), errors are raised about missing resources (like the mysql socket etc). I couldn't find an option for the php(1) command to "run as chroot". I didn't want to copy php(1) and its deps into the chroot directory. I could manage running the occ command by doing "dirty things": # doas sh -c "ln -s /var/www/htdocs /htdocs && ln -s /var/www/run /run" # doas -u www php occ db:convert-filecache-bigint # doas sh -c "rm /htdocs /run" Is there a better way to handle a chroot environment when running php scripts from the cli? Thanks.
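The "dirty" symlink dance can at least be wrapped so the links are always removed, even when the command fails halfway. A hypothetical sketch (run_with_links is a made-up name; the htdocs/run pair matches the commands above); run it as root, passing an empty DEST to link at /:

```shell
# run_with_links SRC DEST CMD...
# Create htdocs/run symlinks from SRC into DEST, run CMD, then remove
# the links again (even when CMD fails) and return CMD's exit status.
run_with_links() {
    _src=$1 _dest=$2
    shift 2
    ln -s "$_src/htdocs" "$_dest/htdocs"
    ln -s "$_src/run" "$_dest/run"
    "$@"
    _rc=$?
    rm -f "$_dest/htdocs" "$_dest/run"
    return $_rc
}

# Intended use (as root, matching the commands above):
#   run_with_links /var/www "" doas -u www php occ db:convert-filecache-bigint
```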
Re: influxdb goes "panic:runtime error: index out of range"
On Mon 08/04 09:00, Daniel Jakots wrote: > On Mon, 8 Apr 2019 13:58:27 +0200, Joel Carnat wrote: > > > On a fresh influxdb instance in an OpenBSD VM: same issue. On a > > fresh influxdb instance in a Linux Ubuntu VM: the error disappears and > > the query gets the correct answers. > > Did you install the exact same influxdb version on Linux? > Yep. 1.6.1 on OpenBSD. 1.1, 1.7.x, 1.6.1 on Ubuntu. > I deleted some series or something else and then > if I do now show series it says the same > > show series > ERR: SHOW SERIES [panic:runtime error: index out of range] > > I thought it was probably an influx's bug so I asked the hidden > maintainer to update but he politely said no :) > > > Find attached the complete log. > > It's quite unreadable as is :p > > Cheers, > Daniel
influxdb goes "panic:runtime error: index out of range"
Hi, On InfluxDB, I'm getting "panic:runtime error: index out of range" every time I run the "SHOW TAG VALUES FROM unbound WITH KEY = clientip WHERE sysName =~ /$hostname/" query from Grafana. And I also get it using the influx shell. I've tried various things, like giving more resources (via login.conf), setting various log levels to identify what's wrong, erasing the database and starting from scratch, manually inserting data into some other database. I can't solve this problem. Even giving more RAM to the VM doesn't help. I thought it could be an issue with bad data or badly stored data so I configured my feeds to fill both my actual instance and new ones I just created. On a fresh influxdb instance in an OpenBSD VM: same issue. On a fresh influxdb instance in a Linux Ubuntu VM: the error disappears and the query gets the correct answers. Find attached the complete log. Any ideas? Thanks.
Apr 8 10:50:07 akeela influxd: ts=2019-04-08T08:50:07.974342Z lvl=error msg="SHOW TAG VALUES FROM unbound WITH KEY = clientip WHERE sysName =~ /cherie/ [panic:runtime error: index out of range] goroutine 1382 [running]:
runtime/debug.Stack(0xc01f8cf8e0, 0xc02c426dc0, 0x4a)
	/usr/local/go/src/runtime/debug/stack.go:24 +0xa7
github.com/influxdata/influxdb/query.(*Executor).recover(0xc000162ed0, 0xc01f8cf8e0, 0xc02a2922a0)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/query/executor.go:394 +0xaf
panic(0xcf2640, 0x156fb00)
	/usr/local/go/src/runtime/panic.go:513 +0x1b9
encoding/binary.bigEndian.Uint16(...)
	/usr/local/go/src/encoding/binary/binary.go:100
github.com/influxdata/influxdb/tsdb.ReadSeriesKeyMeasurement(...)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/tsdb/series_file.go:338
github.com/influxdata/influxdb/tsdb.ParseSeriesKey(0x2835e5fb4, 0x1, 0x3d204c, 0x1, 0x3d204c, 0x0, 0x2ebb113c2, 0x1, 0x3d1c3e)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/tsdb/series_file.go:359 +0x35a
github.com/influxdata/influxdb/tsdb.IndexSet.tagValuesByKeyAndExpr(0xc01f8cfd00, 0x1, 0x2, 0xc0001f49b0, 0xc003bcd950, 0x1, 0x2, 0xf17960, 0x15a5578, 0xc0071980b0, ...)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/tsdb/index.go:2359 +0x651
github.com/influxdata/influxdb/tsdb.IndexSet.MeasurementTagKeyValuesByExpr(0xc01f8cfd00, 0x1, 0x2, 0xc0001f49b0, 0xc003bcd950, 0x1, 0x2, 0xf17960, 0x15a5578, 0xc0071980b0, ...)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/tsdb/index.go:2470 +0xbe4
github.com/influxdata/influxdb/tsdb.(*Store).TagValues(0xc000208000, 0xf17960, 0x15a5578, 0xc028fce1c0, 0x2, 0x2, 0xf14560, 0xc01e17ef00, 0x32f2d7fe, 0x109d537404, ...)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/tsdb/store.go:1574 +0xeb9
github.com/influxdata/influxdb/coordinator.(*StatementExecutor).executeShowTagValues(0xc0001f60e0, 0xc0001a9880, 0xc000176300, 0x48, 0xd9b0c0)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/coordinator/statement_executor.go:1050 +0x31f
github.com/influxdata/influxdb/coordinator.(*StatementExecutor).ExecuteStatement(0xc0001f60e0, 0xf17720, 0xc0001a9880, 0xc000176300, 0x1, 0x1)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/coordinator/statement_executor.go:194 +0x23a9
github.com/influxdata/influxdb/query.(*Executor).executeQuery(0xc000162ed0, 0xc01f8cf8e0, 0xc0250c33cc, 0x4, 0x0, 0x0, 0xf17960, 0x15a5578, 0x2710, 0x0, ...)
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/query/executor.go:334 +0x355
created by github.com/influxdata/influxdb/query.(*Executor).ExecuteQuery
	/usr/obj/ports/influxdb-1.6.1/go/src/github.com/influxdata/influxdb/query/executor.go:236 +0xc9
" log_id=0Eg21QnW000 service=query
Apr 8 10:50:07 akeela influxd: [httpd] 192.168.0.128 - - [08/Apr/2019:10:50:07 +0200] "POST /query?chunked=true&db=logs&q=SHOW+TAG+VALUES+FROM+%22unbound%22+WITH+KEY+%3D+%22clientip%22+WHERE+%22sysName%22+%3D~+%2Fcherie%2F HTTP/1.1" 200 171 "-" "InfluxDBShell/unknown" 4f985e96-59db-11e9-803d-15201
Re: Touchpad - how to enable two-finger scrolling
Hi, On Sun 31/03 03:56, Brogan wrote: > Hello, > > I recently installed OpenBSD 6.4 on a Dell Latitude 6430u and am trying to > get touchpad two-finger scrolling working in X11. As far as I can tell the > touchpad is being loaded via wsmouse but I'm not sure how or where to > properly configure the WSMOUSECFG_TWOFINGERSCROLL found in > /usr/include/dev/wscons/wsconsio.h. > > It does not appear that I have a base xorg.conf file. At least not in the > usual places /etc/ or /etc/X11/. I'm guessing I need to establish that but > I'm not sure how to get started. > > If there's any config file or machine log output I can provide to assist with > helping me let me know. > > Thank you. On my ThinkPad X260, I had to add an extra configuration file: # cat /etc/X11/xorg.conf.d/synaptics.conf # ThinkPad X260 ships with Synaptics clickpad Section "InputClass" Identifier "touchpad" Driver "synaptics" MatchIsTouchpad "on" Option "Device" "/dev/wsmouse0" #Option "Device" "wsmouse" Option "Protocol" "auto-dev" Option "ClickPad" "true" Option "VertTwoFingerScroll" "true" Option "HorizTwoFingerScroll" "true" Option "TapButton1" "1" # Left button Option "TapButton2" "3" # Right button Option "PalmDetect" "true" EndSection #EOF Hope it helps.
Broadcom BCM4356, bwfm0: could not read io type
Hi, I took my working 6.5-BETA disk out of a ThinkPad X230i and plugged it into a ThinkPad X260. The system boots ok and I can get an X session. But the wireless card doesn't seem to work. # dmesg bwfm0 at pci2 dev 0 function 0 "Broadcom BCM4356" rev 0x02: msi bwfm_pci_intr: handle MB data bwfm0: could not read io type bwfm0: could not read io type bwfm0: could not init # doas ifconfig bwfm0 scan bwfm0: flags=8803 mtu 1500 lladdr 00:00:00:00:00:00 index 1 priority 4 llprio 3 groups: wlan media: IEEE802.11 autoselect status: no network ieee80211: nwid "" ifconfig: SIOCG80211ALLNODES: Network is down # pcidump -v 4:0:0: Broadcom BCM4356 0x: Vendor ID: 14e4 Product ID: 43ec 0x0004: Command: 0006 Status: 0010 0x0008: Class: 02 Subclass: 80 Interface: 00 Revision: 02 0x002c: Subsystem Vendor ID: 17aa Product ID: 0777 The firmware was downloaded using fw_update. The BIOS has just been updated. bios0: vendor LENOVO version "R02ET70W (1.43 )" date 01/28/2019 bios0: LENOVO 20F5S1FH00 Is there something to do to have it working ? Or is this "Bad luck, switch to an Intel Wireless card" ? Thanks for help. PS: I can provide full dmesg & pcidump if required.
Re: FDE with keydrive imponderabilities
Hi, I wonder if you’re not using fdisk for an MBR setup and disklabel for GPT. Why won’t you use 64 as the starting offset of the RAID partition ? -- Envoyé de mon iPhone > Le 22 mars 2019 à 23:26, Normen Wohner a écrit : > > I thought you might be able to help me with a setup concerning > Full Disk Encryption on OpenBSD 6.4 where I am at my whits end. > I am trying to install on a Sony Vaio VPC P11S1E netbook. > It is a 32-bit x86 machine with an internal SSD and SD card reader. > > During boot of the installer my internal disk shows up as wd0. > I have no Idea why it would be IDE but be that as it may. > Plugging in any USB drive shows as sd0 while the SD card-reader > shows two devices, respectively some controller on sd0 and the > actual drive on sd1. > > I really hope to find anything else I could try. > > What I have tried thus far. > booting into the installer, > once everything is in ramdisk is at the Install > etc. prompt I unplug the boot USB and proceed with: > > (S)hell > > > # dd if=/dev/zero of=/dev/wd0 bs=1m count=8 > to erase previous RAID attempt > > # fdisk -iy wd0 > # disklabel -E wd0 >> z >> a a > offset: [64] 1024 > size: [n] > FS type: [4.2BSD] RAID >> w >> q > returns: 'No label changes.' > > # cd /dev > # sh MAKEDEV sd1 > # sh MAKEDEV sd2 > # cd / > > after that either > Route 1: > plugging in SD card > > # fdisk -iy sd1 > # disklabel -E sd1 >> z >> a a > offset: [64] 1024 > size: [n] 1m > FS type: [4.2BSD] RAID >> w >> q > returns: 'No label changes.' > > # dd if=/dev/random of=/dev/sd1a > > # bioctl -c C -k sd1a -l wd0a softraid0 > returns: 'Error sd1 did not quit correctly' > > > This Error remains consistend between boots, > even after restarting to the Installer > > alternatively > Route 2: > plugging in USB stick > # fdisk -iy sd0 > # disklabel -E sd0 >> z >> a a > offset: [64] 1024 > size: [n] 1m > FS type: [4.2BSD] RAID >> w >> q > returns: No label changes. 
> > # dd if=/dev/random of=/dev/sd0a > > # bioctl -c C -k sd0a -l wd0a softraid0 > returns: softraid0: CRYPTO volume attached as sd2 > #exit > (I)nstall to sd2 > ... > hangs in BIOS after reboot whenever > the Keydrive is plugged in. >
Re: How to monitor class usage/limits?
On Fri 15/03 15:47, Stuart Henderson wrote: > On 2019-03-14, Joel Carnat wrote: > > Hi, > > > > The Internet is full of "OpenBSD desktop works better when raising > > datasize/maxproc/openfiles/stacksize in login.conf". One thing I can't > > manage to find is how to monitor those values. > > > > I'm Ok to set arbitrary recommended values depending on system > > configuration and general use cases (like using Firefox/Chrome etc). But > > I would like to check my currently used values. Like looking at top > > or vmstat to know how much resources I'm actually using. And how often > > the system raises the 75% threshold. > > > > Is there a way to monitor these usage numbers to set adequate limits? > > > > TIA, > > Jo > > > > > > It doesn't show you everything, but you can check memory in 'maximum > resident set size': > > $ \time -l chrome > Thanks Stuart. This is needed for each command I run and want to be monitored, right? Reading the manpage for ps(1) once again, I ended up wondering if that wasn't the answer to my initial question... # ps -ax -o pid,lim,rsz,dsiz,ssiz,tsiz,vsz,command | sed '2,/firefox/d' PID LIM RSZ DSIZ SSIZ TSIZ VSZ COMMAND 69866 5875588 7072 3352 16 32 3400 /usr/local/libexec/gvfsd 74573 5875588 104524 188200 80 196 188476 /usr/local/lib/firefox/firefox (...) 67248 5875588 199444 263132 140 196 263468 /usr/local/lib/firefox/firefox (...) 5430 5875588 215532 291920 164 196 292280 /usr/local/lib/firefox/firefox (...) 59826 5875588 116908 190948 128 196 191272 /usr/local/lib/firefox/firefox (...) Does this indicate the values I'm looking for? Thanks.
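To eyeball totals rather than per-process rows, the same ps output can be summed. A rough sketch only: rss is reported in kilobytes, and keep in mind that the login.conf datasize limit caps each individual process, not the sum:

```shell
# Sum the resident set size (KB) of every running firefox process,
# to compare by eye with the login class's limits.
ps -axo rss=,comm= | awk '
    $2 ~ /firefox/ { sum += $1; n++ }
    END { printf "%d firefox processes, total RSS %.1f MB\n", n, sum/1024 }'
```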
How to monitor class usage/limits?
Hi, The Internet is full of "OpenBSD desktop works better when raising datasize/maxproc/openfiles/stacksize in login.conf". One thing I can't manage to find is how to monitor those values. I'm Ok to set arbitrary recommended values depending on system configuration and general use cases (like using Firefox/Chrome etc). But I would like to check my currently used values. Like looking at top or vmstat to know how much resources I'm actually using. And how often the system raises the 75% threshold. Is there a way to monitor these usage numbers to set adequate limits? TIA, Jo
Are there real mountpoints for gvfs/gio shares ?
Hi, I was looking at mounting CIFS shares. OpenBSD is the "client" machine. The CIFS shares are published by a remote NAS. Using XFCE and Thunar, everything works well. But when I try to access the mountpoints from the console, I just can't find them. Things like "gio mount smb://", "gio mount -l" and "gio copy" work well. I read there should be stuff in ~/.gvfs or /run/user/ on Linux. But I couldn't find anything mounted on such directories on OpenBSD. Is there a way to access the gvfs shares using regular console tools (other than gio) ? Thanks.
Re: ldap search fails with Let's Encrypt certificate
Le 05/11/2018 17:07, Stuart Henderson a écrit : On 2018/11/05 17:02, Joel Carnat wrote: Le 05/11/2018 16:38, Stuart Henderson a écrit : > On 2018-11-05, Joel Carnat wrote: > > Le 05/11/2018 13:48, Stuart Henderson a écrit : > > > On 2018-11-05, Joel Carnat wrote: > > > > Hi, > > > > > > > > I'm using ldap(1) to query a remote Synology Directory Server > > > > (OpenLDAP > > > > 2.4.x). > > > > Unfortunately, it fails saying: > > > >TLS failed: handshake failed: error:14004410:SSL > > > > routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure > > > >ldap: LDAP connection failed > > > > > > > > When I use the OpenLDAP ldapsearch, same arguments, I succeeds. > > > > > > > > Using openssl s_client, I could confirm that the OpenLDAP server > > > > accept > > > > TLS: > > > >New, TLSv1/SSLv3, Cipher is AES256-GCM-SHA384 > > > >Server public key is 2048 bit > > > >Secure Renegotiation IS supported > > > >Compression: NONE > > > >Expansion: NONE > > > >No ALPN negotiated > > > >SSL-Session: > > > >Protocol : TLSv1.2 > > > > (...) > > > > > > If this were a cert problem you'd get a message like this from > > > ldap(1) > > > > > > TLS failed: certificate verification failed: unable to get local > > > issuer certificate > > > ldap: LDAP connection failed > > > > > > or > > > > > > TLS failed: name `XX' not present in server certificate > > > > > > So it's not that. > > > > > > ldap(1) uses libtls which defaults to only allowing secure ciphers, > > > specifically TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE. > > > > > > ldap(1) doesn't provide a way to weaken that, though you could add > > > a call to tls_config_set_ciphers(tls_config, "compat") in > > > ldapc_connect() > > > to test if it would work. 
> > > > > > Or an s_client command that would force these ciphers: > > > > > > openssl s_client -cipher TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE -CAfile > > > /etc/ssl/cert.pem -connect $hostname:636 > > > > > > If not, perhaps the Synology box is using old OpenSSL without support > > > for these ciphers, or perhaps the cipher config is forcing only old > > > ciphers. FWIW this is what I am currently using on OpenBSD slapd: > > > > > > olcTLSCipherSuite: TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE > > > > olcTLSCipherSuite is, by default, empty. > > I could change it to > > "HIGH:+SSLv3:+TLSv1:MEDIUM:+SSLv2:@STRENGTH:+SHA:+MD5:!NULL" which > > doesn't solve the problem. > > When I try to set it as yours, it says: > >dn: cn=config > >changetype: modify > >replace: olcTLSCipherSuite > >olcTLSCipherSuite: TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE > > > >modifying entry "cn=config" > >ldap_modify: Other (e.g., implementation specific) error (80) > > > > From OpenBSD, the openssl commands returns: > > CONNECTED(0003) > > 13559346237984:error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 > > alert handshake failure:/usr/src/lib/libssl/ssl_pkt.c:1200:SSL alert > > number 40 > > 13559346237984:error:140040E5:SSL routines:CONNECT_CR_SRVR_HELLO:ssl > > handshake failure:/usr/src/lib/libssl/ssl_pkt.c:585: > > --- > > no peer certificate available > > --- > > No client certificate CA names sent > > --- > > SSL handshake has read 7 bytes and written 0 bytes > > --- > > New, (NONE), Cipher is (NONE) > > Secure Renegotiation IS NOT supported > > Compression: NONE > > Expansion: NONE > > No ALPN negotiated > > SSL-Session: > > Protocol : TLSv1.2 > > Cipher: > > Session-ID: > > Session-ID-ctx: > > Master-Key: > > Start Time: 1541425938 > > Timeout : 7200 (sec) > > Verify return code: 0 (ok) > > --- > > > > On the syno, I can see: > > # openssl version > > OpenSSL 1.0.2o-fips 27 Mar 2018 > > # openssl ciphers -v TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE > > Error in cipher list > > 
139812538357392:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no > > cipher match:ssl_lib.c:1383: > > > > Does this definitively indicates "ldap search" won't work with > >
Re: ldap search fails with Let's Encrypt certificate
Le 05/11/2018 16:38, Stuart Henderson a écrit : On 2018-11-05, Joel Carnat wrote: Le 05/11/2018 13:48, Stuart Henderson a écrit : On 2018-11-05, Joel Carnat wrote: Hi, I'm using ldap(1) to query a remote Synology Directory Server (OpenLDAP 2.4.x). Unfortunately, it fails saying: TLS failed: handshake failed: error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure ldap: LDAP connection failed When I use the OpenLDAP ldapsearch, same arguments, I succeeds. Using openssl s_client, I could confirm that the OpenLDAP server accept TLS: New, TLSv1/SSLv3, Cipher is AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 (...) If this were a cert problem you'd get a message like this from ldap(1) TLS failed: certificate verification failed: unable to get local issuer certificate ldap: LDAP connection failed or TLS failed: name `XX' not present in server certificate So it's not that. ldap(1) uses libtls which defaults to only allowing secure ciphers, specifically TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE. ldap(1) doesn't provide a way to weaken that, though you could add a call to tls_config_set_ciphers(tls_config, "compat") in ldapc_connect() to test if it would work. Or an s_client command that would force these ciphers: openssl s_client -cipher TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE -CAfile /etc/ssl/cert.pem -connect $hostname:636 If not, perhaps the Synology box is using old OpenSSL without support for these ciphers, or perhaps the cipher config is forcing only old ciphers. FWIW this is what I am currently using on OpenBSD slapd: olcTLSCipherSuite: TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE olcTLSCipherSuite is, by default, empty. I could change it to "HIGH:+SSLv3:+TLSv1:MEDIUM:+SSLv2:@STRENGTH:+SHA:+MD5:!NULL" which doesn't solve the problem. 
When I try to set it as yours, it says: dn: cn=config changetype: modify replace: olcTLSCipherSuite olcTLSCipherSuite: TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE modifying entry "cn=config" ldap_modify: Other (e.g., implementation specific) error (80) From OpenBSD, the openssl commands returns: CONNECTED(0003) 13559346237984:error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure:/usr/src/lib/libssl/ssl_pkt.c:1200:SSL alert number 40 13559346237984:error:140040E5:SSL routines:CONNECT_CR_SRVR_HELLO:ssl handshake failure:/usr/src/lib/libssl/ssl_pkt.c:585: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher: Session-ID: Session-ID-ctx: Master-Key: Start Time: 1541425938 Timeout : 7200 (sec) Verify return code: 0 (ok) --- On the syno, I can see: # openssl version OpenSSL 1.0.2o-fips 27 Mar 2018 # openssl ciphers -v TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE Error in cipher list 139812538357392:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl_lib.c:1383: Does this definitively indicates "ldap search" won't work with OpenLDAP/OpenSSL shipped in Synology DSM ? Oh, I see this cipher list syntax wasn't available in 1.0.x, to check you'll need to expand it (on libressl or openssl 1.1) and pass the whole string in. e.g. try this openssl ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256 and see which if any are available with their 1.0.2o-fips build. If there's no common cipher then "ldap search" can't work with TLS without patching. 
This gives: # openssl ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256
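Comparing the two cipher lists by eye can be scripted. A small sketch (cipher_intersection is a made-up helper, not an existing tool), fed with the output of `openssl ciphers` from each machine:

```shell
# cipher_intersection LIST1 LIST2
# Print, one per line, the ciphers present in both colon-separated lists.
cipher_intersection() {
    printf '%s\n' "$1" | tr ':' '\n' | sort > "${TMPDIR:-/tmp}/ci1.$$"
    printf '%s\n' "$2" | tr ':' '\n' | sort > "${TMPDIR:-/tmp}/ci2.$$"
    comm -12 "${TMPDIR:-/tmp}/ci1.$$" "${TMPDIR:-/tmp}/ci2.$$"
    rm -f "${TMPDIR:-/tmp}/ci1.$$" "${TMPDIR:-/tmp}/ci2.$$"
}

# Example with abbreviated lists: what libtls insists on vs. what the
# remote build offers.
cipher_intersection \
    "ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384" \
    "ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384"
# -> prints ECDHE-RSA-AES256-GCM-SHA384
```

If the intersection is empty, the handshake failure is fully explained; here the lists above show common AEAD ciphers do exist on the Synology side.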
Re: ldap search fails with Let's Encrypt certificate
Le 05/11/2018 13:48, Stuart Henderson a écrit : On 2018-11-05, Joel Carnat wrote: Hi, I'm using ldap(1) to query a remote Synology Directory Server (OpenLDAP 2.4.x). Unfortunately, it fails saying: TLS failed: handshake failed: error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure ldap: LDAP connection failed When I use the OpenLDAP ldapsearch, same arguments, I succeeds. Using openssl s_client, I could confirm that the OpenLDAP server accept TLS: New, TLSv1/SSLv3, Cipher is AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 (...) If this were a cert problem you'd get a message like this from ldap(1) TLS failed: certificate verification failed: unable to get local issuer certificate ldap: LDAP connection failed or TLS failed: name `XX' not present in server certificate So it's not that. ldap(1) uses libtls which defaults to only allowing secure ciphers, specifically TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE. ldap(1) doesn't provide a way to weaken that, though you could add a call to tls_config_set_ciphers(tls_config, "compat") in ldapc_connect() to test if it would work. Or an s_client command that would force these ciphers: openssl s_client -cipher TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE -CAfile /etc/ssl/cert.pem -connect $hostname:636 If not, perhaps the Synology box is using old OpenSSL without support for these ciphers, or perhaps the cipher config is forcing only old ciphers. FWIW this is what I am currently using on OpenBSD slapd: olcTLSCipherSuite: TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE olcTLSCipherSuite is, by default, empty. I could change it to "HIGH:+SSLv3:+TLSv1:MEDIUM:+SSLv2:@STRENGTH:+SHA:+MD5:!NULL" which doesn't solve the problem. 
When I try to set it as yours, it says: dn: cn=config changetype: modify replace: olcTLSCipherSuite olcTLSCipherSuite: TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE modifying entry "cn=config" ldap_modify: Other (e.g., implementation specific) error (80) From OpenBSD, the openssl commands returns: CONNECTED(0003) 13559346237984:error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure:/usr/src/lib/libssl/ssl_pkt.c:1200:SSL alert number 40 13559346237984:error:140040E5:SSL routines:CONNECT_CR_SRVR_HELLO:ssl handshake failure:/usr/src/lib/libssl/ssl_pkt.c:585: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher: Session-ID: Session-ID-ctx: Master-Key: Start Time: 1541425938 Timeout : 7200 (sec) Verify return code: 0 (ok) --- On the syno, I can see: # openssl version OpenSSL 1.0.2o-fips 27 Mar 2018 # openssl ciphers -v TLSv1.2+AEAD+ECDHE:TLSv1.2+AEAD+DHE Error in cipher list 139812538357392:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl_lib.c:1383: Does this definitively indicates "ldap search" won't work with OpenLDAP/OpenSSL shipped in Synology DSM ?
ldap search fails with Let's Encrypt certificate
Hi, I'm using ldap(1) to query a remote Synology Directory Server (OpenLDAP 2.4.x). Unfortunately, it fails saying: TLS failed: handshake failed: error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure ldap: LDAP connection failed When I use the OpenLDAP ldapsearch, same arguments, it succeeds. Using openssl s_client, I could confirm that the OpenLDAP server accepts TLS: New, TLSv1/SSLv3, Cipher is AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 (...) Looking inside /etc/ssl/cert.pem, I could find "/O=Digital Signature Trust Co./CN=DST Root CA X3". Which is part of the Let's Encrypt certificate chain. Is this a known issue or am I missing something? Thanks.
Inconsistent stats between snmpd(8) and pfctl(8) ?
Hi, On OpenBSD 6.3/amd64, I'm using snmpd(8) to gather pf(4) statistics. It seems that some stats are not consistent. For example, on the egress and vio0 interfaces. Asking snmpd(8), I get : OPENBSD-PF-MIB::pfIfDescr.3 = STRING: "egress" OPENBSD-PF-MIB::pfIfDescr.12 = STRING: "vio0" OPENBSD-PF-MIB::pfIfType.3 = INTEGER: group(0) OPENBSD-PF-MIB::pfIfType.12 = INTEGER: instance(1) OPENBSD-PF-MIB::pfIfRules.3 = Gauge32: 12 OPENBSD-PF-MIB::pfIfRules.12 = Gauge32: 1 Asking pfctl(8), I get : # pfctl -s rules | grep -c egress 8 # pfctl -s rules | grep -c vio0 0 According to the MIB, pfIfRules is "The number of rules which reference the interface." Am I wrong to expect the numbers to be the same ? Thank you.
Re: net-snmpd extend and doas : a tty is required
> Le 12 avr. 2018 à 21:10, Stuart Henderson a écrit : > > On 2018-04-12, Joel Carnat wrote: >> Hi, >> >> I want net-snmpd to run a script via the extend directive. >> This script has to run a command using doas to get temporary root >> permission. >> >> The script is run on snmpcmd call but the doas command returns: >> doas: a tty is required >> >> Is there a way to run doas from net-snmpd ? >> I already have doas running from collectd-exec without issues. >> >> Thanks. >> >> # More infos on configuration and commands >> >> # grep extend /etc/snmp/snmpd.conf >> extend test /home/scripts/test.sh >> >> # grep snmpd /etc/doas.conf >> permit nopass _snmpd as root > > Net-SNMP runs as _netsnmp, but you're giving nopass access to _snmpd > (base snmpd's uid, which doesn't execute anything anyway). Of course… Using "permit nopass _netsnmp as root" makes it run as expected. Thanks a lot!
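As a follow-up, the corrected rule could also be narrowed so that _netsnmp may only run the single command the extend script needs, rather than anything as root. A sketch of doas.conf(5), assuming the ls /bsd example from the test script:

```
# /etc/doas.conf -- net-snmpd runs as _netsnmp, not _snmpd
permit nopass _netsnmp as root cmd /bin/ls args /bsd
```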
net-snmpd extend and doas : a tty is required
Hi,

I want net-snmpd to run a script via the extend directive. This script has to run a command using doas to get temporary root permission.

The script is run on snmpcmd call but the doas command returns:

doas: a tty is required

Is there a way to run doas from net-snmpd? I already have doas running from collectd-exec without issues.

Thanks.

# More info on configuration and commands

# grep extend /etc/snmp/snmpd.conf
extend test /home/scripts/test.sh

# grep snmpd /etc/doas.conf
permit nopass _snmpd as root

# userinfo _netsnmp
login   _netsnmp
passwd  *
uid     760
groups  _netsnmp
change  NEVER
class   daemon
gecos   Net-SNMP user
dir     /nonexistent
shell   /sbin/nologin
expire  NEVER

# cat /home/scripts/test.sh
#!/usr/bin/env ksh
PATH="/bin:/sbin:/usr/bin:/usr/sbin"
echo ligne 1
echo ligne 2
doas -u root ls /bsd
exit 0

# snmpwalk -v 2c -c secret 10.0.0.7 .1.3.6.1.4.1.8072.1.3.2.4.1.2.4.116.101.115.116
NET-SNMP-EXTEND-MIB::nsExtendOutLine."test".1 = STRING: ligne 1
NET-SNMP-EXTEND-MIB::nsExtendOutLine."test".2 = STRING: ligne 2
NET-SNMP-EXTEND-MIB::nsExtendOutLine."test".3 = STRING: doas: a tty is required
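As the reply above points out, net-snmp's extend scripts execute as the _netsnmp user, not _snmpd, so the doas rule must name _netsnmp. A minimal sketch of the corrected doas.conf (nothing beyond the fix from the thread):

```
# /etc/doas.conf -- net-snmp runs extend scripts as _netsnmp,
# so that is the user that needs passwordless doas.
permit nopass _netsnmp as root
```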
Re: OpenBSD as an IKEv2 IPsec client with L/P authent
Hi,

Le 22/02/2018 09:35, Stuart Henderson a écrit :

> On 2018-02-22, Igor V. Gubenko wrote:
>> I am far from an expert; having issues myself at the moment, but maybe
>> if we get all of the iked experimenters together, we can figure it out :)
>
> This definitely isn't going to work, iked only supports username/password
> authentication as a responder, not initiator.

Is there any software that enables OpenBSD to be an IPsec initiator using user/pass?

Thanks.
OpenBSD as an IKEv2 IPsec client with L/P authent
Hi,

My FTTH home-box provides IKEv2 server support. I connected my iPhone, via 3G, to it; I can now access my internal home LAN, so I know it works. I want to do the same with an OpenBSD server hosted in "the Cloud"; in transport mode, as far as I understood the docs. I've struggled with ipsec.conf(5), ipsecctl(8) and iked(8) for a couple of hours now but I can't connect OpenBSD to the box.

The home-box is using IKEv2 and User/Password authentication mode. The OpenBSD machine is 6.2/amd64. I have configured iked.conf(5) like this:

ikev2 active esp \
        from egress to 192.168.0.0/24 \
        peer 78.192.10.15

And running iked(8) goes:

# iked -dv
set_policy: could not find pubkey for /etc/iked/pubkeys/ipv4/78.192.10.15
ikev2 "policy1" active esp inet from 108.61.176.54 to 192.168.0.0/24 local any peer 78.192.10.15 ikesa enc aes-256,aes-192,aes-128,3des prf hmac-sha2-256,hmac-sha1 auth hmac-sha2-256,hmac-sha1 group modp2048,modp1536,modp1024 childsa enc aes-256,aes-192,aes-128 auth hmac-sha2-256,hmac-sha1 lifetime 10800 bytes 536870912 rfc7427
ikev2_msg_send: IKE_SA_INIT request from 0.0.0.0:500 to 78.192.10.15:500 msgid 0, 510 bytes
ikev2_recv: IKE_SA_INIT response from responder 78.192.10.15:500 to 108.61.176.54:500 policy 'policy1' id 0, 456 bytes

And that's all :(

Is there a way to use l/p authentication with iked(8)? Or am I just not using the right software? In which case, what would the proper tool be?

Thanks for help.
Re: iPhone tethering ?
The iPhone can be configured as a wireless AP. Then OpenBSD can connect to it and gain access to the Wild Wild World.

--
Sent from my iPhone

> Le 23 oct. 2017 à 07:58, SFM a écrit :
>
> Hi everyone!
>
> Does iPhone tethering work with OpenBSD? In other words, is there an
> equivalent or alternative to FreeBSD & DragonFlyBSD's usbmuxd in OpenBSD?
> The only thread about "tethering" that I found in the mailing list
> archives is about a Palm Treo.
>
> I will be thankful for any advice!
>
> RS.-
Re: rsa 4096 or ed25519 for ssh keys ?
Le 16/10/2017 19:46, Mike Coddington a écrit :

> On Mon, Oct 16, 2017 at 05:29:34PM +0200, Joel Carnat wrote:
>> Hi, If both server and client are ed25519-compatible, when generating
>> (user) SSH keys, is it recommended to use ed25519 rather than 4096-bit RSA?
>
> AFAIK, either would be fine. I believe ED25519 is more CPU-intensive, so
> if that's a factor then stick with RSA. I like ED25519 personally because
> the keys are small and my CPUs can all handle the workload.

Ah? I read that ed25519 uses smaller keys and is therefore easier on the CPU. Some write that ed25519 allows faster sftp/scp transfers thanks to that.
rsa 4096 or ed25519 for ssh keys ?
Hi,

If both server and client are ed25519-compatible, when generating (user) SSH keys, is it recommended to use ed25519 rather than 4096-bit RSA?

Thank you.
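For reference, here is how generating both key types non-interactively looks; a sketch where the output paths and comment string are examples, not anything from the thread. It also shows one practical difference raised in the replies: the ed25519 public key is much shorter.

```shell
# Generate an ed25519 key and a 4096-bit RSA key side by side.
# -N "" sets an empty passphrase, -f the output path (example paths).
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub /tmp/demo_rsa /tmp/demo_rsa.pub
ssh-keygen -t ed25519 -N "" -C "demo" -f /tmp/demo_ed25519
ssh-keygen -t rsa -b 4096 -N "" -C "demo" -f /tmp/demo_rsa

# Compare public key sizes: the ed25519 one is far smaller.
wc -c /tmp/demo_ed25519.pub /tmp/demo_rsa.pub
```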
Re: softraid crypto seem really slower than plain ffs
Hello,

I was really annoyed by the numbers I got, so I did the testing again, using a brand new VM, being really careful about what I was doing and writing it down after each command run. I did the testing on 6.1 and 6.2-current, in case there had been some changes. There weren't.

First of all, there isn't a 10x difference between PLAIN and ENCRYPTED. I believe I had mixed up numbers from my various tests. I also believe Cloud providers don't/can't guarantee disk throughput: I noticed variations from 1 to 4 on the same VM between two days, whatever the OS was.

In the end, there only seems to be a 1.5x difference between PLAIN and ENCRYPTED. And according to iostat, when writing on the encrypted partition (sd1a), I/O already happens on the plain partition (sd0a).

# disklabel sd0
(...)
  a:  52420031        64    RAID
  c:  52428800         0  unused

# disklabel sd1
(...)
  a:  48194944   4209056  4.2BSD   2048 16384 12958 # /
  b:   4208966        64    swap                    # none
  c:  52419503         0  unused

# iostat -w 1 sd0 sd1
      tty            sd0              sd1             cpu
 tin tout  KB/t  t/s   MB/s   KB/t  t/s   MB/s  us ni sy in id
   0   61 16.00 5180  80.94  16.00 5180  80.94   1  0 91  8  0
   0  184 16.00 4594  71.78  16.00 4594  71.78   0  0 95  5  0
   0   61 16.00 5126  80.09  16.00 5126  80.09   1  0 95  4  0
   0   61 16.00 5014  78.34  16.00 5012  78.31   0  0 94  6  0
(...)

Regards.

Le 18/09/2017 09:40, Stefan Sperling a écrit :

> On Sun, Sep 17, 2017 at 07:32:49PM +0100, Kevin Chadwick wrote:
>> I'm not a developer but I know 6.1 moved to a shiny new side channel
>> resistant AES. I seem to remember Theo saying that if it is that slow
>> then even worse; people won't use encryption at all and if they need
>> side channel resistance then they could get a processor with AES-NI
>> etc.. Not sure if it was reverted in the end or not.
>
> It was reverted.
softraid crypto seem really slower than plain ffs
Hi,

Initially comparing I/O speed between FreeBSD/ZFS/GELI and OpenBSD/FFS/CRYPTO, I noticed that there was a huge difference between plain and encrypted filesystems using OpenBSD.

I ran the test on a 1 vCore/1GB RAM Vultr VPS, running OpenBSD 6.2-beta. I had / configured in plain FFS and /home encrypted using bioctl(8). Then I ran a few `dd` and `bonnie++`.

According to those tests, writing to FFS/CRYPTO is about 10 times slower than to FFS/PLAIN. For the record, using the same `dd` on FreeBSD, ZFS with GELI is only 2 times slower than plain ZFS. Furthermore, comparing FreeBSD/ZFS/PLAIN and OpenBSD/FFS/PLAIN, the speed is about the same. Finally, it seems reading OpenBSD/FFS/PLAIN and OpenBSD/FFS/CRYPTO is done at the same speed.

Is it expected to have so much difference between FFS/PLAIN and FFS/CRYPTO when writing data?

TIA,
Jo

PS: here's my test data.

# sysctl kern.version hw.machine hw.model hw.ncpu hw.physmem
kern.version=OpenBSD 6.2-beta (GENERIC) #91: Wed Sep 13 22:05:17 MDT 2017
    dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
hw.machine=amd64
hw.model=Virtual CPU a7769a6388d5
hw.ncpu=1
hw.physmem=1056817152

# disklabel sd0
# /dev/rsd0c:
type: SCSI
disk: SCSI disk
label: Block Device
duid: 69939b6a66c3879a
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 3263
total sectors: 52428800
boundstart: 64
boundend: 52420095
drivedata: 0

16 partitions:
#        size    offset  fstype [fsize bsize cpg]
  a:  16739680  35680384  4.2BSD   2048 16384 12958 # /
  b:   4208966        64    swap                    # none
  c:  52428800         0  unused
  d:  31471335   4209030    RAID

# disklabel sd1
# /dev/rsd1c:
type: SCSI
disk: SCSI disk
label: SR CRYPTO
duid: 4179a9e67beb3d4e
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 1958
total sectors: 31470807
boundstart: 64
boundend: 31455270
drivedata: 0

16 partitions:
#        size    offset  fstype [fsize bsize cpg]
  c:  31470807         0  unused
  e:    273024        64  4.2BSD   2048 16384  2133 # /etc
  h:  31182176    273088  4.2BSD   2048 16384 12958 # /home

# mount
/dev/sd0a on / type ffs (local, wxallowed)
/dev/sd1e on /etc type ffs (local, softdep)
/dev/sd1h on /home type ffs (local, nodev, nosuid)

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd0a      7.9G    915M    6.6G    12%    /
/dev/sd1e      131M    4.9M    120M     4%    /etc
/dev/sd1h     14.6G    2.0K   13.9G     0%    /home

# sync && time dd if=/dev/zero of=/TEST bs=512 count=300 && sync
300+0 records in
300+0 records out
153600 bytes transferred in 8.567 secs (179278802 bytes/sec)
    0m08.61s real     0m00.29s user     0m07.70s system

# sync && time dd if=/dev/zero of=/home/TEST bs=512 count=300 && sync
300+0 records in
300+0 records out
153600 bytes transferred in 20.875 secs (73580525 bytes/sec)
    0m20.88s real     0m00.42s user     0m05.54s system

# sync && time dd if=/dev/zero of=/TEST bs=4k count=30 && sync
30+0 records in
30+0 records out
122880 bytes transferred in 4.151 secs (296024071 bytes/sec)
    0m04.19s real     0m00.04s user     0m04.01s system

# sync && time dd if=/dev/zero of=/home/TEST bs=4k count=30 && sync
30+0 records in
30+0 records out
122880 bytes transferred in 22.872 secs (53723676 bytes/sec)
    0m22.95s real     0m00.06s user     0m01.89s system

# sync && time dd if=/dev/zero of=/TEST bs=8k count=15 && sync
15+0 records in
15+0 records out
122880 bytes transferred in 4.088 secs (300571699 bytes/sec)
    0m04.12s real     0m00.05s user     0m03.93s system

# sync && time dd if=/dev/zero of=/home/TEST bs=8k count=15 && sync
15+0 records in
15+0 records out
122880 bytes transferred in 21.418 secs (57372236 bytes/sec)
    0m21.48s real     0m00.05s user     0m01.72s system

# time dd if=/TEST of=/dev/null
240+0 records in
240+0 records out
122880 bytes transferred in 12.327 secs (99677812 bytes/sec)
    0m12.33s real     0m00.39s user     0m03.62s system

# time dd if=/home/TEST of=/dev/null
240+0 records in
240+0 records out
122880 bytes transferred in 12.802 secs (95979204 bytes/sec)
    0m12.80s real     0m00.29s user     0m02.87s system

# time dd if=/TEST of=/dev/null bs=512
240+0 records in
240+0 records out
122880 bytes transferred in 12.888 secs (95337724 bytes/sec)
    0m12.89s real     0m00.29s user     0m03.41s system

# time dd if=/home/TEST of=/dev/null bs=512
240+0 records in
240+0 records out
122880 bytes transferred in 13.951 secs (88076531 bytes/sec)
    0m13.95s real     0m00.24s user     0m02.61s system

# time dd if=/TEST of=/dev/null bs=4k
30+0 records in
30+0 records out
122880 byte
i386 or amd64 from small Cloud instance ?
Hi,

My Cloud instances are always small (1 or 2 vCPU, far less than 4GB of RAM). From what I saw, all the ports I need are available on both i386 and amd64. Every Cloud provider I checked uses the KVM hypervisor.

Regarding OS and ports performance, does it make sense to use i386 rather than amd64? Or is amd64 somehow better even on small configurations?

Thanks,
Jo
OpenBSD on XPS M1330, sound and HTML video issues
Hi,

I have installed OpenBSD 5.7/amd64 on my "old" Dell XPS M1330. Everything seems right except sound, which only works with headphones and not the internal speakers, and HTML5 videos, which are very choppy (things like YouTube videos). I've read about those issues but couldn't solve them from what I found.

I've installed FreeBSD 10.2 and Linux Mint on that box to check whether this could be hardware related. FreeBSD has the sound issue but not the video one. Linux Mint didn't have any issue. I have compared Firefox configurations but nothing popped up; acceleration parameters etc. seem the same on OpenBSD and the other OSes.

I've dumped a bunch of outputs there: https://www.dropbox.com/sh/ag1xngy7pks1hb2/AABBQe_YKoSFa4hoYuhiWuara?dl=0

Any thoughts? Anyone who actually displays YouTube videos from Firefox without using the youtube-dl stuff?

Thanks,
Jo
Re: Windows Server on Qemu
> Le 13 août 2015 à 08:41, Mike Larkin a écrit :
>
> On Wed, Aug 12, 2015 at 06:40:33PM -0700, Mike Larkin wrote:
>> On Wed, Aug 12, 2015 at 10:00:49PM +0200, Joel Carnat wrote:
>>> Hi,
>>>
>>> Anyone here succeeded in having Windows Server 2008/2008R2/2012/2012R2
>>> run in qemu-2.2.0 (OpenBSD 5.7/amd64)?
>>>
>>> Mine keeps going BSOD on installation. Most of the documentation I
>>> found was Linux-centric so I may miss some OpenBSD trick.
>>>
>>> TIA,
>>> Jo
>>
>> I just installed Server 2008 datacenter without any issues.
>>
>> I'll try some other versions later.
>>
>> -ml
>
> Server 2008 datacenter 32 bit installed fine.

Ah. I only tried 64 bit versions.

> Any later version requires 64 bit and doesn't work on TCG (unaccelerated)
> qemu. This is a qemu bug, not an OpenBSD bug.
>
> Apparently with a couple of diffs floating around on the qemu mailing
> list, you can at least get past the 5D BSOD, but you just end up
> getting whacked by PatchGuard after a few minutes due to other bugs
> in qemu. And then someone fixed^Whacked around that issue and got
> further, but then broke app compatibility in some cases.

Yep, that's what I read too. I was hoping there was good news I hadn't found.

> See:
>
> http://lists.gnu.org/archive/html/qemu-devel/2014-08/msg02161.html
>
> and
>
> http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg01412.html
>
> and
>
> http://lists.gnu.org/archive/html/qemu-devel/2015-07/msg03729.html
>
> I'm not sure what you were after, but if you just need "any Windows
> server", 32 bit server 2008 runs fine (albeit very slowly, like 25%
> native speed).

Well, I'm just thinking of replacing my ESXi with an OpenBSD server running Qemu instances. It's not for production purposes; just trying/checking a few things on recent MS software, from time to time. Speed wouldn't be an issue; I just need them to work.

Regards,
Jo
Windows Server on Qemu
Hi,

Anyone here succeeded in having Windows Server 2008/2008R2/2012/2012R2 run in qemu-2.2.0 (OpenBSD 5.7/amd64)?

Mine keeps going BSOD on installation. Most of the documentation I found was Linux-centric so I may miss some OpenBSD trick.

TIA,
Jo
Which tools to monitor traffic and alert ?
Hi,

I run several standard services (Web, Mail, DNS, …) and have configured Munin to graph traffic and see what happened. I was wondering what the usual OpenBSD way is for proactive/real-time traffic monitoring and alerting. That is, which software would, for example, read the HTTPD logs and alert if requests/sec from the same IP go over 50?

Looking at the ports, I saw « snort » but I was wondering if there were lighter tools for such tasks.

Thanks,
Jo
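Lacking a dedicated tool, the kind of check described above can be approximated with base utilities. A sketch, fed here with inline sample lines so the pipeline itself is the point; the log format and field position are assumptions, with a real httpd access log you would adjust the field the client IP sits in:

```shell
# Count requests per client IP. With a real log you would replace the
# printf with e.g. `awk '{print $1}' /var/www/logs/access.log` (the
# path and the field number depend on your setup -- assumptions here).
printf '%s\n' \
    '10.0.0.1 - - [23/Jun/2015] "GET /  HTTP/1.1" 200' \
    '10.0.0.2 - - [23/Jun/2015] "GET /  HTTP/1.1" 200' \
    '10.0.0.1 - - [23/Jun/2015] "GET /a HTTP/1.1" 200' |
    awk '{count[$1]++} END {for (ip in count) print count[ip], ip}' |
    sort -rn
# prints "2 10.0.0.1" then "1 10.0.0.2"
```

Piping that through something like `awk '$1 > 50'` would give the alert threshold mentioned above.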
Re: sogo, httpd(8) and the rewrite need
> Le 15 juin 2015 à 01:16, Reyk Floeter a écrit :
>
>> On 14.06.2015, at 18:08, Joel Carnat wrote:
>>
>> Hi,
>>
>> I was going to install SOGo on OpenBSD 5.7 using the native httpd(8).
>> In the readme, there are configuration examples for nginx and
>> apache-httpd-openbsd. Nothing for the new httpd. There are
>> rewrite/redirect features that I can't figure out how to set up
>> with httpd(8).
>>
>> nginx example:
>>
>> location = /principals/ {
>>     rewrite ^ http://$server_name/SOGo/dav;
>>     allow all;
>> }
>>
>> apache-httpd-openbsd example:
>>
>> RedirectMatch ^/principals/$ http://127.0.0.1:8800/SOGo/dav/
>>
>> Is it possible to achieve such a feature with httpd and/or relayd?
>
> Kind of. You could try something like:
>
> location "/principals/" {
>     block return 301 "http://$SERVER_NAME/SOGo/dav/";
> }
>
> Replace $SERVER_NAME with the IP, or add $SERVER_PORT, if required.

OK, thank you. Now, part 2; is there a way to mimic the ProxyPass primitive?

nginx example:

server {
    listen 80;
    location ^~/SOGo {
        proxy_pass http://127.0.0.1:2;

apache-httpd-openbsd example:

ProxyPass /SOGo http://127.0.0.1:2/SOGo
ProxyPassReverse /SOGo http://127.0.0.1:2/SOGo

That is, the browser thinks it is talking to httpd(8) on https://public.ip:443/ but the daemon forwards to the internal hidden process.

Thanks.
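For the ProxyPass part, relayd(8) can sit in front and forward to a local backend. A minimal, untested sketch; the public address and the backend port are placeholders (the real SOGo port was truncated in this archive), and the header line is just one common extra, not a requirement:

```
# relayd.conf sketch (assumptions: addresses and port are placeholders)
ext_addr="203.0.113.1"
table <sogo> { 127.0.0.1 }

http protocol "sogoproxy" {
        # let the backend see the original client address
        match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
}

relay "sogo_proxy" {
        listen on $ext_addr port 80
        protocol "sogoproxy"
        forward to <sogo> port 20000
}
```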
sogo, httpd(8) and the rewrite need
Hi,

I was going to install SOGo on OpenBSD 5.7 using the native httpd(8). In the readme, there are configuration examples for nginx and apache-httpd-openbsd. Nothing for the new httpd. There are rewrite/redirect features that I can't figure out how to set up with httpd(8).

nginx example:

location = /principals/ {
    rewrite ^ http://$server_name/SOGo/dav;
    allow all;
}

apache-httpd-openbsd example:

RedirectMatch ^/principals/$ http://127.0.0.1:8800/SOGo/dav/

Is it possible to achieve such a feature with httpd and/or relayd?

Thanks.
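Per the reply elsewhere in this thread, the rewrite can be expressed as a 301 in httpd.conf. A sketch with the suggestion placed in a full server block for context (the server name and listen address are examples):

```
# httpd.conf sketch -- untested; replaces the nginx rewrite with a
# redirect, as suggested in the reply in this thread.
server "default" {
        listen on egress port 80
        location "/principals/" {
                block return 301 "http://$SERVER_NAME/SOGo/dav/"
        }
}
```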
index.php not loading on obsd 5.6
Hi,

I just installed 5.6 amd64 on a virtual machine. I installed php-fpm-5.5.14 and launched the daemon. I configured httpd as such:

# egrep -v '^$|^#' /etc/httpd.conf
ext_addr="egress"
server "default" {
        listen on $ext_addr port 80
        directory { no index, index "index.html", index "index.php" }
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
        }
}

Then I started httpd. When I browse to http://host/index.php, the file is interpreted and displayed. When I browse to http://host/, the file is downloaded.

What am I missing to get PHP index files served automatically?

TIA,
Jo
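One possible culprit, offered as an assumption rather than a confirmed answer: httpd(8)'s `directory index` takes a single file name, so stacking `no index` and several `index` keywords in one directory block does not build a fallback list. A sketch that names index.php directly:

```
# /etc/httpd.conf sketch -- assumption: one index name only, so point
# httpd at index.php instead of listing several candidates.
ext_addr="egress"
server "default" {
        listen on $ext_addr port 80
        directory index "index.php"
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
        }
}
```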
Native ldapd and ldappasswd
Hi,

I am configuring native ldapd (OBSD 5.4) for user authentication. But it seems I can't use ldappasswd to modify a userPassword.

Here's how the object is configured:

# ldapsearch -H ldap://localhost -D "cn=admin,dc=local" -w vierge -b "dc=local" "cn=email"
(...)
# email, users, local
dn: cn=email,ou=users,dc=local
objectClass: top
objectClass: person
cn: email
sn: Account used for e-mail services
userPassword:: dmllcmdl

Here's the command I use to modify the password:

# ldappasswd -H ldap://localhost -D "cn=admin,dc=local" -w vierge -S "cn=email,ou=users,dc=local"
New password:
Re-enter new password:
Result: Protocol error (2)

On the daemon side, I get:

(...)
Feb 28 12:13:49.203 [18750] accepted connection from 127.0.0.1 on fd 12
Feb 28 12:13:49.204 [18750] consumed 37 bytes
Feb 28 12:13:49.204 [18750] got request type 0, id 1
Feb 28 12:13:49.204 [18750] bind dn = cn=admin,dc=local
Feb 28 12:13:49.204 [18750] successfully authenticated as cn=admin,dc=local
Feb 28 12:13:49.204 [18750] sending response 1 with result 0
Feb 28 12:13:49.204 [18750] consumed 71 bytes
Feb 28 12:13:49.204 [18750] got request type 23, id 2
Feb 28 12:13:49.204 [18750] got extended operation 1.3.6.1.4.1.4203.1.11.1
Feb 28 12:13:49.204 [18750] unimplemented extended operation 1.3.6.1.4.1.4203.1.11.1
Feb 28 12:13:49.204 [18750] sending response 24 with result 2
Feb 28 12:13:49.204 [18750] consumed 7 bytes
Feb 28 12:13:49.204 [18750] got request type 2, id 3
Feb 28 12:13:49.204 [18750] current bind dn = cn=admin,dc=local
Feb 28 12:13:49.204 [18750] end-of-file on connection 12
Feb 28 12:13:49.204 [18750] closing connection 12
(...)

If I run this command:

# ldapmodify -H ldap://localhost -D "cn=admin,dc=local" -w vierge
dn: cn=email,ou=users,dc=local
changetype: modify
replace: userPassword
userPassword: newP4ss

modifying entry "cn=email,ou=users,dc=local"

then the userPassword is properly changed.

Isn't it possible to use ldappasswd for such an operation? Or am I just misusing it?

TIA,
Jo
Re: Generate hashed rootpw for native ldapd
Yep, that works! Thanks :)

Le 21 févr. 2014 à 13:41, Abel Abraham Camarillo Ojeda a écrit :

> try not including newline:
>
> $ echo -n passphrase | openssl dgst -sha1 -binary | openssl enc -base64 | awk '{print "{SHA}"$0}'
> {SHA}YhAnRDQFLyD8uD4dD0kiBPyxGIQ=
> $
>
> On Fri, Feb 21, 2014 at 6:31 AM, Joel Carnat wrote:
>> Hum, I tried it but it doesn't work.
>>
>> I have slappasswd elsewhere to test. And here's what I get:
>>
>> # print passphrase | openssl dgst -sha1 -binary | openssl enc -base64 | awk '{print "{SHA}"$0}'
>> {SHA}ZLvhLmLU88dUQwzfUgsq6IV8ZRE=
>> # echo passphrase | openssl dgst -sha1 -binary | openssl enc -base64 | awk '{print "{SHA}"$0}'
>> {SHA}ZLvhLmLU88dUQwzfUgsq6IV8ZRE=
>> # slappasswd -h {SHA} -s passphrase
>> {SHA}YhAnRDQFLyD8uD4dD0kiBPyxGIQ=
>>
>> Using the string generated with "slappasswd" works. The other two don't :(
>>
>> Le 21 févr. 2014 à 13:18, Marcus MERIGHI a écrit :
>>
>>> j...@carnat.net (Joel Carnat), 2014.02.21 (Fri) 12:09 (CET):
>>>> I want to generate a hashed rootpw for native ldapd (on OBSD 5.4).
>>>> I've tried various things like `echo secret | sha256` but I can't
>>>> authenticate.
>>>>
>>>> If possible, I'd like not to install openldap-server just to get
>>>> slappasswd.
>>>>
>>>> What is the (native) way to generate the "SSHA" hashed format for rootpw?
>>>
>>> ``What are {SHA} and {SSHA} passwords and how do I generate them?''
>>> http://www.openldap.org/faq/data/cache/347.html
>>>
>>> Easiest way there seems to be:
>>>
>>> print "passphrase" | openssl dgst -sha1 -binary | \
>>>     openssl enc -base64 | awk '{print "{SHA}"$0}'
>>>
>>> No way to test here...
>>>
>>> Bye, Marcus
Re: Generate hashed rootpw for native ldapd
Hum, I tried it but it doesn't work.

I have slappasswd elsewhere to test. And here's what I get:

# print passphrase | openssl dgst -sha1 -binary | openssl enc -base64 | awk '{print "{SHA}"$0}'
{SHA}ZLvhLmLU88dUQwzfUgsq6IV8ZRE=
# echo passphrase | openssl dgst -sha1 -binary | openssl enc -base64 | awk '{print "{SHA}"$0}'
{SHA}ZLvhLmLU88dUQwzfUgsq6IV8ZRE=
# slappasswd -h {SHA} -s passphrase
{SHA}YhAnRDQFLyD8uD4dD0kiBPyxGIQ=

Using the string generated with "slappasswd" works. The other two don't :(

Le 21 févr. 2014 à 13:18, Marcus MERIGHI a écrit :

> j...@carnat.net (Joel Carnat), 2014.02.21 (Fri) 12:09 (CET):
>> I want to generate a hashed rootpw for native ldapd (on OBSD 5.4).
>> I've tried various things like `echo secret | sha256` but I can't
>> authenticate.
>>
>> If possible, I'd like not to install openldap-server just to get
>> slappasswd.
>>
>> What is the (native) way to generate the "SSHA" hashed format for rootpw?
>
> ``What are {SHA} and {SSHA} passwords and how do I generate them?''
> http://www.openldap.org/faq/data/cache/347.html
>
> Easiest way there seems to be:
>
> print "passphrase" | openssl dgst -sha1 -binary | \
>     openssl enc -base64 | awk '{print "{SHA}"$0}'
>
> No way to test here...
>
> Bye, Marcus
Generate hashed rootpw for native ldapd
Hi, I want to generate a hashed rootpw for native ldapd (on OBSD 5.4). I've tried various things like `echo secret | sha256` but I can't authenticate. If possible, I'd like not to install openldap-server just to get slappasswd. What is the (native) way to generate the "SSHA" hashed format for rootpw ? TIA, Jo
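As the replies in this thread work out, the key detail is hashing the passphrase without a trailing newline. A sketch using only base openssl(1); the sample passphrase and the expected hash are the ones from the thread (confirmed there against slappasswd):

```shell
# {SHA} hash for ldapd's rootpw: sha1 the raw passphrase -- no trailing
# newline, hence printf rather than a bare echo -- then base64-encode.
printf '%s' passphrase | openssl dgst -sha1 -binary | \
    openssl enc -base64 | awk '{print "{SHA}" $0}'
# prints {SHA}YhAnRDQFLyD8uD4dD0kiBPyxGIQ=  (matches slappasswd above)
```

`echo -n` works too where echo supports -n; printf is the more portable spelling of "no newline".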
snmpd, oid and scripts
Hi,

I wanted to get rid of net-snmp and use the shipped snmpd(8). I have OpenBSD boxes running various services (DNS, Web, Mail...) and have scripts providing service stats using the extend/exec net-snmp feature.

I read about the "oid" feature of snmpd(8) but it seems it can only publish fixed text or numbers. Is there any way to tell snmpd(8) to run an external script and send its results to the SNMP client?

TIA,
Jo
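For reference, the static form mentioned above looks like this in snmpd.conf; a sketch patterned after the examples in snmpd.conf(5), with the OID and values as placeholders:

```
# snmpd.conf sketch -- the "oid" keyword serves fixed values only,
# which is exactly the limitation described above: no script execution.
oid 1.3.6.1.4.1.30155.42.1 name myName read-only string "fixed text"
oid 1.3.6.1.4.1.30155.42.2 name myStatus read-only integer 1
```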