Anyone successfully using encrypted mosquitto over websockets?
Hi all, I've been battling mosquitto and websockets for too long now. I have weewx weather software (https://weewx.com/) running on my firewall (running i386 current), capturing traffic from my weather station to upload to my https web server. mosquitto is supposed to be able to upload the weather changes in real time over websockets to my web server (still on 6.9), and it does so just fine over OpenVPN.

mosquitto using the mqtt protocol on port 9001 with ssl disabled can capture and send data up to the webserver, but both Firefox and Chrome will not connect to the websockets port if the traffic is "insecure". Chrome at least has decent error messages:

    MQTT: Connecting to MQTT Websockets: ip_cam.openvistas.net 9001 (SSL Disabled)
    paho-mqtt.min.js:37 Mixed Content: The page at
    'https://www.starhouse-observatory.org/weather/belchertown/' was loaded
    over HTTPS, but attempted to connect to the insecure WebSocket endpoint
    'ws://ip_cam.openvistas.net:9001/mqtt'. This request has been blocked;
    this endpoint must be available over WSS.
        d._doConnect @ paho-mqtt.min.js:37
    jquery.min.js:2 Uncaught DOMException: Failed to construct 'WebSocket':
    An insecure WebSocket connection may not be initiated from a page loaded
    over HTTPS.
        at d._doConnect (https://cdnjs.cloudflare.com/ajax/libs/paho-mqtt/1.1.0/paho-mqtt.min.js:37:251)
        at d.connect (https://cdnjs.cloudflare.com/ajax/libs/paho-mqtt/1.1.0/paho-mqtt.min.js:31:233)
        at Client.connect (https://cdnjs.cloudflare.com/ajax/libs/paho-mqtt/1.1.0/paho-mqtt.min.js:70:506)
        at connect (https://www.starhouse-observatory.org/weather/belchertown/js/belchertown.js?1644249956:1304:12)
        at HTMLDocument. (https://www.starhouse-observatory.org/weather/belchertown/:148:13)
        at l (https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js:2:29375)
        at c (https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js:2:29677)

So far, *any* attempt to put this over ssl has failed with a myriad of different errors.
mosquitto itself has pathetic logging, and ktracing the process in an attempt to figure out why has proven fruitless. The real question for the moment is whether anyone has gotten mosquitto/websockets working to push updates out to a web server over an encrypted connection. I know: lots of details are lacking here, and please accept my apologies in advance; there have been too many iterations to track :-( Feel free to apply the clue-by-four here or in private e-mail.

Jeff
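For what it's worth, the browser errors above mean the fix has to be on the mosquitto side: the page is served over HTTPS, so the browser will only open wss://, which requires TLS on the websockets listener itself. A minimal mosquitto.conf sketch of such a listener follows; the certificate paths and filenames here are assumptions, and the certificate must be issued for the exact name the browser connects to (ip_cam.openvistas.net), e.g. an acme-client/Let's Encrypt certificate:

    # mosquitto.conf -- sketch of a TLS-enabled websockets listener;
    # all file paths below are assumptions, adjust to your layout
    listener 9001
    protocol websockets

    # cert must match the hostname the browser dials (ip_cam.openvistas.net)
    certfile /etc/ssl/ip_cam.openvistas.net.fullchain.pem
    keyfile  /etc/ssl/private/ip_cam.openvistas.net.key

The key file must be readable by the user mosquitto runs as, and the Paho client on the page then needs to connect with useSSL enabled so it speaks wss:// rather than ws://.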
Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works
Matt & Łukasz,

> On 07.02.2022 at 19:23, Łukasz Moskała wrote:
>
> Actually the request is:
>
> GET / HTTP/1.1
> Host: example.com
>
> The Host header is REQUIRED by the HTTP/1.1 specification:
> https://datatracker.ietf.org/doc/html/rfc2616#section-14.23
>
> HTTPS also sends the Host header, but SNI is still used to choose the
> correct certificate.

> On 07.02.2022 at 18:15, Matthew Ernisse wrote:
>
> On Mon, Feb 07, 2022 at 05:23:03PM +0100, Mike Fischer said:
>>
>> Not quite true. I do use DNS and for practical applications I also
>> use HTTPS and SNI. But DNS is secondary and sometimes adds another
>> layer of complexity. Also SNI is not available for services not
>> secured by SSL/TLS to my knowledge. E.g. in my example for a web
>> server on port 80 the hostname comes into play only to resolve the
>> IP. The actual request would be "GET / HTTP/1.1" -- no hostname in
>> sight.
>
> FWIW, the assertion about HTTP is incorrect here. HTTP 1.1 defines the
> Host header, which is mandatory in requests and has been used for
> decades to provide name-based virtual hosting sharing an IP address.
>
> https://datatracker.ietf.org/doc/html/rfc2616/#section-14.23
>
> In practice DNS isn't even needed, an entry in your client's hosts(5)
> file has been sufficient.
>
> --Matt

You are both correct! I hadn’t realized the header was mandatory for HTTP/1.1. Thanks for pointing that out. (I wonder if curl(8) adds that header automatically? Though that is off topic for this thread…)

Mike
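To answer the curl(8) aside: yes, curl fills in the Host header automatically, as does essentially every HTTP/1.1 client library. A quick way to see this behaviour without any network is Python's standard library — the server and address below are purely local illustration, not anything from the thread:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

seen = {}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # record the Host header the client actually sent on the wire
        seen["host"] = self.headers.get("Host")
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the throwaway server quiet

# throwaway local server on an ephemeral port
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")      # note: no Host header passed explicitly
conn.getresponse().read()
conn.close()
server.shutdown()

print(seen["host"])           # e.g. 127.0.0.1:49512 -- filled in automatically
```

The client never set Host itself, yet the server received one; an HTTP/1.1 request without it would be rejected by name-based virtual hosts.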
Re: C2 state on AC/battery
On Mon, Feb 7, 2022 at 10:04 AM Jan Stary wrote:
> On Feb 05 13:41:25, guent...@gmail.com wrote:
> > On Sat, Feb 5, 2022 at 2:54 AM Jan Stary wrote:
> > >
> > > This is current/amd64 on a ThinkPad T420s, dmesgs below.
> > > It seems that C2 is or is not supported depending on
> > > whether the machine boots on AC or on battery
> > > (judging by three boots of each).
> > > Is this intended?
> >
> > The acpicpu driver is reporting what ACPI told it; presumably the authors
> > of the AML intended this change as a way to reduce power consumption.
> >
> > Now, ACPI provides a mechanism for the OS to ask to be notified if
> > the contents of the _CST table change, and at least in some cases
> > acpicpu registers for that; if called, it would write new acpicpu lines
> > to the dmesg.
> >
> > If you're not seeing those when plugging/unplugging,
>
> I don't.
>
> > there are two possibilities:
> > * does the AML on your system actually change the values and trigger the
> >   notify?
> > * is acpicpu actually registering the callback correctly?
> >
> > I would suggest adding a printf() right before the aml_register_notify()
> > call in acpicpu.c to see if it's actually being hit,
>
> Probably not: I added a printf() right there
> but nothing shows in dmesg when plugging/unplugging.

That aml_register_notify() path is a *boot* time path, run when acpicpu is attaching. What printf() did you add, and did it appear during boot? If not, then the OS isn't registering the notify callback. Please send a report to bugs@ with sendbug as root, including the acpidump output.

Philip Guenther
Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works
On Mon, Feb 07, 2022 at 05:23:03PM +0100, Mike Fischer said:
>
> Not quite true. I do use DNS and for practical applications I also
> use HTTPS and SNI. But DNS is secondary and sometimes adds another
> layer of complexity. Also SNI is not available for services not
> secured by SSL/TLS to my knowledge. E.g. in my example for a web
> server on port 80 the hostname comes into play only to resolve the
> IP. The actual request would be "GET / HTTP/1.1" -- no hostname in
> sight.

FWIW, the assertion about HTTP is incorrect here. HTTP 1.1 defines the Host header, which is mandatory in requests and has been used for decades to provide name-based virtual hosting sharing an IP address.

https://datatracker.ietf.org/doc/html/rfc2616/#section-14.23

In practice DNS isn't even needed, an entry in your client's hosts(5) file has been sufficient.

--Matt

--
Matthew Ernisse
merni...@ub3rgeek.net
https://www.going-flying.com/
Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works
On 7.02.2022 at 17:23, Mike Fischer wrote:
>
>> On 06.02.2022 at 22:48, Brian Brombacher wrote:
>>
>> At this point I would reconfigure httpd to use two separate ports (80, 81)
>> for each site, or two local IP addresses (::1, ::2, I wouldn’t personally
>> do this, I would go multi port), and then use PF rules to forward the
>> (em0) port 80 as usual and then (em1) port 80 I would forward to
>> rdomain 0, port 81 (example port).
>
> You mean: have only one instance of httpd listen on IPs in rdomain 0 for
> different ports and use PF to forward packets for IPs in rdomain 1 to
> these IP/port combinations in rdomain 0? I’ll give that a try in the next
> few days…
>
>> All of this is beyond the scope of a normal setup. I would usually just
>> do as described by others and rely on hostname rather than IP for httpd
>> to process requests. If for some reason this isn’t feasible, I’d be
>> curious why.
>
> This is mainly for learning. In a production setup I’d agree that this
> seems much too complicated. Also generally HTTPS would be used which
> allows for SNI to choose the virtual hosts. For services other than HTTPS
> that might be more difficult. There might be actual use cases for this in
> home/small office settings though.

A business internet line should have a static prefix.

>> On Feb 6, 2022, at 4:51 PM, Brian Brombacher wrote:
>>
>> From your posts I know why you don’t want to use hostnames.
>
> Not quite true. I do use DNS and for practical applications I also use
> HTTPS and SNI. But DNS is secondary and sometimes adds another layer of
> complexity. Also SNI is not available for services not secured by SSL/TLS
> to my knowledge. E.g. in my example for a web server on port 80 the
> hostname comes into play only to resolve the IP. The actual request would
> be "GET / HTTP/1.1" -- no hostname in sight.

Actually the request is:

GET / HTTP/1.1
Host: example.com

The Host header is REQUIRED by the HTTP/1.1 specification:
https://datatracker.ietf.org/doc/html/rfc2616#section-14.23

HTTPS also sends the Host header, but SNI is still used to choose the
correct certificate.

>> I can see utility in using different IPs for different sites if you don’t
>> want to advertise that the sites are related by their IP.
>
> Yes, though in truth having the same prefix would be unavoidable and would
> let an outsider know that the services are related in some way. It would
> leave open whether the services are using the same host though.

Not really, since that IP could possibly point to a load balancer or
reverse proxy instead of the end server.

> Like I wrote this is mainly for learning at the moment. I am somewhat
> amazed at the subtle differences between IPv4 and IPv6. IPv6 is obviously
> not just IPv4 with more address space. My approach is to figure out how
> things work and what is possible, then for practical applications decide
> whether a particular solution is too complicated to maintain or to set up,
> or too fragile to be of long term use.

I wouldn't be learning about hosting on a dynamic prefix - it's not really
what you would do in the real world. Just set static IPs and pretend that
they don't change, for the sake of learning. Or maybe your ISP could give
you a static prefix.

> As for privacy my aim is to be able to leak as little information as
> possible to reduce any attack surface. Naturally when hosting a service on
> the public Internet the service itself is exposed. That can’t be helped.
> But anything not directly related to the service should IMHO stay hidden
> as much as possible.

If you have a.example.com with A record 1.2.3.4 and AAAA record
2001:db8::dead:beef, and b.example.com with A record 1.2.3.4 and AAAA
record 2001:db8::c0:ffee, then a potential attacker can already tell
that either:
- 2001:db8::dead:beef and 2001:db8::c0:ffee are the same machine, or
- 1.2.3.4 is a reverse proxy or load balancer, possibly serving more sites.

Or you could even use something like Cloudflare to hide your IP - then your
service will share an IP with probably hundreds of other (unrelated)
services, so the IP will not tell an attacker anything.

> Thanks!
> Mike

--
Łukasz Moskała
Re: C2 state on AC/battery
On Feb 05 13:41:25, guent...@gmail.com wrote:
> On Sat, Feb 5, 2022 at 2:54 AM Jan Stary wrote:
> >
> > This is current/amd64 on a ThinkPad T420s, dmesgs below.
> > It seems that C2 is or is not supported depending on
> > whether the machine boots on AC or on battery
> > (judging by three boots of each).
> > Is this intended?
>
> The acpicpu driver is reporting what ACPI told it; presumably the authors
> of the AML intended this change as a way to reduce power consumption.
>
> Now, ACPI provides a mechanism for the OS to ask to be notified if
> the contents of the _CST table change, and at least in some cases
> acpicpu registers for that; if called, it would write new acpicpu lines
> to the dmesg.
>
> If you're not seeing those when plugging/unplugging,

I don't.

> there are two possibilities:
> * does the AML on your system actually change the values and trigger the
>   notify?
> * is acpicpu actually registering the callback correctly?
>
> I would suggest adding a printf() right before the aml_register_notify()
> call in acpicpu.c to see if it's actually being hit,

Probably not: I added a printf() right there
but nothing shows in dmesg when plugging/unplugging.
(Yes, this is on the recompiled kernel with the printf.)

Jan

> and if it is then dump the tables on your box and grovel around in them
> to see if you see notification support on the CPU nodes.
>
> Philip Guenther
Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works
> On 06.02.2022 at 22:48, Brian Brombacher wrote:
>
> At this point I would reconfigure httpd to use two separate ports (80, 81)
> for each site, or two local IP addresses (::1, ::2, I wouldn’t personally
> do this, I would go multi port), and then use PF rules to forward the
> (em0) port 80 as usual and then (em1) port 80 I would forward to
> rdomain 0, port 81 (example port).

You mean: have only one instance of httpd listen on IPs in rdomain 0 for different ports and use PF to forward packets for IPs in rdomain 1 to these IP/port combinations in rdomain 0? I’ll give that a try in the next few days…

> All of this is beyond the scope of a normal setup. I would usually just do
> as described by others and rely on hostname rather than IP for httpd to
> process requests. If for some reason this isn’t feasible, I’d be curious
> why.

This is mainly for learning. In a production setup I’d agree that this seems much too complicated. Also, generally HTTPS would be used, which allows for SNI to choose the virtual hosts. For services other than HTTPS that might be more difficult. There might be actual use cases for this in home/small office settings though.

> On Feb 6, 2022, at 4:51 PM, Brian Brombacher wrote:
>
> From your posts I know why you don’t want to use hostnames.

Not quite true. I do use DNS and for practical applications I also use HTTPS and SNI. But DNS is secondary and sometimes adds another layer of complexity. Also SNI is not available for services not secured by SSL/TLS, to my knowledge. E.g. in my example for a web server on port 80 the hostname comes into play only to resolve the IP. The actual request would be "GET / HTTP/1.1" -- no hostname in sight.

> I can see utility in using different IPs for different sites if you don’t
> want to advertise that the sites are related by their IP.

Yes, though in truth having the same prefix would be unavoidable and would let an outsider know that the services are related in some way. It would leave open whether the services are using the same host though.

Like I wrote, this is mainly for learning at the moment. I am somewhat amazed at the subtle differences between IPv4 and IPv6. IPv6 is obviously not just IPv4 with more address space. My approach is to figure out how things work and what is possible, then for practical applications decide whether a particular solution is too complicated to maintain or to set up, or too fragile to be of long term use.

As for privacy, my aim is to leak as little information as possible to reduce any attack surface. Naturally when hosting a service on the public Internet the service itself is exposed. That can’t be helped. But anything not directly related to the service should IMHO stay hidden as much as possible.

Thanks!
Mike
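For anyone following along, Brian's multi-port suggestion could look roughly like this in pf.conf. This is a sketch only: the interface names and ports come from the thread, but the example address and the exact rdomain handling are assumptions:

    # pf.conf sketch -- httpd listens on ports 80 and 81 in rdomain 0

    # site A: em0 traffic reaches httpd's port 80 listener as usual
    pass in on em0 inet6 proto tcp to port 80

    # site B: redirect em1 port 80 to httpd's second listener on port 81,
    # steering the packets into routing table 0 (hypothetical address)
    pass in on em1 inet6 proto tcp to port 80 rdr-to 2001:db8::1 port 81 rtable 0

Whether the rtable/rdomain crossing behaves as intended would need testing on the actual setup; the point is just that one httpd instance serves both sites on different ports.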
Re: NXDOMAIN on unbound with local TLD
On 2022-02-06, Laura Smith wrote:
> I have a local OpenBSD setup with NSD and Unbound.
>
> I'm seeing a weird problem where I am getting an NXDOMAIN (per below) on my
> internal "bar.corp" domain.
>
> My unbound config is as follows. If I do the same dig query directly
> against the stub resolvers, it works with no issue.
>
> server:
>   interface: 127.0.0.1
>   # extra interface: entries removed for list post
>   #
>   do-ip6: yes
>   #
>   access-control: 0.0.0.0/0 refuse
>   access-control: ::0/0 refuse
>   access-control: 127.0.0.0/8 allow
>   access-control: ::1 allow
>   access-control: 10.0.0.0/8 allow
>   #
>   hide-identity: yes
>   hide-version: yes
>   auto-trust-anchor-file: "/var/unbound/db/root.key"
>   prefetch: yes
>   prefetch-key: yes
>   rrset-roundrobin: yes
>   minimal-responses: yes
>   root-hints: "/var/unbound/db/named.root"
>   domain-insecure: "bar.corp"

Not sure, but you might also need domain-insecure for "corp". If that's not it, it is probably best to ask on the unbound mailing list.

>   local-zone: "bar.corp" nodefault
>   local-zone: "use-application-dns.net" always_nxdomain
>
> remote-control:
>   control-enable: yes
>   control-use-cert: no
>   control-interface: /var/run/unbound.sock
>
> stub-zone:
>   name: "bar.corp"
>   stub-addr: 10.0.0.50
>   stub-addr: 10.0.1.50
>
> ; <<>> DiG 9.16.22-Debian <<>> foo.bar.corp
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 46113
> ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
>
> ;; OPT PSEUDOSECTION:
> ; EDNS: version: 0, flags:; udp: 1232
> ;; QUESTION SECTION:
> ;foo.bar.corp.  IN  A
>
> ;; AUTHORITY SECTION:
> .  3501  IN  SOA  a.root-servers.net. nstld.verisign-grs.com. 2022020600 1800 900 604800 86400
>
> ;; Query time: 4 msec
> ;; SERVER:
> ;; WHEN: Sun Feb 06 12:21:04 GMT 2022
> ;; MSG SIZE rcvd: 122

--
Please keep replies on the mailing list.
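The suggestion above would amount to an unbound.conf change like the following. This is only a sketch; whether the parent "corp" label actually needs the extra entry depends on where DNSSEC validation is failing on this particular setup:

    server:
        # mark both the stub zone and its parent label as insecure,
        # since neither exists in the real DNSSEC-signed root
        domain-insecure: "bar.corp"
        domain-insecure: "corp"

        # keep the stub zone itself resolvable past the local-zone defaults
        local-zone: "bar.corp" nodefault

After a config change, `unbound-checkconf` and a restart, re-running the same dig query against 127.0.0.1 would show whether the NXDOMAIN from the root (visible in the SOA for "." above) goes away.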
Re: Question about packet reassembly and pf
CC'ing back to the mailing list for a sender who ignored my request to keep replies on the list.

> You answered on the mailing list:
>
> "If you want to do the "reassemble tcp" things then you would need to
> use it in your ruleset, they are different to the IP packet reassembly
> controlled by "set reassemble". It's a bit unfortunate that they use
> the same word in the option name."
>
> As a NON native speaker: Wait... WHAT?! I understood it exactly like
> the person asking the question, that if you use "set reassemble yes" it
> does the job.
>
> I suggest a CHANGE:
>
> set reassemble_ip
> set reassemble_tcp
> set reassemble (does it all)
>
> If this is no solution, would you please reconsider phrasing the manual
> better.

The manual is already clear:

    set reassemble yes | no [no-df]
        The reassemble option is used to enable or disable the reassembly
        of fragmented packets, and can be set to yes (the default) or no.
        If no-df is also specified, fragments with the ...

    reassemble tcp
        Statefully normalises TCP connections. reassemble tcp performs
        the following normalisations:

        TTL [...]
        Timestamp Modulation [...]
        Extended PAWS Checks [...]

I suppose we could change pfctl "reassemble tcp" to "normalise tcp" (and allow "reassemble" as a synonym to avoid breaking existing configs). Not sure if it's worth it though; people using the more advanced options in PF certainly need to read the manual.
Re: Question about packet reassembly and pf
On 2022-02-07, J Doe wrote:
> My question is: is it unnecessary to include "reassemble tcp" in the
> scrub rule if "set reassemble yes" has already been set? I know the
> FAQ example also doesn't explicitly state "set reassemble yes", but the
> man page notes that that is the default setting.
>
> Stated another way: is there ever a case where I would put "set
> reassemble yes" and "match in all scrub (... reassemble tcp)"?

If you want to do the "reassemble tcp" things then you would need to use it in your ruleset; they are different from the IP packet reassembly controlled by "set reassemble". It's a bit unfortunate that they use the same word in the option name.

--
Please keep replies on the mailing list.
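To make the distinction concrete, a short pf.conf sketch (the specific scrub options chosen here are illustrative, not from the thread):

    # IP-level fragment reassembly -- this is what "set reassemble"
    # controls, and it is already on by default
    set reassemble yes

    # TCP-level normalisation (TTL, timestamp modulation, PAWS checks)
    # is separate, and only happens if "reassemble tcp" appears in a
    # scrub option on a rule:
    match in all scrub (no-df reassemble tcp)

So the answer to the question is yes: a ruleset can reasonably contain both, because they act on different layers and neither implies the other.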