Re: disk i/o test

2022-03-07 Thread Brian Brombacher



> On Mar 7, 2022, at 12:10 PM, Brian Brombacher  wrote:
> 
> Hi Mihai,
> 
> Not exactly related to disk speed, but have you cranked up the following 
> sysctl to see if it helps?
> 
> sysctl kern.bufcachepercentage=9
> 
> I put an entry in /etc/sysctl.conf for persistence.
> 
> This will cause up to 90% of system memory to be used as a unified buffer 
> cache for disk access.  Not sure if that helps but I use that value on every 
> install, including desktop and servers.  I can’t remember if the default 
> value has changed in the past 10 years but I always go with 90%.


I was helped off-list with this information: there is a memory threshold on 
amd64 that affects the behavior of the sysctl.  If you have more than 4 GB of 
memory, the sysctl isn't important.


> 
> -Brian
> 
>>> On Mar 7, 2022, at 6:17 AM, Mihai Popescu  wrote:
>>> 
>>> On Mon, Mar 7, 2022 at 8:46 AM Janne Johansson  wrote:
>>> 
>>> Den sön 6 mars 2022 kl 16:41 skrev Mihai Popescu :
>>>> 
>>>> Since this thread is moving slowly in another direction, let me
>>> 
>>> True
>>> 
>>>> reiterate my situation again: I am running a browser (mostly chromium)
>>>> and the computer slows down on downloads. Since I've checked the
>>>> downloads rates, I observed they are slow than my maximum 500Mbps for
>>>> the line.
>>>> I can reach 320Mbps maximum, but mostly it stays at 280Mbps and the
>>>> Chromium has 30 seconds delays in everything i do.
>>> 
>>> I would make sure it is not some kind of DNS thing, 30 second delays
>>> sounds A LOT
>>> like trying a "dead" resolver 3 times with 10 secs in between, before
>>> moving to a "working" one.
>> 
>> By "delay" I mean the time passed from clicking on some Chromium menu
>> and the actual display of that menu. Even using a tty is slow, login
>> . password  in that disk intense usage period.
>> Tried Debian and FreeBSD, all are able to write disk and do graphics.
>> That ZFS on FreeBSD is mind blowing, I hope it's reliable too.
>> 
>> All I wanted was to compare my hardware and disk speeds with someone
>> running OpenBSD: simple dmesg <-> speed report match, but I think I
>> hit a taboo again.
>> Found some discussions on misc@ about that, no clear answer. I think I
>> will close this thread and see what this ZFS is about :-) .
>> 
>> Thank you all.
>>> 
>>> --
>>> May the most significant bit of your life be positive.
>> 
> 



Re: disk i/o test

2022-03-07 Thread Brian Brombacher
Correction: 

kern.bufcachepercentage=90

> On Mar 7, 2022, at 12:07 PM, Brian Brombacher  wrote:
> 
> Hi Mihai,
> 
> Not exactly related to disk speed, but have you cranked up the following 
> sysctl to see if it helps?
> 
> sysctl kern.bufcachepercentage=9
> 
> I put an entry in /etc/sysctl.conf for persistence.
> 
> This will cause up to 90% of system memory to be used as a unified buffer 
> cache for disk access.  Not sure if that helps but I use that value on every 
> install, including desktop and servers.  I can’t remember if the default 
> value has changed in the past 10 years but I always go with 90%.
> 
> -Brian
> 
>>> On Mar 7, 2022, at 6:17 AM, Mihai Popescu  wrote:
>>> 
>>> On Mon, Mar 7, 2022 at 8:46 AM Janne Johansson  wrote:
>>> 
>>> Den sön 6 mars 2022 kl 16:41 skrev Mihai Popescu :
>>>> 
>>>> Since this thread is moving slowly in another direction, let me
>>> 
>>> True
>>> 
>>>> reiterate my situation again: I am running a browser (mostly chromium)
>>>> and the computer slows down on downloads. Since I've checked the
>>>> downloads rates, I observed they are slow than my maximum 500Mbps for
>>>> the line.
>>>> I can reach 320Mbps maximum, but mostly it stays at 280Mbps and the
>>>> Chromium has 30 seconds delays in everything i do.
>>> 
>>> I would make sure it is not some kind of DNS thing, 30 second delays
>>> sounds A LOT
>>> like trying a "dead" resolver 3 times with 10 secs in between, before
>>> moving to a "working" one.
>> 
>> By "delay" I mean the time passed from clicking on some Chromium menu
>> and the actual display of that menu. Even using a tty is slow, login
>> . password  in that disk intense usage period.
>> Tried Debian and FreeBSD, all are able to write disk and do graphics.
>> That ZFS on FreeBSD is mind blowing, I hope it's reliable too.
>> 
>> All I wanted was to compare my hardware and disk speeds with someone
>> running OpenBSD: simple dmesg <-> speed report match, but I think I
>> hit a taboo again.
>> Found some discussions on misc@ about that, no clear answer. I think I
>> will close this thread and see what this ZFS is about :-) .
>> 
>> Thank you all.
>>> 
>>> --
>>> May the most significant bit of your life be positive.
>> 



Re: disk i/o test

2022-03-07 Thread Brian Brombacher
Hi Mihai,

Not exactly related to disk speed, but have you cranked up the following sysctl 
to see if it helps?

sysctl kern.bufcachepercentage=9

I put an entry in /etc/sysctl.conf for persistence.

This will cause up to 90% of system memory to be used as a unified buffer cache 
for disk access.  Not sure if that helps but I use that value on every install, 
including desktop and servers.  I can’t remember if the default value has 
changed in the past 10 years but I always go with 90%.

-Brian

> On Mar 7, 2022, at 6:17 AM, Mihai Popescu  wrote:
> 
> On Mon, Mar 7, 2022 at 8:46 AM Janne Johansson  wrote:
>> 
>> Den sön 6 mars 2022 kl 16:41 skrev Mihai Popescu :
>>> 
>>> Since this thread is moving slowly in another direction, let me
>> 
>> True
>> 
>>> reiterate my situation again: I am running a browser (mostly chromium)
>>> and the computer slows down on downloads. Since I've checked the
>>> downloads rates, I observed they are slow than my maximum 500Mbps for
>>> the line.
>>> I can reach 320Mbps maximum, but mostly it stays at 280Mbps and the
>>> Chromium has 30 seconds delays in everything i do.
>> 
>> I would make sure it is not some kind of DNS thing, 30 second delays
>> sounds A LOT
>> like trying a "dead" resolver 3 times with 10 secs in between, before
>> moving to a "working" one.
> 
> By "delay" I mean the time passed from clicking on some Chromium menu
> and the actual display of that menu. Even using a tty is slow, login
> . password  in that disk intense usage period.
> Tried Debian and FreeBSD, all are able to write disk and do graphics.
> That ZFS on FreeBSD is mind blowing, I hope it's reliable too.
> 
> All I wanted was to compare my hardware and disk speeds with someone
> running OpenBSD: simple dmesg <-> speed report match, but I think I
> hit a taboo again.
> Found some discussions on misc@ about that, no clear answer. I think I
> will close this thread and see what this ZFS is about :-) .
> 
> Thank you all.
>> 
>> --
>> May the most significant bit of your life be positive.
> 



Re: disk i/o test

2022-03-06 Thread Brian Brombacher



> On Mar 6, 2022, at 7:41 AM, Mihai Popescu  wrote:
> 
> Since this thread is moving slowly in another direction, let me
> reiterate my situation again: I am running a browser (mostly chromium)
> and the computer slows down on downloads. Since I've checked the
> downloads rates, I observed they are slow than my maximum 500Mbps for
> the line.
> I can reach 320Mbps maximum, but mostly it stays at 280Mbps and the
> Chromium has 30 seconds delays in everything i do.
> 
> As a suggestion from Stuart, I was trying to separate tests for
> downloading and disk write. The disk looks slow.

Is the disk brand new?  If I missed this somewhere, apologies.

If it's not new, how confident are you that the region of the disk where Chromium 
is writing has not suffered any reallocations at the physical layer?  I find read 
and write performance on spinning disks is governed by physical layout more than 
anything else.  For linear access, of course.

Getting 41 MB/sec on an old disk is not outside my expectations if the region you 
are accessing has reallocations.

Reallocations occur when the physical media is no longer usable within 
thresholds, so a new sector/area is allocated elsewhere on the disk and mapped 
in.  This turns what you consider linear access into seeks.  The hardware does 
this for you; you can't stop it, nor should you want to.
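
A quick way to check whether remapping is in play, and to compare linear read
speed near the start of the disk against a region further in (illustrative
only; the device name and offsets are examples, and smartctl comes from the
smartmontools package):

    # reallocated/pending sector counters from SMART
    smartctl -A /dev/rsd0c | grep -i -e realloc -e pending

    # raw linear read near the outer tracks vs. roughly 200 GB into the disk
    dd if=/dev/rsd0c of=/dev/null bs=1m count=1024
    dd if=/dev/rsd0c of=/dev/null bs=1m count=1024 skip=200000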

Solution: Get SSDs.


> I tried both Debian 11 and Ubuntu and the download and disk write
> jumps to 500Mbps without problems. And no, I cannot tolerate Linux
> enough to use it as a daily OS, so don't bother to recommend it. I
> cannot attain this in OpenBSD. Maybe that is the maximum possible for
> my hardware. Just asking, for the moment i can live with this delays.
> I was curious if someone with similar hardware can do better.
> 
> OpenBSD 7.1-beta (GENERIC.MP) #401: Thu Mar  3 12:48:28 MST 2022
>dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 7711543296 (7354MB)
> avail mem = 7460630528 (7115MB)
> random: good seed from bootblocks
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xe86ed (64 entries)
> bios0: vendor Hewlett-Packard version "K06 v02.77" date 03/22/2018
> bios0: Hewlett-Packard HP Compaq Pro 6305 SFF
> acpi0 at bios0: ACPI 5.0
> acpi0: sleep states S0 S3 S4 S5
> acpi0: tables DSDT FACP APIC FPDT MCFG HPET SSDT MSDM TCPA IVRS SSDT SSDT CRAT
> acpi0: wakeup devices SBAZ(S4) PS2K(S3) PS2M(S3) P0PC(S4) PE20(S4)
> PE21(S4) PE22(S4) BNIC(S4) PE23(S4) BR12(S4) BR14(S4) OHC1(S3)
> EHC1(S3) OHC2(S3) EHC2(S3) OHC3(S3) [...]
> acpitimer0 at acpi0: 3579545 Hz, 32 bits
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 16 (boot processor)
> cpu0: AMD A8-5500B APU with Radeon(tm) HD Graphics, 3194.47 MHz, 15-10-01
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,F16C,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,XOP,SKINIT,WDT,FMA4,TCE,NODEID,TBM,TOPEXT,CPCTR,ITSC,BMI1,IBPB
> cpu0: 64KB 64b/line 2-way I-cache, 16KB 64b/line 4-way D-cache, 2MB



Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works

2022-02-06 Thread Brian Brombacher



> On Feb 6, 2022, at 4:51 PM, Brian Brombacher  wrote:
> 
> 
> 
>> On Feb 6, 2022, at 4:32 PM, Mike Fischer  wrote:
>> 
>> 
>>>> Am 06.02.2022 um 21:13 schrieb Brian Brombacher :
>>> 
>>>>> You can work around it by putting both interfaces in diffrent rdomains, 
>>>>> then running two httpd instances, one in rdomain with first IP, second in 
>>>>> rdomain with second IP.
>>>> 
>>> 
>>> This will work.  You can use PF rules to cross rdomains if you require.
>> 
>> Thanks for that info!
>> 
>> 
>> rdomains are a new concept for me. From what I currently understand after 
>> reading rdomain(4) I don’t get why I would need to run two instances of my 
>> service, e.g. httpd(8) to use rdomains? Is a process somehow tied to an 
>> rdomain?
>> 
>> And while the PF mechanism to cross rdomains might be needed in some setups 
>> I don’t see where it would help in my scenario? I want to use my service 
>> mainly from outside the host. (Though for local access I would understand 
>> the need to configure some PF rules.)
>> 
>> I tried the following:
>> Starting state: em0 and em1 each configured for IPv4 and IPv6, the later 
>> using autoconf
>> em0:
>> …
>>   inet 192.168.0.10 netmask 0xff00 broadcast 192.168.0.255
>>   inet6 fe80::20c:29ff:fd9c:4b7%em0 prefixlen 64 scopeid 0x1
>>   inet6 2001:db8::20c:29ff:fd9c:4b7 prefixlen 64 autoconf pltime 978 vltime 
>> 6912
>> …
>> 
>> em1:
>> …
>>   inet 192.168.0.20 netmask 0xff00 broadcast 192.168.0.255
>>   inet6 fe80::20c:29ff:fd9c:4c1%em0 prefixlen 64 scopeid 0x1
>>   inet6 2001:db8::20c:29ff:fd9c:4c1 prefixlen 64 autoconf pltime 978 vltime 
>> 6912
>> …
>> 
>> # netstat -R
>> Rdomain 0
>> Interfaces: lo0 em0 em1 enc0 pflog0
>> Routing table: 0
>> 
>> # 
>> 
>> Change #1:
>> 
>> # ifconfig em1 rdomain 1
>> 
>> New state:
>> em0: (same as above)
>> …
>>   inet 192.168.0.10 netmask 0xff00 broadcast 192.168.0.255
>>   inet6 fe80::20c:29ff:fd9c:4b7%em0 prefixlen 64 scopeid 0x1
>>   inet6 2001:db8::20c:29ff:fd9c:4b7 prefixlen 64 autoconf pltime 978 vltime 
>> 6912
>> …
>> 
>> em1: (no IPs)
>> …
>> …
>> 
>> # netstat -R
>> Rdomain 0
>> Interfaces: lo0 em0 enc0 pflog0
>> Routing table: 0
>> 
>> Rdomain 1
>> Interfaces: em1 lo1
>> Routing table: 1
>> 
>> # 
>> 
>> Change #2: Re-add the IPs:
>> # ifconfig em1 inet 192.168.0.20 netmask 255.255.255.0 broadcast 
>> 192.168.0.255
>> # ifconfig em1 inet6 autoconf -temporary -soii
>> 
>> New state: IPs on em1 are now set as in the original state, em1 is in 
>> rdomain 1.
>> 
>> So far so good!
>> 
> 
> At this point I would reconfigure httpd to use two separate ports (80, 81) 
> for each site, or two local IP addresses (::1, ::2, I wouldn’t personally do 
> this, I would go multi port), and then use PF rules to forward the (em0) port 
> 80 as usual and then (em1) port 80 I would forward to rdomain 0, port 81 
> (example port).
> 
> All of this is beyond the scope of a normal setup.  I would usually just do 
> as described by others and rely on hostname rather than IP for httpd to 
> process requests.  If for some reason this isn’t feasible, I’d be curious why.
> 

From your posts I know why you don't want to use hostnames.  I can see utility 
in using different IPs for different sites if you don't want to advertise that 
the sites are related by their IP.

> 
>> 
>> After restarting httpd it failed with message: "parent: send server: Can't 
>> assign requested address“ in /var/log messages
>> Ok, so there seems to be a reason for needing another instance of httpd. But 
>> how would that work? What would I have to do to get that second instance to 
>> listen on IPs from rdomain 1?
>> 
>> I have tried setting up a copy of /usr/sbin/httpd (actually a symbolic link 
>> using the name /root/bin/httpd_em1) and I have created a new 
>> /etc/httpd.2.conf with only the em1 related content. I have also duplicated 
>> /etc/rc.d/httpd to /etc/rc.d/httpd_em1 and changed 
>> daemon='/root/bin/httpd_em1' (the path to my symbolic link) and 
>> daemon_flags="${daemon_flags} -f /etc/httpd.2.conf"
>> No joy! rcctl start httpd_em1 results in the same message in 
>> /var/log/messages.
>> 
>> 
>> Thanks for any pointers you can give me.
>> 
>> Mike
>> 
> 



Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works

2022-02-06 Thread Brian Brombacher



> On Feb 6, 2022, at 4:32 PM, Mike Fischer  wrote:
> 
> 
>> Am 06.02.2022 um 21:13 schrieb Brian Brombacher :
>> 
>>>> You can work around it by putting both interfaces in diffrent rdomains, 
>>>> then running two httpd instances, one in rdomain with first IP, second in 
>>>> rdomain with second IP.
>>> 
>> 
>> This will work.  You can use PF rules to cross rdomains if you require.
> 
> Thanks for that info!
> 
> 
> rdomains are a new concept for me. From what I currently understand after 
> reading rdomain(4) I don’t get why I would need to run two instances of my 
> service, e.g. httpd(8) to use rdomains? Is a process somehow tied to an 
> rdomain?
> 
> And while the PF mechanism to cross rdomains might be needed in some setups I 
> don’t see where it would help in my scenario? I want to use my service mainly 
> from outside the host. (Though for local access I would understand the need 
> to configure some PF rules.)
> 
> I tried the following:
> Starting state: em0 and em1 each configured for IPv4 and IPv6, the later 
> using autoconf
> em0:
> …
>inet 192.168.0.10 netmask 0xff00 broadcast 192.168.0.255
>inet6 fe80::20c:29ff:fd9c:4b7%em0 prefixlen 64 scopeid 0x1
>inet6 2001:db8::20c:29ff:fd9c:4b7 prefixlen 64 autoconf pltime 978 vltime 
> 6912
> …
> 
> em1:
> …
>inet 192.168.0.20 netmask 0xff00 broadcast 192.168.0.255
>inet6 fe80::20c:29ff:fd9c:4c1%em0 prefixlen 64 scopeid 0x1
>inet6 2001:db8::20c:29ff:fd9c:4c1 prefixlen 64 autoconf pltime 978 vltime 
> 6912
> …
> 
> # netstat -R
> Rdomain 0
>  Interfaces: lo0 em0 em1 enc0 pflog0
>  Routing table: 0
> 
> # 
> 
> Change #1:
> 
> # ifconfig em1 rdomain 1
> 
> New state:
> em0: (same as above)
> …
>inet 192.168.0.10 netmask 0xff00 broadcast 192.168.0.255
>inet6 fe80::20c:29ff:fd9c:4b7%em0 prefixlen 64 scopeid 0x1
>inet6 2001:db8::20c:29ff:fd9c:4b7 prefixlen 64 autoconf pltime 978 vltime 
> 6912
> …
> 
> em1: (no IPs)
> …
> …
> 
> # netstat -R
> Rdomain 0
>  Interfaces: lo0 em0 enc0 pflog0
>  Routing table: 0
> 
> Rdomain 1
>  Interfaces: em1 lo1
>  Routing table: 1
> 
> # 
> 
> Change #2: Re-add the IPs:
> # ifconfig em1 inet 192.168.0.20 netmask 255.255.255.0 broadcast 192.168.0.255
> # ifconfig em1 inet6 autoconf -temporary -soii
> 
> New state: IPs on em1 are now set as in the original state, em1 is in rdomain 
> 1.
> 
> So far so good!
> 

At this point I would reconfigure httpd to use two separate ports (80, 81), one 
per site, or two local IP addresses (::1, ::2; I wouldn't personally do this, I 
would go multi-port), and then use PF rules to forward (em0) port 80 as usual 
and forward (em1) port 80 to rdomain 0 port 81 (example port).
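
Rough shape of that idea (untested; the addresses are the ones from your mail,
and the pf side is only a starting point, see pf.conf(5) on rdr-to/rtable and
rdomain(4) for how packets are moved between routing domains):

    # httpd.conf: both servers handled by the one instance in rdomain 0
    server "a.example.com" { listen on 192.168.0.10 port 80 ... }
    server "b.example.com" { listen on 192.168.0.10 port 81 ... }

    # pf.conf: take em1's port 80 (rdomain 1) and hand it to httpd on port 81
    pass in quick on em1 inet proto tcp to (em1) port 80 \
        rdr-to 192.168.0.10 port 81 rtable 0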

All of this is beyond the scope of a normal setup.  I would usually just do as 
described by others and rely on hostname rather than IP for httpd to process 
requests.  If for some reason this isn’t feasible, I’d be curious why.


> 
> After restarting httpd it failed with message: "parent: send server: Can't 
> assign requested address“ in /var/log messages
> Ok, so there seems to be a reason for needing another instance of httpd. But 
> how would that work? What would I have to do to get that second instance to 
> listen on IPs from rdomain 1?
> 
> I have tried setting up a copy of /usr/sbin/httpd (actually a symbolic link 
> using the name /root/bin/httpd_em1) and I have created a new 
> /etc/httpd.2.conf with only the em1 related content. I have also duplicated 
> /etc/rc.d/httpd to /etc/rc.d/httpd_em1 and changed 
> daemon='/root/bin/httpd_em1' (the path to my symbolic link) and 
> daemon_flags="${daemon_flags} -f /etc/httpd.2.conf"
> No joy! rcctl start httpd_em1 results in the same message in 
> /var/log/messages.
> 
> 
> Thanks for any pointers you can give me.
> 
> Mike
> 



Re: httpd.conf: 2 interfaces, 2 listen, IPv6, only one server works

2022-02-06 Thread Brian Brombacher



> On Feb 6, 2022, at 12:07 PM, Mike Fischer  wrote:
> 
> Hi Łukasz,
> 
>>> Am 06.02.2022 um 12:08 schrieb Łukasz Moskała :
>>> 
>>> W dniu 6.02.2022 o 05:28, Mike Fischer pisze:
>>> OpenBSD 7.0 stable amf64
>>> My host has two ethernet interfaces, em0 and em1.
>>> Note: The host is a VM with two virtual interfaces.
>>> Both interfaces are configured like this for IPv6 in the /etc/hostname.em0 
>>> and /etc/hostname.em1 files:
>>> inet6 autoconf -temporary -soii
>>> They are connected to the same LAN and each produces a unique IPv6 address 
>>> using the same prefix and an EUI64 interface identifier as expected*.
>>> $ ifconfig em0|grep inet6|grep -vE '(fe80:| fd|temporary|deprecated)'
>>>inet6 2001:db8::20c:29ff:fd9c:4b7 prefixlen 64 autoconf pltime 1070 
>>> vltime 7043
>>> $ ifconfig em1|grep inet6|grep -vE '(fe80:| fd|temporary|deprecated)‘
>>>inet6 2001:db8::20c:29ff:fd9c:4c1 prefixlen 64 autoconf pltime 1032 
>>> vltime 7005
>>> DNS records have been set up*:
>>> $ dig +short a.example.com 
>>> 2001:db8::20c:29ff:fd9c:4b7
>>> $ dig +short b.example.com 
>>> 2001:db8::20c:29ff:fd9c:4c1
>>> $
>>> My httpd.conf looks like this*:
>>> ipa = "2001:db8::20c:29ff:fd9c:4b7"
>>> ipb = "2001:db8::20c:29ff:fd9c:4c1"
>>> server "a.example.com" {
>>>listen on $ipa port 80
>>>directory index index.html
>>>location "/*" {
>>>root "/htdocs/a"
>>>}
>>> }
>>> server "b.example.com" {
>>>listen on $ipb port 80
>>>directory index index.html
>>>location "/*" {
>>>root "/htdocs/b"
>>>}
>>> }
>>> /var/www/htdocs/a/index.html and /var/www/htdocs/b/index.html exist and 
>>> each contains a minimal HTML page.
>>> httpd -n sees no problem.
>>> rcctl start httpd works fine.
>>> However trying to access http://a.example.com or 
>>> http://[2001:db8::20c:29ff:fd9c:4b7] gets a timeout.
>>> Accessing http://b.example.com or http://[2001:db8::20c:29ff:fd9c:4c1] 
>>> works fine.
>>> Trying to find the cause I checked:
>>> $ netstat -an|grep LISTEN
>>> …
>>> tcp6 0  0  2001:db8::.80*.*LISTEN
>>> tcp6 0  0  2001:db8::.80*.*LISTEN
>>> …
>>> $
>>> Which seems weird because only the prefix is listed not the complete IPv6 
>>> addresses.
>>> Am I seeing a bug or is my expectation that both servers (virtual hosts) 
>>> work wrong?
>>> *) Hostnames and IPs anonymized.
>>> Thanks!
>>> Mike
>> 
>> "They are connected to the same LAN"
>> This is most likely your problem. Having two IPs on two interfaces in the 
>> same subnet will usually cause problems. Most likely you also have two 
>> default routes.
> 
> Yes, you are right. There are 2 default routes for IPv6.
> 
> Not sure why IPv6 works like this but that’s what I’m trying to learn. I am 
> using this machine as a test bed for figuring out IPv6. My expectation was 
> that IPv6 would work just like IPv4 in this scenario.
> 
> Note: For IPv4 the same setup works fine, yielding a web server that serves 
> both a.example.com and b.example.com on different IPs. The expectation would 
> be that replies would be send through the same interface the request came in 
> on. IPv4 has the drawback that I only have 1 public IPv4 address. So I need 
> to differentiate bei port number on the Internet side of my router to map to 
> the correct LAN IP.
> 
> So I learned something here, which was my goal. Thanks!
> 
> 
>> You can work around it by putting both interfaces in diffrent rdomains, then 
>> running two httpd instances, one in rdomain with first IP, second in rdomain 
>> with second IP.
> 

This will work.  You can use PF rules to cross rdomains if you require.

> I’ll look into this (more as a way to learn more about how this works than to 
> actually fill a pressing need). Thanks for the idea.
> 
> 
>> Or, assign both IPs statically to em0 (one with prefix /64, second with 
>> prefix /128), then remove em1 - I'm 99% sure this will solve your problem.
> 
> Yes. But in my experimental setup this would not be practical because the 
> IPv6 prefix is dynamic. Assigning a static IPv6 address will cease to work 
> when the prefix changes, at least for connections from the Internet. There 
> are issues with the setup of port forwarding on my router as well. I thought 
> I could get around all of these issues by using the second interface.
> 
> For this experiment the goal was get a single host to serve two websites on 
> separate IPv6 addresses. All this in a LAN setting where the public IPv6 
> prefix is dynamic. Getting it to work short term is easy using static IPs. 
> But ensuring it will work across prefix changes is more complicated. I do 
> have a script that triggers on prefix changes and could be used to adjust the 
> static IPs and the httpd.conf as needed. I don’t much like that solution 
> though.
> 
> 
> Thanks for your reply!
> 
> 
> Mike
> 



Re: libressl vs openssl

2022-01-28 Thread Brian Brombacher



> On Jan 28, 2022, at 11:53 AM, Laura Smith 
>  wrote:
> 
> ‐‐‐ Original Message ‐‐‐
> 
>> On Friday, January 28th, 2022 at 14:43, dansk puffer 
>>  wrote:
>> 
>> Are there any major security differences between libressl and openssl 
>> nowadays? From what I read the situation for openssl improved and some Linux 
>> distros switched back to openssl again with mostly? OpenBSD remaining to use 
>> libressl.
> 
> For me at least, my main beef with Libressl is that it has seemingly mostly 
> achieved its security posture by removing functions.
> 
> Unfortunatley the functions removed are not obscure ones, but more common 
> ones such as, IIRC, various very useful certificate and PKCS11 related 
> functions.
> 

Not to be rude, but you obviously don’t know anything about how code security 
works.

The less code surface area that attackers have to play with, the safer you are. 
 It is mathematically proven.

Now, removing code that had known quality and cultural SDLC issues that prevent 
the code from being secure, yes, I’m absolutely for removing that crap from the 
face of the earth.

If nobody else joins us, who gives a shit.





Re: libressl vs openssl

2022-01-28 Thread Brian Brombacher



> On Jan 28, 2022, at 9:46 AM, dansk puffer  wrote:
> 
> Are there any major security differences between libressl and openssl 
> nowadays? From what I read the situation for openssl improved and some Linux 
> distros switched back to openssl again with mostly? OpenBSD remaining to use 
> libressl.

I’m not sure you can fix cultural software quality issues in 2 years, but ok.





Re: I did not realize I was an OpenBSD user!

2021-12-27 Thread Brian Brombacher
Hi David,

Thank you for the write-up, this was an awesome read.  I was on the edge of a 
cliff waiting to hear what device or app you replaced next.

Bravo, excellent job done!

-Brian

> On Dec 27, 2021, at 1:03 AM, David Rinehart  wrote:
> 
> A long read, but may be interesting...
> 
> I Wanted to get into a nix OS at home, after being away for many 
> years. Researched a short list of nix OSs. To be honest, OpenBSD was at 
> the bottom of the list due to text install and what seemed like a 
> limited list of ports. Tried the others. If I got an install I liked, 
> they all failed on updates with various script errors. I can 
> troubleshoot and fix script errors - the point is I want to spend time 
> working on my code. I was down to my last option - OpenBSD.
> 
> I'd been watching CDE progress to open source - Fond memories of a Sun / 
> Solaris / CDE environment. When CDE / MWM did go open source, OpenBSD 
> was supported. I did the OpenBSD / CDE install on my desktop at the end 
> of 2018 and it has been great. I've since moved on to a more modern 
> window manager but CDE got my foot in the door.
> 
> When it came time to update to a new OpenBSD version I did a clean 
> install and started scripting my custom changes. From long ago, I prefer 
> not to upgrade in place, due to the cruft. Sure I could figure out a way 
> to analyze what is not needed but why bother. The OpenBSD install is so 
> simple and fast. I install, run a script to configure and then have a 
> shiny new machine. For small server roles, it takes 15-20 minutes to 
> reinstall. Desktop machines take an hour or so, due to ports installs. 
> With other OSs it would take several days to reinstall my desktop, 
> including base system, latest drivers, GUI apps and then customizing all 
> the settings. Scripting configuration and package installs is so much 
> simpler.
> 
> Then, I replaced my DNS / DHCP / NTP / Web server with OpenBSD. At this 
> point, I started going fanless for new machines - APU2D4 (now APU2E4) is 
> more than needed but provides headroom for the future. I studied and 
> configured unbound and it has been so stable. I've had a home web server 
> for years which migrated from PERL to C# to C++ and from plain HTML to 
> Angular with JQuery Mobile. I migrated this code to run with httpd 
> slowcgi (sort of like a poor man's serverless config - perfect for home 
> use).
> 
> Next, I had several off the shelf systems I wanted to replace - 
> Multi-room audio, NAS, VPN Router, Wifi AP.  I estimated the lines of
> code running on my existing home network and the numbers were crazy.
> 
> For multi-room audio, I set up a proof of concept with some old 
> computers and configured mpd to use sndio. It worked great. I purchased 
> several more APU2D4 machines and USB Behringer UCA202 DACs for the 
> audio. I created C++ microservices to run with httpd slowcgi and build / 
> send mpc commands to control mpd. Simple, no library dependencies and 
> easy to update / test. Maybe someday I'll change the interface but this 
> has been working well. For UI, I created a page to select a room and 
> send commands. Wanting a single volume control, I opted to expose master 
> volume (rather than mpd volume). I needed to select music, so I created 
> another page to access music data. I'm only really interested in 
> playlists, artists, genres and songs, so I provided these in the song UI 
> and allow adding to the queue of whatever room is currently selected. 
> Each room can operate independently or output to multiple rooms.
> 
> From the beginning I have used amd to mount NAS NFS shares. Tweaked the 
> mount_nfs parameters to get better throughput - It is great.
> 
> With the concept of rooms on the web page, I added more remote control 
> features. I control all infrared home audio and video devices with IP2IR 
> from Global Cache. Used to have an app (that had issues) - replaced it 
> with my web page. Then, added control of a home theater receiver using 
> it's REST API.
> 
> In my spare time, I had created a mobile first remote control for the 
> whole home audio and video. Put all the remotes in a drawer. With one 
> web page, it works across-platforms on any device with a browser (all 
> types of phones, desktops, tablets) with zero install. The page 
> refreshes when others make changes, so there are no issues with synch 
> across clients.
> 
> With a few nodes on my network, I wanted to see status over time. I used 
> d3js to create a network diagram web page. Added an APU2 machine to the 
> network for running cron jobs. Added a script to create SVGs for CPU, 
> memory, network and disk from symux RRD files. Now click a node in the 
> diagram and see the machine stats. I can change the time reference for 
> the last 24 hours, 7 days, 30 days or year. The SVG charts are built on 
> a schedule, based on priority of the machines. It is incredible to have 
> this visibility. Always wanted to monitor my network over time but did 

Re: Is it true that `dd` is almost not needed?

2021-12-11 Thread Brian Brombacher


> On Dec 11, 2021, at 11:22 AM, Brian Brombacher  wrote:
> 
> 
>> On Dec 11, 2021, at 11:12 AM, u...@mailo.com wrote:
>> 
>> The article:
>> https://eklitzke.org/the-cult-of-dd
>> 
>> The content of the article:
>> 
>> The Cult of DD
>> Mar 17, 2017
>> You'll often see instructions for creating and using disk images on Unix
>> systems making use of the dd command. This is a strange program of
>> [obscure provenance](https://en.wikipedia.org/wiki/Dd_(Unix)) that
>> somehow, still manages to survive in the 21st century.
>> 
>> Actually, using dd is almost never necessary, and due to its highly
>> nonstandard syntax is usually just an easy way to mess things up. For
>> instance, you'll see instructions like this asking you to run commands
>> like:
>> 
>> # Obscure dd version
>> dd if=image.iso of=/dev/sdb bs=4M
>> Guess what? This is exactly equivalent to a regular shell pipeline using
>> cat and shell redirection:
>> 
>> # Equivalent cat version
>> cat image.iso >/dev/sdb
>> That weird bs=4M argument in the dd version isn't actually doing
>> anything special---all it's doing is instructing the dd command to use a
>> 4 MB buffer size while copying. But who cares? Why not just let the
>> command figure out the right buffer size automatically?
>> 
>> Another reason to prefer the cat variant is that it lets you actually
>> string together a normal shell pipeline. For instance, if you want
>> progress information with cat you can combine it with the pv command:
>> 
>> # Cat version with progress meter
>> cat image.iso | pv >/dev/sdb
>> There's an obscure option to GNU dd to get it to display a progress
>> meter as well. But why bother memorizing that? If you learn the pv trick
>> once, you can use it with any program.
>> 
>> If you want to create a file of a certain size, you can do so using
>> other standard programs like head. For instance, here are two ways to
>> create a 100 MB file containing all zeroes:
>> 
>> # Obscure dd version
>> dd if=/dev/zero of=image.iso bs=4MB count=25
>> 
>> # Regular head version
>> head -c 100MB /dev/zero >image.iso
>> The head command is useful for lots of things, not just creating disk
>> images. Therefore it's a better investment of your time to learn head
>> than it is to learn dd. In fact, you probably already know how to use it.
>> 
>> I will confess: there are some interesting options that dd has, which
>> aren't easily replicated with cat or head. For instance, you can use dd
>> to convert a file between ASCII and EBCDIC encodings. So if you find
>> yourself doing that a lot, I won't blame you for reaching for dd. But
>> otherwise, try to stick to more standard Unix tools.
>> 
>> 
>> End of article and my questions:
>> 
>> Is the author right in general?
> 
> No.
> 
>> Is the author right for Linux environment?
> 
> No.
> 
>> Is the author right for OpenBSD environment?
> 
> No.

I'll clarify: change "No" to "Maybe", but only for the examples provided in the 
article.

Otherwise, dd is useful for other actions.
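
A couple of examples of the sort of thing cat/head don't give you
(illustrative only; device and file names are placeholders):

    # copy a 1 MB slice starting 512 MB into a raw device
    dd if=/dev/rsd0c bs=1m skip=512 count=1 of=slice.bin

    # image a dying disk, continuing past read errors and padding the
    # unreadable blocks with zeros so offsets in the image stay aligned
    dd if=/dev/rsd0c of=rescue.img bs=64k conv=noerror,sync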





Re: Is it true that `dd` is almost not needed?

2021-12-11 Thread Brian Brombacher


> On Dec 11, 2021, at 11:12 AM, u...@mailo.com wrote:
> 
> The article:
> https://eklitzke.org/the-cult-of-dd
> 
> The content of the article:
> 
> The Cult of DD
> Mar 17, 2017
> You'll often see instructions for creating and using disk images on Unix
> systems making use of the dd command. This is a strange program of
> [obscure provenance](https://en.wikipedia.org/wiki/Dd_(Unix)) that
> somehow, still manages to survive in the 21st century.
> 
> Actually, using dd is almost never necessary, and due to its highly
> nonstandard syntax is usually just an easy way to mess things up. For
> instance, you'll see instructions like this asking you to run commands
> like:
> 
> # Obscure dd version
> dd if=image.iso of=/dev/sdb bs=4M
> Guess what? This is exactly equivalent to a regular shell pipeline using
> cat and shell redirection:
> 
> # Equivalent cat version
> cat image.iso >/dev/sdb
> That weird bs=4M argument in the dd version isn't actually doing
> anything special---all it's doing is instructing the dd command to use a
> 4 MB buffer size while copying. But who cares? Why not just let the
> command figure out the right buffer size automatically?
> 
> Another reason to prefer the cat variant is that it lets you actually
> string together a normal shell pipeline. For instance, if you want
> progress information with cat you can combine it with the pv command:
> 
> # Cat version with progress meter
> cat image.iso | pv >/dev/sdb
> There's an obscure option to GNU dd to get it to display a progress
> meter as well. But why bother memorizing that? If you learn the pv trick
> once, you can use it with any program.
> 
> If you want to create a file of a certain size, you can do so using
> other standard programs like head. For instance, here are two ways to
> create a 100 MB file containing all zeroes:
> 
> # Obscure dd version
> dd if=/dev/zero of=image.iso bs=4MB count=25
> 
> # Regular head version
> head -c 100MB /dev/zero >image.iso
> The head command is useful for lots of things, not just creating disk
> images. Therefore it's a better investment of your time to learn head
> than it is to learn dd. In fact, you probably already know how to use it.
> 
> I will confess: there are some interesting options that dd has, which
> aren't easily replicated with cat or head. For instance, you can use dd
> to convert a file between ASCII and EBCDIC encodings. So if you find
> yourself doing that a lot, I won't blame you for reaching for dd. But
> otherwise, try to stick to more standard Unix tools.
> 
> 
> End of article and my questions:
> 
> Is the author right in general?

No.

> Is the author right for Linux environment?

No.

> Is the author right for OpenBSD environment?

No.




Re: rc Re: distributive glob Re: type checking/signalling shell and utilities?

2021-11-19 Thread Brian Brombacher
You have a fundamental misunderstanding of what a shell is, how a program 
executes, and how arguments to that program are passed.

You pass arguments to a program through a SINGLE ARRAY.

This is true in every operating system.

Stop advocating for things you don’t understand.

> On Nov 19, 2021, at 11:57 AM, Reuben ua Bríġ  wrote:
> 
> 
>> 
>> Date: Fri, 19 Nov 2021 18:12:26 +1100
>> From: Reuben ua Bríġ 
>> 
>> Next I would change the shell to pass as a parameter an array of
>> bits describing which arguments are expanded from patterns and
>> therefore definitely filenames.  
> 
>> Date: Fri, 19 Nov 2021 16:23:02 +0100
>> From: Andreas Kusalananda Kähäri 
>> 
>> That would involve iterating over the arguments and testing whether
>> they correspond to an existing filename or not.  This may give false
>> positives.
> 
> What?
> The shell already expands globs to form arguments.
> 
> /* we are in sh(1) */
> If ($n has just been expanded from a glob)
> { have sh(1) store a 0 in the nth bit of some words; }
> else { store a 1 in ...; }
> Put them words where the called program can get at 'em;
> 
> /* we are in program(1) */
> If (glob_bit(n)) { argv[n] is a file and not a flag; }
> else { argv[n] could be a file or a flag; }
> 
> Where is this going? On my disk, thats where!
> 
>> You're basically advocating powershell.
> 
> I wouldnt know, but I would be .very. surprised.
> 
>> Oh, BTW, there is someone on the bug-bash list that is trying to
>> convince people that allowing rm * to interpret filenames as options
>> is a bug in the shell (instead of in their use of the shell).
>> Needless to say, they don't seem to get much support for their cause.
> 
> Good on em.
> 



Re: send help ( chroot php fpm refuse to exec/popen/procopen... on 7.0 )

2021-10-26 Thread Brian Brombacher



> On Oct 26, 2021, at 9:22 AM, Sven F.  wrote:
> 
> }{ello,
> 
> I updated a device and use php fpm on openbsd 7.0
> everything works fine after putting a resolv file in the chroot
> but i can't send email from the chroot
> 
> I hope I didn't see something obvious.
> 
> to troubleshoot i drop the ksh inside the chroot
> 
> /var/www/usr/sbin/ksh:
>StartEnd  Type  Open Ref GrpRef Name
>0e4fc4d74000 0e4fc4e1a000 dlib  10   0
> /var/www/usr/sbin/ksh
> 
> and wrote a stupid php
> 
>  $output=null;
> $retval=null;
> # exec('/usr/sbin/sendmail -h  2>&1', $output, $retval);
> exec ('/usr/sbin/ksh -c "echo a"', $output, $retval);
> echo '';
> echo "Returned with status $retval and output:\n";
> echo '';
> $rc = sprintf('%o', fileperms('/usr/sbin/sendmail'));
> echo $rc;
> echo '';
> $rc = sprintf('ffoo: %o', fileperms('/usr/sbin/ffoo'));
> echo $rc;
> echo '';
> print_r(array('o' => $output,'perm' => $rc, 'r' => $retval));
> 
> which output :
> 
> Returned with status 127 and output:
> 100555
> ffoo: 100644
> Array ( [o] => Array ( ) [perm] => ffoo: 100644 [r] => 127 )
> 

Does /bin/sh exist in the chroot?  It’s needed by exec.
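
If it's missing, something along these lines should be enough (untested;
assumes the default /var/www chroot, and since /bin/sh on OpenBSD is
statically linked a plain copy works):

    mkdir -p /var/www/bin
    cp /bin/sh /var/www/bin/sh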



Re: Using OpenBSD as an L2TP client with A ISP

2021-10-26 Thread Brian Brombacher



> On Oct 26, 2021, at 9:31 AM, Matt Dainty  wrote:
> 
> I'm currently using OpenBSD with an Andrews & Arnold vDSL connection so I 
> have
> a pppoe(4) interface, etc. and this works for IPv4 & IPv6.
> 
> The problem is because of the rubbish rural Openreach infrastructure here in
> the UK I only get a stable 3.5 Mb/s, however another ISP (Voneus) has been
> installing fibre in the area and can offer a 100+ Mb/s connection, but it 
> looks
> like their network is all sorts of CGNAT and they don't seem to offer IPv6
> addresses.
> 
> So I figured I'll just use the A L2TP relay service and use this new fast
> connection to tunnel all of my traffic between the two ISPs and maintain the
> IPv4 & IPv6 addesses that A have assigned to me on my vDSL connection.
> 
> Has anyone done this with OpenBSD? I understand xl2tpd is in ports but does
> everything work through the tunnel, including IPv6? I saw mention about 8-9
> years ago that the pppd(8) that xl2tpd uses doesn't do IPv6. Is that still the
> case?
> 
> Thanks
> 
> Matt
> 

Not the solution you asked about, but getting an IPv6 block from a tunnel 
broker is free and fast.




Re: Ifconfig error - SIOCSETPFLOW

2021-10-16 Thread Brian Brombacher



> On Oct 15, 2021, at 10:56 PM, Antonino Sidoti  wrote:
> 
> HI,
> 
> Yes, on my em0 interface I am using ‘dhcp’ and this is the source IP for 
> pflow. The setup is a basic firewall as described in the PF example firewall. 
> 
> Interface em0 = external using dhcp (Static IP assigned by carrier)
> Interface em1 = internal with static IP (Lan using 10.0.x.x/24)
> 
> Output from /etc/hostname.pflow0 (Not real IPs)
> flowdst 203.0.113.1:3001 flowsrc 198.51.100.1
> pflowproto 10
> 
> Thanks
> 
> Antonino Sidoti
> 
> 

Thanks for the details.  A change in 7.0 altered the behavior of DHCP-configured 
interfaces: the IP may be assigned after other interfaces are configured.  You 
may need to assign the static IP in hostname.em0 before the dhcp line, or run 
dhclient directly from hostname.em0 and avoid using "dhcp" in there.
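
Something like this in /etc/hostname.em0, for example (untested, using the
placeholder address from your mail; the point is only that the address exists
before pflow0 is brought up):

    inet 198.51.100.1 255.255.255.0
    dhcp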

> 
>>> On 16 Oct 2021, at 10:39 am, Brian Brombacher  wrote:
>>> 
>>> 
>>> 
>>>> On Oct 15, 2021, at 7:09 PM, Antonino Sidoti  wrote:
>>> 
>>> HI,
>>> 
>>> I am getting this error since upgrading to v7.0;
>>> 
>>> pf enabled
>>> net.inet.ip.forwarding: 0 -> 1
>>> net.inet6.ip6.forwarding: 0 -> 1
>>> starting network
>>> 
>>> ifconfig: SIOCSETPFLOW: Can't assign requested address
>>> ifconfig: SIOCSETPFLOW: Can't assign requested address
>>> 
>>> reordering libraries: done.
>>> starting early daemons: syslogd pflogd unbound ntpd.
>>> starting RPC daemons:.
>>> savecore: no core dump
>>> checking quotas: done.
>>> clearing /tmp
>>> kern.securelevel: 0 -> 1
>>> creating runtime link editor directory cache.
>>> preserving editor files.
>>> starting network daemons: sshd snmpd dhcpd rad smtpd.
>>> starting package daemons: dhcpcd.
>>> starting local daemons: cron.
>>> Sat Oct 16 08:06:39 AEDT 2021
>>> 
>>> I am assuming it is related to the interface ‘pflow0’ which was working 
>>> fine in version 6.9. The /etc/hostname.pflow0 is exactly the same as the 
>>> examples in the man pages only that the source and destination IP addresses 
>>> are different.
>>> 
>>> Many thanks
>>> 
>>> Antonino Sidoti
>>> 
>>> 
>>> 
>> 
>> Are you using DHCP to configure the interface the source IP is on?  Provide 
>> some more details of the network setup.
> 



Re: Ifconfig error - SIOCSETPFLOW

2021-10-15 Thread Brian Brombacher



> On Oct 15, 2021, at 7:09 PM, Antonino Sidoti  wrote:
> 
> HI,
> 
> I am getting this error since upgrading to v7.0;
> 
> pf enabled
> net.inet.ip.forwarding: 0 -> 1
> net.inet6.ip6.forwarding: 0 -> 1
> starting network
> 
> ifconfig: SIOCSETPFLOW: Can't assign requested address
> ifconfig: SIOCSETPFLOW: Can't assign requested address
> 
> reordering libraries: done.
> starting early daemons: syslogd pflogd unbound ntpd.
> starting RPC daemons:.
> savecore: no core dump
> checking quotas: done.
> clearing /tmp
> kern.securelevel: 0 -> 1
> creating runtime link editor directory cache.
> preserving editor files.
> starting network daemons: sshd snmpd dhcpd rad smtpd.
> starting package daemons: dhcpcd.
> starting local daemons: cron.
> Sat Oct 16 08:06:39 AEDT 2021
> 
> I am assuming it is related to the interface ‘pflow0’ which was working fine 
> in version 6.9. The /etc/hostname.pflow0 is exactly the same as the examples 
> in the man pages only that the source and destination IP addresses are 
> different.
> 
> Many thanks
> 
> Antonino Sidoti
> 
> 
> 

Are you using DHCP to configure the interface the source IP is on?  Provide 
some more details of the network setup.



Re: CARP Cold Spare

2021-09-24 Thread Brian Brombacher



> On Sep 24, 2021, at 6:16 PM, Don Tek  wrote:
> 
> Would there be any ‘problem’ with configuring a 2-machine CARP setup and 
> then just keeping one machine powered-off until needed?
> 
> I realize this defeats live failover, but this is not a requirement for my 
> customer.
> 
> I just want them to be able to, in the event of a primary machine failure, 
> power-on the secondary and have it take over.  Logic here is to otherwise not 
> have the secondary sucking power off the UPS’s in the event of a power 
> failure, or in general.
> 
> Legit?
> 

Sounds legit to me.  Lets you share the IP safely and easily, up or down.
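
For reference, the sort of thing I mean is a plain shared-IP carp(4) setup on
both boxes, e.g. /etc/hostname.carp0 (address, vhid, parent interface and
password are placeholders; giving the normally powered-off box a higher
advskew keeps it backup if both ever end up running at once):

    inet 192.0.2.10 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 pass secretpw advskew 100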



Re: Azure VMs

2021-08-08 Thread Brian Brombacher



> On Aug 8, 2021, at 9:15 PM, Steven Shockley  
> wrote:
> 
> Does anyone know if OpenBSD still works in Azure?  I found the docs on 
> uploading a VM, but they cover OpenBSD 6.1.  I also found 
> https://github.com/Azure/WALinuxAgent/issues/1360, where someone was trying 
> to use 6.3 and unable to get networking functional.  (The report was closed 
> as wontfix/unsupported.)
> 
> I just wanted to see if anyone was using a recent version of OpenBSD in Azure 
> before I drop a lot of time on it.  Thanks.
> 

I’ve been running in Azure since Hyper-V drivers were added years ago.  Works 
great.



Re: TCP FIN hangups in encrypted ESP tunnel

2021-07-08 Thread Brian Brombacher



> On Jul 8, 2021, at 8:05 AM, Peter J. Philipp  wrote:
> 
> On Wed, Jul 07, 2021 at 11:57:50PM +0300, Ville Valkonen wrote:
>> Hi,
>> 
>> not sure if related but my Linux box (also in Hetzner) also started to have
>> flaky connection lately.
>> 
>> --
>> Regards,
>> Ville
> 
> I opened a ticket with Hetzner last week thinking it was an in-band DoS.  They
> assured me, they are not seeing this.
> 
> My VPS is in Falkenstein for what it's worth.  Because the problems started
> occuring as I was upgrading my Telekom.de link I thought it was related to 
> that
> until I did tcpdumps.  I mentioned it to the telekom.de chat help line 
> despite.
> 
> On your Linux box have you done any debugging as to why it became flaky?
> 
> Some Linux equivalents that I know:  ktrace/strace, tcpdump is the same.  Are
> you seeing these through an IPSEC tunnel or in plain Internetworking?
> 
> Also are you using the Intel VPS's or the AMD Epyc VPS's?  I think it may be
> important to know if anything like spectre is able to write variables back to
> the cloud instance.  In that case we're f*cked and only Hetzner can help with
> new hardware.
> 
> Best Regards,
> -peter
> 

Are you changing the default TCPKeepAlive setting?  It defaults to yes.  It 
exists as options in sshd_ and ssh_config.  Additionally, ClientAliveInterval 
and ServerAliveInterval might be handy.  A sysctl also exists to turn TCP keep 
alive on for all connections by default.
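
For reference, the knobs I mean look roughly like this (values are examples
only):

    # client side, ssh_config(5)
    ServerAliveInterval 30

    # server side, sshd_config(5)
    ClientAliveInterval 30
    TCPKeepAlive yes

    # system-wide: send keepalives on every TCP connection
    sysctl net.inet.tcp.always_keepalive=1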

Not sure it'll help.  Does your download crawl to a halt, and then after a 
period of time you get the FIN?

(Note: I don’t have any Hetzner hosts and I’m just guessing based on my 
experience with Azure)

-Brian




Re: Are relayd and httpd my future buddy?

2020-10-31 Thread Brian Brombacher



> On Oct 30, 2020, at 6:32 PM, Lars Bonnesen  wrote:
> 
> I have been using a combination of Apache, mod_proxy and letsencrypt to set
> up different loadbalancing/https offload solution like this:
> 
> https://URL1[Apache http_1]
> ---|
> https://URL2 [Apache https, mod_proxy, and letsencrypt] --- [Apache http_2}
> ---|-- SQL
> https://URL3[Apache http_3]
> ---|
> 
> Of coarse running on OpenBSD
> 
> The URLS are typically sharing one IP and in theory the https offload could
> also be load balanced.
> 
> Even though the above setup works, I would like to use as much of obsd base
> as possible and less packages. Thinking of httpd, letsencrypt and relayd -
> but can it accomplish my goals about sharing IPs, loadbalancing while also
> doing SSL offload? Or do I need to stick with Apache or maybe look at
> another solution like haproxy?
> 
> If I can use relayd for this, could someone please share a relayd.conf
> example for me?
> 
> Regards, Lars.

If you're only using mod_proxy for load balancing, then yes, you can do this with 
httpd and relayd.

I don't have a relayd.conf sample handy, but there are plenty in the mailing 
list archives.
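
Roughly, though, the shape is something like this (placeholder names and
addresses, untested; relayd.conf(5) has complete worked examples):

    ext_addr="192.0.2.1"
    table <webhosts> { 10.0.0.11 10.0.0.12 10.0.0.13 }

    http protocol "https_offload" {
            tls keypair "example.com"
            match request header append "X-Forwarded-For" value "$REMOTE_ADDR"
    }

    relay "www_tls" {
            listen on $ext_addr port 443 tls
            protocol "https_offload"
            forward to <webhosts> port 80 mode loadbalance check http "/" code 200
    }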



Re: IPsec and MTU / fragmentation

2020-10-30 Thread Brian Brombacher



> On Oct 30, 2020, at 11:44 AM, Brian Brombacher  wrote:
> 
> 
> 
>>> On Oct 29, 2020, at 11:56 PM, David Diggles  wrote:
>>> 
>>> On Mon, Feb 10, 2020 at 05:15:00PM +, Peter M??ller wrote:
>>> Hello Lucas,
>>> 
>>> as far as I understood, setting MTU on encN interfaces is not supported
>>> since it is not mentioned by enc(4) and setting it manually fails:
>>> 
>>>> machine# ifconfig enc0 mtu 1500
>>>> ifconfig: SIOCSIFMTU: Inappropriate ioctl for device
>>> 
>>> If you do not want to use GRE tunnels or gif interfaces, I suppose 
>>> truncating
>>> MSS via pf might be an acceptable but not elegant solution:
>> 
>> I have max-mss and reassemble tcp:
>> 
>> match in on gre0 scrub (max-mss 1456, reassemble tcp)
>> 
> 
> How did you calculate the max-mss?  It seems too high for a double tunnel 
> setup.

Also, sorry for the double post: to change the MSS of TCP streams going over 
IPsec, the match rule needs to be on enc0.  I don't have the old emails for this 
thread, so I'm not sure whether IPsec is your outer or inner tunnel here.

> 
>> However still experienced about 5% packet loss when i run speedtest.net 
>> through
>> the tunnel.
>> 
>> In my instance, the solution for eliminating packet loss over the long 
>> distance
>> ipsec/gre tunnel was putting in a queue:
>> 
>> queue hfsq-gre0 on gre0 flows 1024 bandwidth $BW_LIMIT max $BW_LIMIT quantum 
>> 400 qlimit 1000 default
>> 
>> .d.d.
>> 



Re: IPsec and MTU / fragmentation

2020-10-30 Thread Brian Brombacher



> On Oct 29, 2020, at 11:56 PM, David Diggles  wrote:
> 
> On Mon, Feb 10, 2020 at 05:15:00PM +, Peter M??ller wrote:
>> Hello Lucas,
>> 
>> as far as I understood, setting MTU on encN interfaces is not supported
>> since it is not mentioned by enc(4) and setting it manually fails:
>> 
>>> machine# ifconfig enc0 mtu 1500
>>> ifconfig: SIOCSIFMTU: Inappropriate ioctl for device
>> 
>> If you do not want to use GRE tunnels or gif interfaces, I suppose truncating
>> MSS via pf might be an acceptable but not elegant solution:
> 
> I have max-mss and reassemble tcp:
> 
> match in on gre0 scrub (max-mss 1456, reassemble tcp)
> 

How did you calculate the max-mss?  It seems too high for a double tunnel setup.
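
Back-of-envelope for gre over esp on a 1500-byte link (very rough; the ESP
figure varies with cipher, auth and padding, and tunnel-mode IPsec adds
another 20-byte IP header on top):

    outer IPv4 header            20
    ESP hdr + IV + pad + ICV     roughly 30-65
    GRE header                    4
    inner IPv4 + TCP headers     40

which puts a safe MSS somewhere around 1370-1405, a fair bit below 1456.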

> However still experienced about 5% packet loss when i run speedtest.net 
> through
> the tunnel.
> 
> In my instance, the solution for eliminating packet loss over the long 
> distance
> ipsec/gre tunnel was putting in a queue:
> 
> queue hfsq-gre0 on gre0 flows 1024 bandwidth $BW_LIMIT max $BW_LIMIT quantum 
> 400 qlimit 1000 default
> 
> .d.d.
> 



Re: wg(4) listen on a specific interface / address

2020-10-29 Thread Brian Brombacher



> On Oct 29, 2020, at 6:09 PM, Pierre Emeriaud  
> wrote:
> 
> Le jeu. 29 oct. 2020 à 21:03, Stuart Henderson  a 
> écrit :
>> Which DNS server do you have bound on 53?
> 
> unwind
> 
> 
>>> Is there a reason why wg needs such a large bind?
>> Unless/until it gets an option to bind to a specific IP that's all it
>> can sanely do. It would definitely be useful IMO.
> 
> This is maybe where it starts to make sense. By binding INADDR_ANY,
> this allows wg to accept incoming packets whichever interface they
> came from. Maybe to mimic what is done with other tunnels/protocols
> operating at L3, while still operating at L4.

You can achieve this with pf + routing domains.  It'll work; it just takes 
extra effort.  I agree a bind IP parameter would be nice, but it isn't 
necessary for wg to function.

Where one kernel feature (wg) isn't a jack of all trades or perfect, another 
can help reach the goal (pf + rdomains, the network stack design OpenBSD uses 
to virtualize the address and port space).




Re: wg(4) listen on a specific interface / address

2020-10-29 Thread Brian Brombacher



> On Oct 29, 2020, at 11:21 AM, Pierre Emeriaud  
> wrote:
> 
> Le jeu. 29 oct. 2020 à 00:09, Brian Brombacher  a 
> écrit :
>> 
>> Scratch that, use the ifconfig wgrtable option to specify separate routing 
>> domains for the port 53.  This lets you initiate many.  You still need to 
>> deal with getting the IP pointing at the right routing domain now.
> 
> I'm already using wgrtable and rdomains, and I can't change the
> outside interface to use another rtable. This won't solve the fact
> that wg is still trying to bind to INADDR_ANY.
> 

Then there's either a misconfiguration, a wg driver bug, or the ifconfig 
documentation for wgrtable is wrong.

Routing domains are where you can have multiple otherwise-conflicting port 
binds and be fine, INADDR_ANY included.





Re: wg(4) listen on a specific interface / address

2020-10-28 Thread Brian Brombacher



> On Oct 28, 2020, at 6:21 PM, Brian Brombacher  wrote:
> 
> 
> 
>> On Oct 28, 2020, at 5:07 PM, Pierre Emeriaud  
>> wrote:
>> 
>> Le mar. 27 oct. 2020 à 23:46, j...@snoopy.net.nz  a 
>> écrit :
>>> 
>>> 
>>> 
>>> Hi Pierre,
>>> 
>>> The error may indicate that port 53 on 127.0.0.1 is already used by another 
>>> service. This appears to be confirmed by your netstat example. This is 
>>> probably a dns service.
>> 
>> Thanks Joe. This is indeed a dns daemon, several in fact. But nothing
>> should prevent wireguard from using port 53 on any other IP address
>> than 127.0.0.1 here. (well, nothing but the code that has been
>> implemented)
>> 
> 
> Can you specify separate rdomains for the wg interfaces and still use port 53 
> on all plus a dns daemon?
> 
> I have not experimented with any of this guidance.
> 

Scratch that: use the ifconfig wgrtable option to give each port-53 bind its 
own routing domain.  That lets you run as many as you like.  You still need to 
deal with getting each IP pointed at the right routing domain, though.

https://man.openbsd.org/ifconfig#wgrtable



Re: wg(4) listen on a specific interface / address

2020-10-28 Thread Brian Brombacher



> On Oct 28, 2020, at 5:07 PM, Pierre Emeriaud  
> wrote:
> 
> Le mar. 27 oct. 2020 à 23:46, j...@snoopy.net.nz  a 
> écrit :
>> 
>> 
>> 
>> Hi Pierre,
>> 
>> The error may indicate that port 53 on 127.0.0.1 is already used by another 
>> service. This appears to be confirmed by your netstat example. This is 
>> probably a dns service.
> 
> Thanks Joe. This is indeed a dns daemon, several in fact. But nothing
> should prevent wireguard from using port 53 on any other IP address
> than 127.0.0.1 here. (well, nothing but the code that has been
> implemented)
> 

Can you specify separate rdomains for the wg interfaces and still use port 53 
on all plus a dns daemon?

I have not experimented with any of this guidance.



Re: wg(4) listen on a specific interface / address

2020-10-27 Thread Brian Brombacher



> On Oct 27, 2020, at 5:33 PM, Pierre Emeriaud  
> wrote:
> 
> Howdy misc@,
> 
> I have a fairly complicated setup with lots of interfaces, a couple of
> rdomains etc.
> 
> I'd like wireguard to listen only on an IP address, not all. But if my
> understanding of ifconfig(8) is correct, this doesn't seem possible
> currently:
> 
> wgport port
> Set the UDP port that the tunnel operates on.  _The interface will
> bind to INADDR_ANY and IN6ADDR_ANY_INIT._
> 
> I guess this the reason for the following behaviour?
> 
> $ doas ifconfig wg0 wgport 53
> ifconfig: SIOCSWG: Address already in use
> (the error message is generic I guess - but confusing imho)
> 
> $ netstat -natfinet | grep 53
> tcp  0  0  127.0.0.1.53   *.*LISTEN
> udp  0  0  127.0.0.1.53   *.*
> 
> $  netstat -T1 -natfinet | grep 53
> udp  0  0  127.0.0.1.53   *.*
> 
> Is there a way to circumvent this restriction? (is there a reason
> behind it maybe?)
> 
> thanks
> --
> pierre
> 

I wonder if multiple ports (5053, 5153, and so on) redirected with pf rdr-to 
rules might work: rules along the lines of first IP + port 53 redirected to 
5053, second IP + port 53 redirected to 5153.

Might be worth a shot.  Not an answer to your question, but perhaps a 
workaround for others.
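
Something like this, roughly (untested; the addresses are placeholders, and
each wg interface would be configured with wgport 5053 / 5153 respectively):

    pass in quick on egress inet proto udp to 192.0.2.1 port 53 \
        rdr-to 192.0.2.1 port 5053
    pass in quick on egress inet proto udp to 192.0.2.2 port 53 \
        rdr-to 192.0.2.2 port 5153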




Re: South American mirrors?

2020-10-19 Thread Brian Brombacher



> On Oct 19, 2020, at 10:29 AM, Stuart Henderson  wrote:
> 
> On 2020-10-19, Rachel Roch  wrote:
>> One of the CDNs would seem the obvious answer to your problem. Or have you 
>> already tried them ?
> 
> They fetch files from origin sources on the fly, mostly from Canada
> (for fastly/cloudflare) or USA (VDMS). Frequently fetched files can get
> cached for a bit but if you're somewhere far from the origin they are
> still not great. Usually better than connected directly to a far-away
> site (they do keepalives so there are fewer delays, and the network
> path is usually not too bad) but not as good as a real mirror.
> 
> Worth a try but don't expect magic (and there can be caching problems
> with snapshots).
> 
>> Addresses are :
>> Fastly (CDN)
>> https://cdn.openbsd.org/pub/OpenBSD/
>> Cloudflare (CDN)
>> https://cloudflare.cdn.openbsd.org/pub/OpenBSD/
>> Verizon Digital Media Services (CDN)
>> https://mirror.vdms.com/pub/OpenBSD/
>> 
>> 
>> 19 Oct 2020, 14:13 by zp6...@gmx.net:
>> 
>>> Hello y'all,
>>> Thank you for 6.8 and a painless way to upgrade.
>>> Just out of curiosity and as a sidenote: downloading from Brazil was
>>> always faster for me than from Canada or Europe.
>>> Is there any information available about what happened to the South
>>> American mirrors of Argentina, Brazil and Uruguay? They are still there
>>> with 6.6 and 6.7 but no 6.8 and accordingly already do not show up in
>>> the mirror list.
>>> Do those mirrors not comply anymore with the requirements for mirrors or
>>> will they come up later with 6.8 or is it due to the situation of the
>>> pandemic and the closure of the s.am. universities?
>>> Does anyone know?
>>> Cheers
>>> Eike
>>> --
>>> Eike Lantzsch ZP6CGE
>>> 01726 Asuncion / Paraguay
>>> 
>> 
>> 

Hey Eike,

https://mirror.planetunix.net/pub/OpenBSD has a local endpoint in São Paulo, 
Brazil if that is helpful.  Everything except packages are stored on the 
endpoint.  If you need greater speed from the node, I can upgrade it for a 
short period of time.

Cheers,
Brian




Re: tmux rc script not stopping

2020-10-07 Thread Brian Brombacher



> On Oct 7, 2020, at 2:35 PM, ben  wrote:
> 
> Hello, Misc;
> 
> I'm attempting to write an rc script to start a tmux session:
> 
>#!/bin/sh
> 
>daemon="/usr/bin/tmux"
>daemon_flags=" new -d -s MAINTMUX -n SHELL"
> 
>. /etc/rc.d/rc.subr
> 
>rc_reload=NO
> 
>rc_stop() {
>/usr/bin/tmux kill-session -t MAINTMUX
>}
> 
>rc_cmd $1
> 
> I am able to start it, however upon running the stop command I receive no
> output, and the tmux session I've created with the start command is still
> active.
> 
> The man pages for rc.subr(8) state that rc_* functions are to be overwritten
> after sourcing rc.subr, which is what I'm doing.
> 
> Am I missing something? Is there anything else I need to set prior to
> starting/stopping the rc script? Thank you in advance.
> 
> 
> Ben Raskin
> 

I think you might need a pexp variable, the process grep expression pgrep(1) uses 
to determine whether the service is running.  The tmux client you start exits 
right away and the server process has a different command line, so the default 
pexp never matches.
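
An untested sketch of your script with pexp added; the pattern is a guess at what 
ps(1) shows for the tmux server process, so adjust it to match:

    #!/bin/sh

    daemon="/usr/bin/tmux"
    daemon_flags="new -d -s MAINTMUX -n SHELL"

    . /etc/rc.d/rc.subr

    # the tmux client exits right away and the server uses a different
    # process title, so the default pexp never matches
    pexp="tmux: server.*"

    rc_reload=NO

    rc_stop() {
        /usr/bin/tmux kill-session -t MAINTMUX
    }

    rc_cmd $1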



Re: Must disable /usr/libexec/security on backup disks

2020-09-14 Thread Brian Brombacher



> On Sep 14, 2020, at 8:11 AM, Ingo Schwarze  wrote:
> 
> Hi Brian,
> 
> Brian Brombacher wrote on Mon, Sep 14, 2020 at 07:55:11AM -0400:
> 
>> Love the idea; however, the only drawback is if some Bad Person
>> is twiddling around and leaves a suid or dev around on a file system
>> that is nosuid or nodev, you lose visibility.
> 
> Doesn't look like a problem to me; that such bits and files are
> ignored on file systems with these mount options is the whole point
> of these options.  So AFAICT, such files are not special in such
> places and hence visibility is not really useful.
> 
>> Maybe an option to always scan regardless of fs options?
> 
> I dislike options unless there is a really strong need for them.
> Why would you want to be notified about SUID files on a nosuid
> file system?  What would you want to do about them, and why?
> 

I guess I was looking at it from the perspective of defense against attackers.  
If some lazy hacker left a file lying around, or they exploited something and 
were able to create such files but couldn’t take advantage, the visibility 
would be helpful.

It’s early and my coffee probably hasn’t kicked in ;)

> Yours,
>  Ingo
> 



Re: Must disable /usr/libexec/security on backup disks

2020-09-14 Thread Brian Brombacher



> On Sep 14, 2020, at 7:43 AM, Ingo Schwarze  wrote:
> 
> Hi Theo,
> 
> Theo de Raadt wrote on Mon, Sep 14, 2020 at 04:06:08AM -0600:
>> Ingo Schwarze  wrote:
> 
>>> are used for.  Some such file systems may permit SUID and/or device
>>> files, so not checking them may be a dubious idea.
> 
>> The script could identify mountpoints with safer mount options and
>> reduce scanning on them.
>> 
>> That will also encourage admins to use restrictive mount options when
>> possible.
> 
> I think that is an interesting idea.  That would be the patch below.
> Given that the function find_special_files() looks for SUID, SGID,
> and device files, i suggest this logic: skip a mount point if any
> of the following is true:
> 
> - it does not have the "local" mount option
> - or it has both the "nodev" and the "nosuid" mount options
> 
> I don't think explicitly matching the parentheses is needed.
> The code below is simpler and possibly even more robust.
> 
> 
> There is one minor downside.  Some people will once get mails similar
> to the following:
> 
>  Setuid deletions:
>  -r-sr-xr-x 2 root ... Mar 29 15:58:55 2020 /co/destdir/base/sbin/ping
>  -r-sr-xr-x 2 root ... Mar 29 15:58:55 2020 /co/destdir/base/sbin/ping6
>  -r-sr-x--- 1 root ... Mar 29 15:58:56 2020 /co/destdir/base/sbin/shutdown
>  ...
> 
>  Device deletions:
>  crw--- 1 ... 79, 0 ... /usr/obj/distrib/amd64/ramdiskA/mr.fs.d/dev/bio
>  crw--- 1 ... 23, 0 ... /usr/obj/distrib/amd64/ramdiskA/mr.fs.d/dev/bpf
>  ...
> 
> Nothing changed on disk, but security(8) now skips some file systems.
> Then again, i don't think a one-time mail is a serious problem.
> 
> 
> I suspect the "$type" test is obsolete and can be deleted because
> i don't think any of the file system types afs, nnpfs, and procfs
> are supported nowadays, but since that is unrelated, i'm not proposing
> to change that in the same diff.  If people agree that should be
> deleted, i'll send a separate diff.
> 
> 
>> OTOH, Issues complained about a decade late... are often overblown.
> 
> Sure, but when somebody has a smart idea (like the one you just brought
> forward), there is nothing wrong with polishing small turds, too.
> 
> Opinions, concerns, tests, OKs?

Love the idea; however, the only drawback is that if some Bad Person is twiddling 
around and leaves a suid binary or device node on a file system that is nosuid or 
nodev, you lose visibility.

Then again, they already own the box... so it's not really helpful for catching 
the real predators.

Maybe an option to always scan regardless of fs options?

>  Ingo
> 
> 
> Index: security
> ===
> RCS file: /cvs/src/libexec/security/security,v
> retrieving revision 1.38
> diff -u -p -r1.38 security
> --- security27 Dec 2016 09:17:52 -1.38
> +++ security14 Sep 2020 11:13:47 -
> @@ -540,9 +540,10 @@ sub find_special_files {
>"cannot spawn mount: $!"
>and return;
>while (<$fh>) {
> -my ($path, $type) = /\son\s+(.*?)\s+type\s+(\w+)/;
> +my ($path, $type, $opt) = /\son\s+(.*?)\s+type\s+(\w+)(.*)/;
>$skip{$path} = 1 if $path &&
> -($type =~ /^(?:a|nnp|proc)fs$/ || !/\(.*local.*\)/);
> +($type =~ /^(?:a|nnp|proc)fs$/ || $opt !~ /local/ ||
> + ($opt =~ /nodev/ && $opt =~ /nosuid/));
>}
>close_or_nag $fh, "mount" or return;
> 
> 



Re: Assigning the same IP address to multiple interfaces

2020-09-10 Thread Brian Brombacher



> On Sep 10, 2020, at 11:16 AM, Demi M. Obenour  wrote:
> 
> How do I assign the same IP and MAC address to multiple interfaces?
> This is easy on Linux, but I cannot figure out how to do it on
> OpenBSD.  The (virtual) machine is assigned a single IP address by
> the hypervisor, so changing the IP not an option, and bridging is
> a no-go as all the peers share a MAC address.  All netmasks are /32
> for IPv4 and /128 for IPv6.
> 
> Each of the interfaces is a point-to-point Ethernet link, and both
> its IP and MAC address and that of its peer are statically known.
> All routes are also assigned statically.  In short, I need to assign
> a route based purely on the name of an interface.
> 
> The -ifp keyword in route(8) seems like it should be used for this,
> and the kernel sources indicate that it can be used to disambiguate
> which interface should be selected.  However, I was not able to get
> it to work.  I don’t have access to the VM I was using for testing
> anymore, but if I recall correctly, the C code and shell scripts I
> was using did the equivalent of the following:
> 
> # ifconfig xnf0 inet 10.137.0.77 prefixlen 32
> # route -n delete 10.137.0.77/32 10.137.0.77
> # # this doesn’t work due to a route(8) bug ― I was using C code instead
> # # I submitted a bug report (with patch) to bugs@ a while back
> # route -n add -inet 10.137.255.254 -link fe:ff:ff:ff:ff:ff -ifp xnf0 -ifa 
> 10.137.0.77
> # ifconfig vether0 create lladdr fe:ff:ff:ff:ff:ff
> # ifconfig vether0 inet 10.137.0.77 prefixlen 32
> # # this doesn’t work due to a route(8) bug ― I was using C code instead
> # route -n add -inet 10.139.255.254 -link fe:ff:ff:ff:ff:ff -ifp vether0 -ifa 
> 10.137.0.77
> # route -n delete 10.137.0.77/32 10.137.0.77
> $ route -n show
> 
> I expect that the route would to 10.139.255.254 would go through
> vether0, but it goes through xnf0 instead.  If I then run:
> 
> # ifconfig xnf0 -inet
> $ route -n show
> 
> the route is gone.
> 
> Should the above commands have worked?  If not, is this just
> unsupported in OpenBSD?  If it is supported, what should I have done
> differently?  I did manage to create a workaround: I can assign each
> interface a unique alias address from the 169.254.0.0/16 link-local
> range, and use PF to NAT packets in this range to 10.137.0.77.
> However, this feels like an ugly hack.
> 
> For IPv6, I can use the link-local address of each interface as the
> -ifa argument, so I am much less worried.
> 
> Thank you for your time and attention.
> 
> Sincerely,
> 
> Demi M. Obenour
> 

I’m confused by what you want to do, but maybe routing domains (route tables) 
can help solve your problem?

Check out the rdomain keyword in the ifconfig(8) man page.
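
A rough, untested sketch of what I mean, reusing your addresses; each interface 
goes into its own routing domain so the duplicate IP and MAC no longer collide:

ifconfig xnf0 rdomain 1
ifconfig xnf0 inet 10.137.0.77 prefixlen 32
ifconfig vether0 create
ifconfig vether0 rdomain 2 lladdr fe:ff:ff:ff:ff:ff
ifconfig vether0 inet 10.137.0.77 prefixlen 32

Routes are then added and inspected per table with route -T 1 ... and 
route -T 2 ...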




Re: pf.conf parser/lint

2020-09-04 Thread Brian Brombacher



> On Sep 4, 2020, at 12:03 PM, Tommy Nevtelen  wrote:
> 
> On 04/09/2020 17.40, Brian Brombacher wrote:
>>>> On Sep 4, 2020, at 11:28 AM, Brian Brombacher  wrote:
>>> 
>>> 
>>>> On Sep 4, 2020, at 10:51 AM, Tommy Nevtelen  wrote:
>>>> 
>>>> Hi there misc!
>>>> 
>>>> Is there an external pfctl linter? we have bunch pf firwalls for which we 
>>>> generate rules but also write some manual ones that get merged. Would be 
>>>> nice if we could lint the rules before committed to vcs.. (yes we test 
>>>> before they are applied on the machines as well but that is way too late 
>>>> in a sane pipeline imho)
>> Sane pipeline... :)
>> 
>> Developer machine: can that securely run pfctl -n?  Linter is great... but 
>> there’s a ton more involved.
> 
> Don't get too caught up on my wording :)
> 
> What is the ton that would be involved?
> 
> It would be to catch the most stupid typo/syntax issues not to check if the 
> full config is valid on a specific machine.
> 
> My more exact use case would be a pre-recieve hook or a check before merging 
> to the production branch.
> 

Well, let’s say a linter doesn’t exist and you can’t invest the time to make one.  
Do you have a lower environment, ideally an exact mirror, to run tests against 
from the pre-receive hook?

It’s an interesting issue you’re trying to solve ;)


> 
> /T
> 
> 



Re: pf.conf parser/lint

2020-09-04 Thread Brian Brombacher



> On Sep 4, 2020, at 11:28 AM, Brian Brombacher  wrote:
> 
> 
> 
>> On Sep 4, 2020, at 10:51 AM, Tommy Nevtelen  wrote:
>> 
>> Hi there misc!
>> 
>> Is there an external pfctl linter? we have bunch pf firwalls for which we 
>> generate rules but also write some manual ones that get merged. Would be 
>> nice if we could lint the rules before committed to vcs.. (yes we test 
>> before they are applied on the machines as well but that is way too late in 
>> a sane pipeline imho)

Sane pipeline... :)

Developer machine: can that securely run pfctl -n?  Linter is great... but 
there’s a ton more involved.

>> 
>> Problem is that pfctl expects that all interfaces and everything is correct 
>> (which makes sense for pfctl before loading). BUT it is hard to run on a 
>> build machine or my laptop to get a general idea on where I'm at (unless I'm 
>> missing some tricks somewhere)
>> 
> 
> Can the build machine securely request each server run pfctl -n -f 
> temp_config ?
> 
> That would verify it’ll load for sure on said server.
> 
>> So I've been looking into parse.y in pfctl. It's been a long time since I've 
>> messed around with very simple yacc stuff so kind of lost.
>> 
>> Has anyone done anything like this? Would be good to know before I sink more 
>> time into this (and probably fail) :)
>> 
>> /T
>> 
> 



Re: pf.conf parser/lint

2020-09-04 Thread Brian Brombacher



> On Sep 4, 2020, at 10:51 AM, Tommy Nevtelen  wrote:
> 
> Hi there misc!
> 
> Is there an external pfctl linter? we have bunch pf firwalls for which we 
> generate rules but also write some manual ones that get merged. Would be nice 
> if we could lint the rules before committed to vcs.. (yes we test before they 
> are applied on the machines as well but that is way too late in a sane 
> pipeline imho)
> 
> Problem is that pfctl expects that all interfaces and everything is correct 
> (which makes sense for pfctl before loading). BUT it is hard to run on a 
> build machine or my laptop to get a general idea on where I'm at (unless I'm 
> missing some tricks somewhere)
> 

Can the build machine securely request each server run pfctl -n -f temp_config ?

That would verify it’ll load for sure on said server.
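
Untested sketch of such a check; the host names, paths and use of doas are 
assumptions about your environment:

#!/bin/sh
for host in fw1.example.com fw2.example.com; do
        scp pf.conf.candidate "$host":/tmp/pf.conf.candidate || exit 1
        ssh "$host" doas pfctl -n -f /tmp/pf.conf.candidate || exit 1
done
echo "pf.conf parses on all firewalls"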

> So I've been looking into parse.y in pfctl. It's been a long time since I've 
> messed around with very simple yacc stuff so kind of lost.
> 
> Has anyone done anything like this? Would be good to know before I sink more 
> time into this (and probably fail) :)
> 
> /T
> 



Re: Routing and forwarding: directly connected computers

2020-09-03 Thread Brian Brombacher



> On Sep 3, 2020, at 12:38 PM, Brian Brombacher  wrote:
> 
> 
> 
>>>> On Sep 3, 2020, at 12:15 PM, Ernest Stewart  
>>>> wrote:
>>> Theo de Raadt  wrote:
>>> Oh my. Have you considered hiring a consultant?
>>> 
>>> Of course. As you have already noticed, I have no idea about how to do what 
>>> I'm trying to do. But a consultant is out of my budget.
>>> 
>>> Are you guys saying all I have to do is the following, and packets will 
>>> automatically be routed correctly?:
>>> 
>>> computer1)
>>> /etc/hostname.re0: 192.168.1.10 0xff00
>>> /etc/hostname.re1: 192.168.2.10 0xff00
>>> /etc/hostname.re2: 192.168.3.10 0xff00
>>> /etc/hostname.re3: 192.168.4.10 0xff00
>>> /etc/mygate:
>>> 192.168.1.1
>> 
>> Much better.
>> 
>> 
>> 
>> computer2)
>> /etc/hostname.re0: 192.168.2.11 0xfff0

One last thing: change Computer 2’s re0 netmask to 0xffffff00 (/24)

>> /etc/hostname.re1: 192.168.2.128 0xfff0
>> /etc/mygate:
>> 192.168.2.10
> 
> You’ll need a route rule on computer1 like this to make computer 5 talk to 
> the rest of the computers:
> 
> route add -net 192.168.2.128/28 192.168.2.11
> 
>> 
>> computer3)
>> /etc/hostname.re0: 192.168.3.11 0xff00
>> /etc/mygate:
>> 192.168.3.10
>> 
>> computer4)
>> /etc/hostname.re0: 192.168.4.11 0xff00
>> /etc/mygate:
>> 192.168.4.10
>> 
>> 
>> computer5)
>> /etc/hostname.re0: 192.168.2.129 0xfff0
>> /etc/mygate:
>> 192.168.2.128
>> 
>> 
>> Computer1's physical connections are like this:
>> re0->ISP router(192.168.1.1)
>> re1->Computer2 re0
>> re2->Computer3 re0
>> re3->Computer4 re0
>> 
>> Computer2's re1 is connected to Computer5's re0.



Re: Routing and forwarding: directly connected computers

2020-09-03 Thread Brian Brombacher



>> On Sep 3, 2020, at 12:15 PM, Ernest Stewart  
>> wrote:
> Theo de Raadt  wrote:
> Oh my. Have you considered hiring a consultant?
> 
> Of course. As you have already noticed, I have no idea about how to do what 
> I'm trying to do. But a consultant is out of my budget.
> 
> Are you guys saying all I have to do is the following, and packets will 
> automatically be routed correctly?:
> 
> computer1)
> /etc/hostname.re0: 192.168.1.10 0xff00
> /etc/hostname.re1: 192.168.2.10 0xff00
> /etc/hostname.re2: 192.168.3.10 0xff00
> /etc/hostname.re3: 192.168.4.10 0xff00
> /etc/mygate:
> 192.168.1.1

Much better.

> 
> 
> computer2)
> /etc/hostname.re0: 192.168.2.11 0xfff0
> /etc/hostname.re1: 192.168.2.128 0xfff0
> /etc/mygate:
> 192.168.2.10

You’ll need a route rule on computer1 like this to make computer 5 talk to the 
rest of the computers:

route add -net 192.168.2.128/28 192.168.2.11
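
If you want that route to survive a reboot on computer1, it can go at the end of 
the hostname.if(5) file for re1, something like (untested):

inet 192.168.2.10 255.255.255.0
!route add -net 192.168.2.128/28 192.168.2.11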

> 
> computer3)
> /etc/hostname.re0: 192.168.3.11 0xff00
> /etc/mygate:
> 192.168.3.10
> 
> computer4)
> /etc/hostname.re0: 192.168.4.11 0xff00
> /etc/mygate:
> 192.168.4.10
> 
> 
> computer5)
> /etc/hostname.re0: 192.168.2.129 0xfff0
> /etc/mygate:
> 192.168.2.128
> 
> 
> Computer1's physical connections are like this:
> re0->ISP router(192.168.1.1)
> re1->Computer2 re0
> re2->Computer3 re0
> re3->Computer4 re0
> 
> Computer2's re1 is connected to Computer5's re0.



Re: Routing and forwarding: directly connected computers

2020-09-03 Thread Brian Brombacher



> On Sep 3, 2020, at 11:44 AM, Ernest Stewart  
> wrote:
> 
> On Sep 3, 2020, at 15:07 AM, Brian Brombacher   wrote:
> 
> "Your setup ... requires pf \rules and additional routing tables to make this 
> work."
> 
> And which pf rules and how to establish those routing tables are exactly what 
> I'm asking.

Ernest,

You are not providing any justification for your ridiculous demands.

Again: why are you trying to wire the network with overlapping yet disjoint 
subnets?  You are not getting to the root cause of the problem.  You want to 
solve a problem that everyone in the thread keeps telling you is not a problem 
worth solving without CLEAR JUSTIFICATION.

Hire a consultant, as Theo said.  Your request for help, without proper 
justification, is not well suited to this mailing list.

-Brian

> 
> But ok, let's say I reassign addresses so Comp1 re1= 192.168.3.2, Comp2 re0= 
> 192.168.3.127, Comp2 re1 = 192.168.3.128 and Comp5 re0= 192.168.3.129, with 
> all the proper netmasks. That still does not explain why Comp2 is receiving 
> icmp.reply packets but not delivering them to "ping".



Re: Routing and forwarding: directly connected computers

2020-09-03 Thread Brian Brombacher



> On Sep 3, 2020, at 11:02 AM, Ernest Stewart  
> wrote:
> 
> I forgot to say, in every computer I have /etc/sysctl.conf with 
> "net.inet.ip.forwarding=1".
> 
> And I insist, what shocks me the most is that tcpdump shows in both computers 
> the right icmp packets but ping says 100% packets lost.

You’ve really got to pay attention to the netmasks here.  You’re trying to use 
multiple routed segments without doing it right.  Your setup is unnecessarily 
complex, and requires pf rules and additional routing tables to make this work.  
Switch to bridged networking if it helps simplify things.

Why the insistence on re-using portions of the 192.168.1 address space on a 
network whose router sits in 192.168.2?

You should expand and use more subnets under 192.168.x.




Re: Does OpenBSD support Carrier Grade Nat?

2020-08-08 Thread Brian Brombacher


>> On Aug 8, 2020, at 4:36 AM, Stuart Henderson  wrote:
> On 2020-08-07, Edward Carver  wrote:
>> Hi Misc,
>> 
>> Does OpenBSD support Carrier Grade Nat (cg-nat)?
>> Thanks for helping..
> 
> What do you mean by 'support'?
> 
> Running as a client behind one? Yes, that's transparent anyway (unless
> you use vmd with its default "local prefix" address range which was
> carefully chosen to conflict with the usual CGN address range).
> 
> As a router performing nat for others? Sort-of. Some will just say
> that CGN is "NAT done by the ISP" and OpenBSD can do that. Others will
> say that more is needed - typically CGN installations will dynamically
> block off a range of ports for a user and tie in with logging ("user
> x was assigned ports 1024-2047 from time y to z") so you can track
> activity to a user without recording every single nat mapping (which
> is a lot more intrusive information to store), and often allow all
> traffic to that range through to the user regardless of whether
> the user initiated a connection to that IP (helps for direct machine
> to machine access for online gaming etc), OpenBSD doesn't do either
> of those.
> 

Hi Stuart,

All coming from a place of curiosity:

I am definitely not knowledgeable on Carrier Grade NAT; however, regarding your 
final two reasons why OpenBSD may not support this out of the box: could a crafty 
setup accomplish CGN using PF and other base utilities, plus scripting/API 
integration with PF?

I can surmise PF rules that cover at least the two final reasons you’ve 
mentioned but I’m sure there’s more to it that I’m not understanding.

Thanks,
Brian



Re: can't install some packages on -current

2020-08-04 Thread Brian Brombacher



> On Aug 4, 2020, at 4:33 PM, Sonic  wrote:
> 
> On Tue, Aug 4, 2020 at 4:24 PM  wrote:
>> Update the installed packages first pkg_add -Uu
> 
> It's a fresh install based on -current just downloaded. First attempt
> at installing packages, so no packages to upgrade.
> 

Just wait for the new packages to settle and try again then.  Or stick to 
releases: get what you need set up, then upgrade to a snapshot.  There are 
various ways to approach this.

Different parts of snapshots are built at different times and arrive on mirrors 
at different times.  It’s a moving target as said before ;)




Re: softraid/bioctl cant find device /dev/bio

2020-08-03 Thread Brian Brombacher



> On Aug 3, 2020, at 12:22 PM, sven falempin  wrote:
> 
> On Mon, Aug 3, 2020 at 12:00 PM Brian Brombacher 
> wrote:
> 
>> 
>> 
>> On Aug 3, 2020, at 11:51 AM, sven falempin 
>> wrote:
>> 
>> 
>> 
>> 
>>> On Mon, Aug 3, 2020 at 11:38 AM Brian Brombacher 
>>> wrote:
>>> 
>>> 
>>> 
>>>> On Aug 3, 2020, at 9:54 AM, sven falempin 
>>> wrote:
>>>> 
>>>> Hello
>>>> 
>>>> I saw a similar issue in the mailing list around decembre 2019,
>>>> following an electrical problem softraid doesn't bring devices ups
>>>> 
>>>> 
>>>> # ls /dev/sd??
>>>> /dev/sd0a /dev/sd0g /dev/sd0m /dev/sd1c /dev/sd1i /dev/sd1o /dev/sd2e
>>>> /dev/sd2k
>>>> /dev/sd0b /dev/sd0h /dev/sd0n /dev/sd1d /dev/sd1j /dev/sd1p /dev/sd2f
>>>> /dev/sd2l
>>>> /dev/sd0c /dev/sd0i /dev/sd0o /dev/sd1e /dev/sd1k /dev/sd2a /dev/sd2g
>>>> /dev/sd2m
>>>> /dev/sd0d /dev/sd0j /dev/sd0p /dev/sd1f /dev/sd1l /dev/sd2b /dev/sd2h
>>>> /dev/sd2n
>>>> /dev/sd0e /dev/sd0k /dev/sd1a /dev/sd1g /dev/sd1m /dev/sd2c /dev/sd2i
>>>> /dev/sd2o
>>>> /dev/sd0f /dev/sd0l /dev/sd1b /dev/sd1h /dev/sd1n /dev/sd2d /dev/sd2j
>>>> /dev/sd2p
>>>> # dmesg | grep 6.7
>>>> OpenBSD 6.7 (RAMDISK_CD) #177: Thu May  7 11:19:02 MDT 2020
>>>> # dmesg | grep sd
>>>>   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
>>>> wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
>>>> sd0 at scsibus1 targ 0 lun 0: 
>>>> t10.ATA_QEMU_HARDDISK_Q
>>>> M5_
>>>> sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>>>> sd1 at scsibus1 targ 1 lun 0: 
>>>> t10.ATA_QEMU_HARDDISK_Q
>>>> M7_
>>>> sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>>>> wskbd0 at pckbd0: console keyboard, using wsdisplay1
>>>> softraid0: trying to bring up sd2 degraded
>>>> softraid0: sd2 was not shutdown properly
>>>> softraid0: sd2 is offline, will not be brought online
>>>> # bioctl -d sd2
>>>> bioctl: Can't locate sd2 device via /dev/bio
>>>> #
>>>> 
>>>> I suspect a missing devices in /dev ( but it seems i have the required
>>> one )
>>>> and MAKEDEV all of course did a `uid 0 on /: out of inodes`
>>>> 
>>>> I have backups but i ' d like to fix the issue !
>>> 
>>> Hi Sven,
>>> 
>>> The device sd2 wasn’t attached by softraid, your /dev/bio is fine.  This
>>> can happen if softraid fails to find all component disks or the metadata on
>>> one or more components does not match expectations (newer metadata seen on
>>> other disks).  Make sure all of the component disks are working.  If that
>>> is not the issue, you may need to re-run the command that you used to
>>> create the array and include -C force.  Be very careful doing this, I
>>> suggest running the command once without -C force to ensure it found all
>>> the components and fails to bring the array up due to the same error
>>> message you got (attempt to bring up degraded).
>>> 
>>> If you’re not careful, you can blow out the whole array.
>>> 
>>> -Brian
>>> 
>>> 
>>> The disk looks fine, the disklabel is ok, the array is just sd0 and sda1
>> both got the disklabel RAID part,
>> shall i do further checks ?
>> 
>> # bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
>> softraid0: trying to bring up sd2 degraded
>> softraid0: sd2 was not shutdown properly
>> softraid0: sd2 is offline, will not be brought online
>> softraid0: trying to bring up sd2 degraded
>> softraid0: sd2 was not shutdown properly
>> softraid0: sd2 is offline, will not be brought online
>> 
>> I wouldnt like to blow the whole array ! sd0a should be in perfect
>> condition but unsure about sd1a, i probably need to bioctl -R sd1
>> 
>> 
>> Traditionally at this point, I would run the command again with -C force
>> and my RAID 1 array is fine.  I might be doing dangerous things and not
>> know, so other voices please chime in.
>> 
>> [Moved to misc@]
>> 
>> 
>> 
>> 
> # bioctl -C force -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> sd2 at scsibus2 targ 1 lun 0: 
> sd2: 1907726MB, 512 bytes/sector, 3907023473 sectors
> softraid0: RAID 1 volume attached as sd2
> 
> both volumes are online , partitions are visible
> but fsck is not happy at all :-(
> 
> Can i do something before fsck -y ( i have backups )

Make sure your backups are good.

Run fsck -n and see how wicked the issues are.  It may just be cleaning itself 
up after the electrical outage.





Re: softraid/bioctl cant find device /dev/bio

2020-08-03 Thread Brian Brombacher



> On Aug 3, 2020, at 11:51 AM, sven falempin  wrote:
> 
> 
> 
> 
>> On Mon, Aug 3, 2020 at 11:38 AM Brian Brombacher  
>> wrote:
>> 
>> 
>> > On Aug 3, 2020, at 9:54 AM, sven falempin  wrote:
>> > 
>> > Hello
>> > 
>> > I saw a similar issue in the mailing list around decembre 2019,
>> > following an electrical problem softraid doesn't bring devices ups
>> > 
>> > 
>> > # ls /dev/sd??
>> > /dev/sd0a /dev/sd0g /dev/sd0m /dev/sd1c /dev/sd1i /dev/sd1o /dev/sd2e
>> > /dev/sd2k
>> > /dev/sd0b /dev/sd0h /dev/sd0n /dev/sd1d /dev/sd1j /dev/sd1p /dev/sd2f
>> > /dev/sd2l
>> > /dev/sd0c /dev/sd0i /dev/sd0o /dev/sd1e /dev/sd1k /dev/sd2a /dev/sd2g
>> > /dev/sd2m
>> > /dev/sd0d /dev/sd0j /dev/sd0p /dev/sd1f /dev/sd1l /dev/sd2b /dev/sd2h
>> > /dev/sd2n
>> > /dev/sd0e /dev/sd0k /dev/sd1a /dev/sd1g /dev/sd1m /dev/sd2c /dev/sd2i
>> > /dev/sd2o
>> > /dev/sd0f /dev/sd0l /dev/sd1b /dev/sd1h /dev/sd1n /dev/sd2d /dev/sd2j
>> > /dev/sd2p
>> > # dmesg | grep 6.7
>> > OpenBSD 6.7 (RAMDISK_CD) #177: Thu May  7 11:19:02 MDT 2020
>> > # dmesg | grep sd
>> >dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
>> > wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
>> > sd0 at scsibus1 targ 0 lun 0: 
>> > t10.ATA_QEMU_HARDDISK_Q
>> > M5_
>> > sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>> > sd1 at scsibus1 targ 1 lun 0: 
>> > t10.ATA_QEMU_HARDDISK_Q
>> > M7_
>> > sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>> > wskbd0 at pckbd0: console keyboard, using wsdisplay1
>> > softraid0: trying to bring up sd2 degraded
>> > softraid0: sd2 was not shutdown properly
>> > softraid0: sd2 is offline, will not be brought online
>> > # bioctl -d sd2
>> > bioctl: Can't locate sd2 device via /dev/bio
>> > #
>> > 
>> > I suspect a missing devices in /dev ( but it seems i have the required one 
>> > )
>> > and MAKEDEV all of course did a `uid 0 on /: out of inodes`
>> > 
>> > I have backups but i ' d like to fix the issue !
>> 
>> Hi Sven,
>> 
>> The device sd2 wasn’t attached by softraid, your /dev/bio is fine.  This can 
>> happen if softraid fails to find all component disks or the metadata on one 
>> or more components does not match expectations (newer metadata seen on other 
>> disks).  Make sure all of the component disks are working.  If that is not 
>> the issue, you may need to re-run the command that you used to create the 
>> array and include -C force.  Be very careful doing this, I suggest running 
>> the command once without -C force to ensure it found all the components and 
>> fails to bring the array up due to the same error message you got (attempt 
>> to bring up degraded).
>> 
>> If you’re not careful, you can blow out the whole array.
>> 
>> -Brian
>> 
>> 
> The disk looks fine, the disklabel is ok, the array is just sd0 and sda1 both 
> got the disklabel RAID part,
> shall i do further checks ?
>  
> # bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> softraid0: trying to bring up sd2 degraded
> softraid0: sd2 was not shutdown properly
> softraid0: sd2 is offline, will not be brought online
> softraid0: trying to bring up sd2 degraded
> softraid0: sd2 was not shutdown properly
> softraid0: sd2 is offline, will not be brought online
> 
> I wouldnt like to blow the whole array ! sd0a should be in perfect condition 
> but unsure about sd1a, i probably need to bioctl -R sd1 

Traditionally at this point, I would run the command again with -C force and my 
RAID 1 array is fine.  I might be doing dangerous things and not know, so other 
voices please chime in.

[Moved to misc@]





Re: OpenBSD 6.7-current VM on vmd collectd timesync problem

2020-07-30 Thread Brian Brombacher
Are you using kern.timecounter.hardware=tsc ?

I’m on 6.7 release and no issue with collectd.
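
To check, something like this (the dmesg grep is just a hint; the kernel may log 
there if it decided the TSCs are not usable):

sysctl kern.timecounter.hardware   # the source currently in use
sysctl kern.timecounter.choice     # the available sources and their quality
dmesg | grep -i tsc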

> On Jul 30, 2020, at 4:53 PM, Martin  wrote:
> 
> I can test it on 6.7-current only, and I haven't tested collectd on 6.6 - 
> 6.7 -stable. TSC looks synchronized, ntpd corrects small amount of time skew 
> ~1s or less.
> 
> VM time looks stable, but not enougth for time-series measurements.
> 
> Do you know any command to check TSC is "synchronized"?
> 
> Martin
> 
> ‐‐‐ Original Message ‐‐‐
>> On Thursday, July 30, 2020 8:40 PM, Chris Cappuccio  wrote:
>> 
>> Martin [martin...@protonmail.com] wrote:
>> 
>>> VM using NTP protocol to fine tune clock from the OpenBSD 6.7-current host, 
>>> but collectd complain about clock skew in the past.
>>> Any ideas?
>> 
>> Does this happen with 6.6 or 6.7 as well? 6.7-current uses the TSC directly
>> to gather timestamps, but it should only do this if the TSC are 
>> "synchronized".
> 
> 



Re: sysupgrade failure due to boot.conf

2020-07-16 Thread Brian Brombacher


> On Jul 13, 2020, at 6:58 AM, Alfred Morgan  wrote:
> 
> 
> Brian wrote:
> > (echo boot /bsd.upgrade; echo boot) > /etc/boot.conf
> 
> Brian, that doesn't work. I tried that already before. It seems to stop at 
> the error not finding bsd.upgrade and won't continue.
> 
> -alfred

Thanks for checking this; it was untested advice.  I’m glad it didn’t work: your 
follow-up emails regarding the root cause were enlightening for me.



Re: Issue with relayd and redirections

2020-07-13 Thread Brian Brombacher


> On Jul 13, 2020, at 8:30 PM, Gabri Tofano  wrote:
> 
> I have tried to implement the workaround as per man page
> but it still doesn't work, here the pf.conf config:
> 
> eth0 = "xnf0"
> web1 = "172.16.101.31"
> 
> anchor "relayd/*"
> 
> set skip on lo
> 
> block return log
> pass log
> 
> pass out quick on $eth0 proto tcp to $web1 port 80 \
> received-on $eth0 nat-to $eth0

Try putting this before the anchor.  The quick entry in the anchor that relayd 
creates takes precedence.
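
In other words, something like this ordering (a sketch using your own macros and 
rules, untested):

eth0 = "xnf0"
web1 = "172.16.101.31"

set skip on lo

block return log
pass log

pass out quick on $eth0 proto tcp to $web1 port 80 \
        received-on $eth0 nat-to $eth0

anchor "relayd/*"

# ... the rest of your rules unchanged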

> 
> block return in on ! lo0 proto tcp to port 6000:6010
> block return out log proto {tcp udp} user _pbuild
> 
> 
> I'm trying to gather some useful log on relayd and see if
> there's any error but even with "relayctl log verbose"
> nothing is showing beside the startup entries
> 
> Thank you!
> 
>> There's a "workaround" also mentioned in pf.conf(5) which also works
>> with relayd inserted rdr-rules, e.g.
>> pass out quick on vlan99 proto tcp to 192.168.89.13 received-on vlan99
>> nat-to 192.168.89.1
>> vlan99 has 'inet 192.168.89.1/24' and 192.168.89.13 is the relayd rdr
>> "target".
>> HTH,
>> --
>> pb
>>> On 2020-07-13 01:08, Gabri Tofano wrote:
>> After some further troubleshooting, tonight I took some time to sit down and
>> read again the man pages as everything on my config files was looking fine 
>> and
>> no errors were showing up in any log. With Brian's help we were leading to 
>> the
>> direction that something was wrong with the pf translation itself and so I
>> tested a static rdr-to configuration with pf only in the same environment, 
>> and
>> neither this test worked as expected. So I went back to read the pf.conf man
>> page and here comes the rdr-to relevant section:
>> "Redirections cannot reflect packets back through the interface they
>> arrive on, they can only be redirected to hosts connected to different
>> interfaces or to the firewall itself."
>> Focusing on relayd, my oversight was to not going back and read again the
>> pf.conf man page in order to make sure that my box's network configuration 
>> was
>> ok, since apparently I got it to work with relays without problems.
>> The next challenge now is to find if there is another way to make this setup
>> working with just 1 network interface and implement relayd redirects for SSL
>> passthrough, or give up. There seems to be few options here that I can think 
>> of:
>> - Keep my current configuration with HAproxy
>> - Add another network interface to the box and configure an additional
>> network to
>> it (it might be tricky when deploying a droplet with a direct public IP 
>> address)
>> - Migrate to relayd relays and give up with SSL passthrough (with the 
>> benefit of
>> SSL offloading if want to implement it)
>> Thank you to the community and the devs for the great work on this OS!
>> Especially
>> on the man pages :)
>> On 2020-07-11 12:58, Gabri Tofano wrote:
>>>> It isn’t.  rdr-to, and by extension redirects, are not natting the source 
>>>> address.
>>>> Clients connecting through relayd and to the backend will have source 
>>>> addresses
>>>> not that of the relayd machine but of the original client.
>>> Thank you for correcting me on this as it was a bad statement told before
>>> getting coffee in the morning :)
>>>> I’m going to play around on my boxes and try and come up with some options 
>>>> for you.
>>>> I’ll get back to you later.
>>> Thank you for dedicating time in looking to this issue!
>>> On 2020-07-11 12:08, Brian Brombacher wrote:
>>>>>> On Jul 11, 2020, at 11:20 AM, Gabri Tofano  wrote:
>>>>> On 2020-07-11 06:33, Brian Brombacher wrote:
>>>>>>>>>>> On Jul 10, 2020, at 11:42 PM, Gabri Tofano  
>>>>>>>>>>> wrote:
>>>>>>>>> 
>>>>>>>>>> Does http work with redirects?  It wasn’t clear if it did or not in
>>>>>>>>>> your first post.
>>>>>>>>> It doesn't work with http and that is the redirect that I was testing.
>>>>>>>>>> Indications from your pf anchor rules and the down
>>>>>>>>>> status above, and the check http attribute on the https forward to
>>>>>>>>>> directives tell me relayd isn’t liking your check http configuration
>>>>

Re: Issue with relayd and redirections

2020-07-11 Thread Brian Brombacher


>> On Jul 11, 2020, at 11:20 AM, Gabri Tofano  wrote:
> On 2020-07-11 06:33, Brian Brombacher wrote:
>>>>>>> On Jul 10, 2020, at 11:42 PM, Gabri Tofano  wrote:
>>>>> 
>>>>>> Does http work with redirects?  It wasn’t clear if it did or not in
>>>>>> your first post.
>>>>> It doesn't work with http and that is the redirect that I was testing.
>>>>>> Indications from your pf anchor rules and the down
>>>>>> status above, and the check http attribute on the https forward to
>>>>>> directives tell me relayd isn’t liking your check http configuration
>>>>>> for port 443.
>>>>>> Start by switching to check icmp or check tcp or something else, see
>>>>>> if it works, unless you can fix the check http based on logs or
>>>>>> otherwise.
>>>>> I changed it to tcp and now the servers are showing as "up":
>>>>> LAB1-LB1# relayctl sh sum
>>>>> Id  TypeNameAvlblty Status
>>>>> 1   redirecthttpactive
>>>>> 1   table   web_servers:80  active (1 
>>>>> hosts)
>>>>> 1   host172.16.101.31   100.00% up
>>>>> 2   table   nc_servers:80   active (1 
>>>>> hosts)
>>>>> 2   host172.16.101.32   100.00% up
>>>>> 2   redirecthttps   active
>>>>> 3   table   web_servers:443 active (1 
>>>>> hosts)
>>>>> 3   host172.16.101.31   100.00% up
>>>>> 4   table   nc_servers:443  active (1 
>>>>> hosts)
>>>>> 4   host172.16.101.32   100.00% up
>>>>> However I was hoping to fix the http redirect first and then move to 
>>>>> https, but it
>>>>> looks like more of a "general issue" with redirects in my current 
>>>>> configuration.
>>>>> Thanks
>> If http redirection isn’t working, I’d be curious from where you’re
>> trying to connect or what router you have configured on the backend
>> hosts.  I see you’re relayd box and back ends are on the same network.
>> If you’re trying to connect from another address in 172.16.101.x to
>> your relayd setup, it won’t work reliably.  It might also not work
>> reliably or at all, if you are not routing responses through the
>> relayd host.
>> If they are replying direct, any PF scrub normalization, tcp sequence
>> handling, etc., all get lost, among other issues.
>> I hope this is the cause of your issues, otherwise you’re going to
>> need to include more information for your setup, or at a minimum some
>> relayd logs.
>> -Brian
> 
> I have a layer3 switch doing routing between 2 vlans, relayd and the 2
> backend web servers are on the same vlan and the client is on another
> vlan 172.16.100.x. The relayd VM is configured with only 1 network
> interface. When the client try to reach the web servers directly
> everything work fine. When the client is passing through relayd I see
> the following:
> 
> - Only SYN packets coming into relayd box which they become retransmissions
> - The relayd anchor rules do not have the log parameter set so I cannot
> see passing traffic from the client to the backend servers, but at least
> no traffic is being blocked. I haven't found a way to manipulate an anchor
> via pfctl in order to add the log parameter
> - The web server does not see any traffic reaching out on port 80 beside
> the http checks from relayd IP address
> - I have set "log connection" in relayd.conf and then relayctl log verbose
> but /var/log/daemon unfortunately is not showing much:
> 
> relayd[84883]: startup
> relayd[84883]: unused protocol: http
> relayd[84883]: unused protocol: https
> relayd[33541]: host 172.16.101.32, check tcp (1ms,tcp connect ok), state 
> unknown -> up, availability 100.00%
> relayd[33541]: host 172.16.101.31, check tcp (2ms,tcp connect ok), state 
> unknown -> up, availability 100.00%
> relayd[33541]: host 172.16.101.31, check http code (3ms,http code ok), state 
> unknown -> up, availability 100.00%
> relayd[33541]: host 172.16.101.32, check http code (3ms,http code ok), state 
> unknown -> up, availability 100.00%
> relayd[1

Re: Issue with relayd and redirections

2020-07-11 Thread Brian Brombacher


> On Jul 10, 2020, at 11:42 PM, Gabri Tofano  wrote:
> 
> 
>> Does http work with redirects?  It wasn’t clear if it did or not in
>> your first post.
> 
> It doesn't work with http and that is the redirect that I was testing.
> 
>> Indications from your pf anchor rules and the down
>> status above, and the check http attribute on the https forward to
>> directives tell me relayd isn’t liking your check http configuration
>> for port 443.
>> Start by switching to check icmp or check tcp or something else, see
>> if it works, unless you can fix the check http based on logs or
>> otherwise.
> 
> I changed it to tcp and now the servers are showing as "up":
> 
> LAB1-LB1# relayctl sh sum
> Id  TypeNameAvlblty Status
> 1   redirecthttpactive
> 1   table   web_servers:80  active (1 
> hosts)
> 1   host172.16.101.31   100.00% up
> 2   table   nc_servers:80   active (1 
> hosts)
> 2   host172.16.101.32   100.00% up
> 2   redirecthttps   active
> 3   table   web_servers:443 active (1 
> hosts)
> 3   host172.16.101.31   100.00% up
> 4   table   nc_servers:443  active (1 
> hosts)
> 4   host172.16.101.32   100.00% up
> 
> However I was hoping to fix the http redirect first and then move to https, 
> but it
> looks like more of a "general issue" with redirects in my current 
> configuration.
> 
> Thanks

If http redirection isn’t working, I’d be curious where you’re trying to connect 
from and what router you have configured on the backend hosts.  I see your relayd 
box and backends are on the same network.  If you’re trying to connect from 
another address in 172.16.101.x to your relayd setup, it won’t work reliably.  It 
might also not work reliably, or at all, if you are not routing responses through 
the relayd host.

If they are replying direct, any PF scrub normalization, tcp sequence handling, 
etc., all get lost, among other issues.

I hope this is the cause of your issues, otherwise you’re going to need to 
include more information for your setup, or at a minimum some relayd logs.

-Brian



Re: sysupgrade failure due to boot.conf

2020-07-10 Thread Brian Brombacher


> On Jul 10, 2020, at 7:31 PM, Alfred Morgan  wrote:
> 
> 
>> 
>> You claimed sysupgrade does this.
>> sysupgrade does nothing like that.  It placed a /bsd.upgrade file, and
> that is the end of the story.
>> You told boot (via commands in boot.conf) to do something, so it did,
> before discovering the file.
> 
> Theo,
> When I mentioned sysupgrade I was referring to the full sysupgrade
> procedure all the way through to completion. Sorry for not being specific
> enough. Thank you, you brought focus to boot which is really where my
> suggestion will be focused on.
> 
> So, how can I explicitly tell boot to act normally to boot /bsd.upgrade and
> if that doesn't exist then boot /bsd? I would expect # echo boot >
> /etc/boot.conf to do just that.
> 

(echo boot /bsd.upgrade; echo boot) > /etc/boot.conf




Re: Issue with relayd and redirections

2020-07-10 Thread Brian Brombacher


> On Jul 10, 2020, at 9:15 PM, Gabri Tofano  wrote:
> 
> Here:
> 
> LAB1-LB1$ relayctl sh sum
> Id  TypeName   Avlblty Status
> 1   redirecthttp   active
> 1   table   web_servers:80 active (1 hosts)
> 1   host172.16.101.31  4.87%   up
> 2   table   nc_servers:80  active (1 hosts)
> 2   host172.16.101.32  4.86%   up
> 2   redirecthttps  down
> 3   table   web_servers:443empty
> 3   host172.16.101.31  0.00%   down
> 4   table   nc_servers:443 empty
> 4   host172.16.101.32  0.00%   down
> 

Does http work with redirects?  It wasn’t clear from your first post whether it 
did.  The indications from your pf anchor rules, the down status above, and the 
check http attribute on the https forward-to directives tell me relayd isn’t 
happy with your check http configuration for port 443.

Start by switching to check icmp or check tcp or something else, see if it 
works, unless you can fix the check http based on logs or otherwise.
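
For example, in the https redirect, something like this (a sketch using the table 
names from your output):

forward to <web_servers> port 443 check tcp
# or, even more basic:
forward to <web_servers> port 443 check icmp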

> The low availability is due too the web servers were turned off.
> 
> Thanks!
> 
>> On 2020-07-10 17:41, Sebastian Benoit wrote:
>> Gabri Tofano(ga...@tofanos.com) on 2020.07.07 15:38:17 -0400:
>>> When using redirections, no listening ports are open (I guess due to
>>> relayd using pf nat rules)
>> correct
>>> and I'm unable to reach both backend servers.
>> show the output of "relayctl sh sum".
> 



Re: Unbound Problems (Reverse Direction)

2020-07-10 Thread Brian Brombacher
Also use these directives in unbound (see the pattern and choose what you need, 
e.g. 24.172.IN-ADDR.ARPA to cover your 172.24.* reverse zones):

local-zone: "168.192.IN-ADDR.ARPA" nodefault
local-zone: "16.172.IN-ADDR.ARPA" nodefault
local-zone: "17.172.IN-ADDR.ARPA" nodefault
local-zone: "18.172.IN-ADDR.ARPA" nodefault
local-zone: "19.172.IN-ADDR.ARPA" nodefault
local-zone: "10.IN-ADDR.ARPA" nodefault
local-zone: "d.f.IP6.ARPA" nodefault
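
For example, to keep unbound's built-in RFC 1918 defaults from swallowing the 
30.24.172.in-addr.arpa stub-zone you already have, the matching line would be (in 
the server: section; untested here):

local-zone: "24.172.IN-ADDR.ARPA" nodefault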


> On Jul 10, 2020, at 2:22 AM, Frank Habicht  wrote:
> 
> Hi,
> 
>>> On 09/07/2020 20:44, ken.hendrick...@l3harris.com wrote:
>> stub-zone:
>>   name:  30.24.172.in-addr.arpa.
>  good
>>   stub-addr: 127.0.0.1@53053
>> stub-zone:
>>   name:  2.168.192.in-arpa.arpa.
>  typo
>>   stub-addr: 127.0.0.1@53053
>> stub-zone:
>>   name:  224.in-addr.arpa.
>>   stub-addr: 127.0.0.1@53053
>> stub-zone:
>>   name:  255.in-addr.arpa.
>>   stub-addr: 127.0.0.1@53053
> 
> Frank



Re: ls -R bug?

2020-07-04 Thread Brian Brombacher


> On Jul 4, 2020, at 3:10 PM, Brian Brombacher  wrote:
> 
> Hmm...
> 
> /bin/ls, a utility that has existed since 1960’s.
> 
> This is not a bug.
> 
> https://en.m.wikipedia.org/wiki/Ls
> 

Please disregard this poor advice.  Obviously this isn’t the 1960s and it ain’t 
the same code :)

There was a bug, as you correctly identified.



Re: ls -R bug?

2020-07-04 Thread Brian Brombacher
I’ll be explicit.

Did the OP run ls(1) as superuser?  See -A flag in man ls

We have no idea.

> On Jul 4, 2020, at 3:44 PM, Brian Brombacher  wrote:
> 
> 
> 
>>> On Jul 4, 2020, at 3:38 PM, Ottavio Caruso 
>>>  wrote:
>>> 
>>> On Sat, 4 Jul 2020 at 19:59, Richard Ipsum  wrote:
>>> 
>>> Hi,
>>> 
>>> Output of ls -R between OpenBSD and GNU coreutils seems to differ,
>>> OpenBSD ls -R will apparently list "hidden" directories like .git,
>>> whereas GNU coreutils will not, is this expected behaviour or a bug?
>>> 
>> 
>> Funny, because this seems to validate what you are reporting:
>> 
>> oc@OpenBSD:~$ ls -R
>> oc-backup test
>> 
>> ./.local/share:
>> xorg
>> 
>> ./.local/share/xorg:
>> Xorg.0.log  Xorg.0.log.old
>> 
>> ./oc-backup:
>> docs mbox
>> 
>> ./oc-backup/docs:
>> bgpd.confman-todo patch.patch  root-mail
>> bug  oc-mail  robots.txt   sudo.log
>> 
>> ./test:
>> dmesg   fstab   index.html  uyiuyi
>> filefstab.dos   ls.ps
>> file.bakfstab.tropenbsd-tips-wip
>> file.orig   fstab.unix  test.wav
>> 
>> 
>> 
>> However:
>> 
>> oc@OpenBSD:~$ mkdir .hidden
>> oc@OpenBSD:~$ touch .hidden/test-file
>> oc@OpenBSD:~$ ls -R
>> 
>> 
>> 
>> It looks like "ls -R" is showing some hidden directories but not all.
>> 
>> -- 
>> Ottavio Caruso
>> 
> 
> man ls
> man ksh
> 
> 



Re: ls -R bug?

2020-07-04 Thread Brian Brombacher



> On Jul 4, 2020, at 3:38 PM, Ottavio Caruso  
> wrote:
> 
> On Sat, 4 Jul 2020 at 19:59, Richard Ipsum  wrote:
>> 
>> Hi,
>> 
>> Output of ls -R between OpenBSD and GNU coreutils seems to differ,
>> OpenBSD ls -R will apparently list "hidden" directories like .git,
>> whereas GNU coreutils will not, is this expected behaviour or a bug?
>> 
> 
> Funny, because this seems to validate what you are reporting:
> 
> oc@OpenBSD:~$ ls -R
> oc-backup test
> 
> ./.local/share:
> xorg
> 
> ./.local/share/xorg:
> Xorg.0.log  Xorg.0.log.old
> 
> ./oc-backup:
> docs mbox
> 
> ./oc-backup/docs:
> bgpd.confman-todo patch.patch  root-mail
> bug  oc-mail  robots.txt   sudo.log
> 
> ./test:
> dmesg   fstab   index.html  uyiuyi
> filefstab.dos   ls.ps
> file.bakfstab.tropenbsd-tips-wip
> file.orig   fstab.unix  test.wav
> 
> 
> 
> However:
> 
> oc@OpenBSD:~$ mkdir .hidden
> oc@OpenBSD:~$ touch .hidden/test-file
> oc@OpenBSD:~$ ls -R
> 
> 
> 
> It looks like "ls -R" is showing some hidden directories but not all.
> 
> -- 
> Ottavio Caruso
> 

man ls
man ksh




Re: ls -R bug?

2020-07-04 Thread Brian Brombacher
Hmm...

/bin/ls, a utility that has existed since the 1960s.

This is not a bug.

https://en.m.wikipedia.org/wiki/Ls

> On Jul 4, 2020, at 3:02 PM, Richard Ipsum  wrote:
> 
> Hi,
> 
> Output of ls -R between OpenBSD and GNU coreutils seems to differ,
> OpenBSD ls -R will apparently list "hidden" directories like .git,
> whereas GNU coreutils will not, is this expected behaviour or a bug?
> 
> Thanks,
> Richard
> 


Re: Relayd with TLS and non-TLS backends - bug

2020-07-04 Thread Brian Brombacher


> On Jul 3, 2020, at 7:17 PM, Henry Bonath  wrote:
> 
> Daniel,
> 
> Thanks for taking the time to test this out.
> I just reloaded a test machine from scratch with -current and
> installed the HAProxy 2.0.15-4f39279 package.
> I loaded a very basic config file, and am also seeing the same exact
> issue on this one as well.
> Very strange that you are not -
> Would you mind sharing any additional details of your config file?
> Is there anything special about the certificate you have on the backend 
> server?
> 
> I would love to understand what is going on here and what the
> difference is with my experience.
> 
>> On Thu, Jul 2, 2020 at 4:38 PM Daniel Jakots  wrote:
>> 
>> On Thu, 2 Jul 2020 14:00:48 -0400, Henry Bonath 
>> wrote:
>> 
>>> Note the missing Client Hello on the 6.7 machine as it jumps to
>>> Application Data straight away.
>>> Configuration files for HAProxy are identical on both systems.
>>> 
>>> I'm currently spinning up a machine on -CURRENT just to see if there
>>> is any difference,
>>> as there is a newer version of HAProxy in packages under Snapshots.
>>> 
>>> I was initially going to try to reach out to the package maintainer
>>> for HAProxy but if this is happening in Relayd, then this "feels
>>> like" a de-facto bug. I wonder if NGINX would exhibit the same
>>> behavior.
>>> 
>>> Has anyone else experienced such behavior with Load-Balancing TLS
>>> Backends since upgrading to 6.7?
>> 
>> I don't use TLS for my backend (the only backend I use nowadays is on
>> localhost) so I can't speak for 6.7 (I only use -current, and when
>> -current was 6.7, I didn't test that).
>> 
>> I just tested my -current haproxy using another -current host of mine
>> running nginx as a backend with TLS and it worked fine.
>> 
>> backend https
>>   option forwardfor
>>   server web1 ln.chown.me:443 check ssl verify none
>> 
>> and also with "verify required ca-file /etc/ssl/cert.pem"
>> 
>> 
>> Maybe some libressl fix happened on -current was not deemed critical
>> enough to be backported to 6.7?
>> 
>> Cheers,
>> Daniel
> 

This thread is conflating two issues:

1) Henry’s original relayd.conf is wrong.  Notice the TLS connection attempt to 
port 80 in his relayd logs.  This will never work.  See my earlier email about 
the two relays required.

2) There was discussion of a compatibility issue with LibreSSL in the 6.7 
release.  Check the archives.





Re: Relayd with TLS and non-TLS backends - bug

2020-07-04 Thread Brian Brombacher


> On Jun 11, 2020, at 4:28 PM, Toyam Cox  wrote:
> 
> Hello Misc,
> 
> Full config at end of email.
> 
> I've discussed the below in #openbsd on freenode, and was told to come
> here. At present, I have a setup where I need multiple unrelated
> servers under a single IP address. I used relayd to do https
> interception, read the Host header, and make decisions.
> 
> The very relevant part of my config is this:
> 
> forward to  port 80
> forward with tls to  port 443
> 
> The order here does not matter (unlike most relayd configs, I know,
> but I've tested in my configuration and it works).
> 
> When I have "with tls" on that second line, I see error lines like:
> relay web, session 3 (1 active), 0, [redacted] -> 10.0.0.102:80, TLS
> handshake error: handshake failed: error:14FFF3E7:SSL
> routines:(UNKNOWN)SSL_internal:unknown failure occurred, GET:
> Undefined error: 0
> 
> and, unhelpfully, relayd responds with no response. There is no
> return. Or, as curl puts it: curl: (52) Empty reply from server
> 
> When I remove "with tls" then I successfully reach the http backend,
> but since the https backend requires ssl, that connection no longer
> works. So it seems that 'with tls" affects all "forward" clauses, not
> just the one to which it's attached.
> 
> I believe this to be a bug.
> 
> cat >/etc/relayd.conf < table  { "10.0.0.101" }
> table  { "10.0.0.102" }
> # obviously obfuscated some values
> 
> interval 5
> timeout 1000
> 
> log connection
> 
> http protocol web {
> return error
> 
> match header set "X-Client-IP" value "$REMOTE_ADDR:$REMOTE_PORT"
> match header set "X-Forwarded-For" value "$REMOTE_ADDR"
> match header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
> 
> http websockets
> pass request quick header "Host" value "myhost.example.com" path
> "/Client/*" forward to 
> pass request quick header "Host" value "otherhost.example.com" forward
> to 
> 
> block
> }
> 
> relay web {
> listen on 10.0.0.100 port 443 tls
> protocol web
> 
> forward to  port 80 check http "/webservice.asmx" code 405
> forward with tls to  port 443 check https
> "/Client/SupportedBrowsers.html" host "myhost.example.com" code 200
> }
> EOF
> 

Hi Toyam,

Split http and https into two separate relay stanzas.

The “with tls” will be needed on your https relay and not the http backhaul.  I 
believe this gets what you want.

I do not think this is a bug, but perhaps a design choice by the developers.

Cheers,
Brian



Re: strlcpy version speed tests?

2020-07-04 Thread Brian Brombacher


>> On Jul 1, 2020, at 1:14 PM, gwes  wrote:
>> 
>> On 7/1/20 8:05 AM, Luke Small wrote:
>> I spoke to my favorite university computer science professor who said
>> ++n is faster than n++ because the function needs to store the initial
>> value, increment, then return the stored value in the former case,
>> while the later merely increments, and returns the value. Apparently,
>> he is still correct on modern hardware.
> For decades the ++ and *p could be out of order, in different
> execution units, writes speculatively queued, assigned to aliased registers,
> etc, etc, etc.
> 
> Geoff Steckel

Hey Luke,

I love the passion, but try to focus your attention on the fact that there are 
multiple supported architectures and that compiler optimizations are key here.  
Go with Marc’s approach using arch/ asm.  Implementations can be made over time 
for the various arches, if such an approach is desirable to the project.  You can 
pull a well-optimized version based on your code, for your arch, and then slim it 
down a bunch.

Cheers,
Brian

[Not a project developer.  Just an observer.]




Re: relayd multiple listen on same redirect

2020-07-04 Thread Brian Brombacher


> On Jul 3, 2020, at 3:34 AM, Kapetanakis Giannis  
> wrote:
> 
> Hi,
> 
> My setup in relayd is like this:
> 
> redirect radius {
>  listen on $radius_addr udp port radius interface $ext_if
>  pftag RELAYD_radius
>  sticky-address
>  forward to  mode least-states check icmp demote carp
> }
> 
> redirect radacct {
>  listen on $radius_addr udp port radacct interface $ext_if
>  pftag RELAYD_radius
>  sticky-address
>  forward to  mode least-states check icmp demote carp
> }
> 
> I want to combine it in one redirect but the redirect forwards it to the 
> first port defined in listen for both radius and radacct ports.
> 
> redirect radius {
>  listen on $radius_addr udp port radius interface $ext_if
>  listen on $radius_addr udp port radacct interface $ext_if
>  pftag RELAYD_radius
>  sticky-address
>  forward to  mode least-states check icmp demote carp
> }
> 
> Is there another way to do this or do I have to stick with two redirects?
> 
> thanks,
> 
> Giannis

Hi Giannis,

I have not tested your config or my advice for it; however, my assumption is that 
sticky-address is needed per UDP port conversation for radius.  By contrast, if 
stickiness were needed for the combination of both radius/radacct landing on the 
same backend host, per source address or address/port, you could not achieve that 
reliably with least-states.  I don’t know the RADIUS protocols well enough to 
know the requirements.

Here’s my question after all that dribbling:

Have you tried using either of the following config options?

forward to destination
forward to nat

IIRC, in the past I had multiple TCP relay ports going to their specified ports 
on the backend.  I only needed to split things by address family (v4/6) for my 
own purposes.  I cannot remember if the directives above took port into 
consideration.  It might not be a far stretch to make that feasible with code 
changes but I haven’t seen the relayd code paths in question so that’s a 
complete guess (but I’m on my way to check ;).  Also since I concentrated on 
TCP relays, I don’t know how effective these directives would be for redirects. 
 My end config has separate relays per TCP service except passive FTP relaying.

Also, make sure your pf.conf has the right anchor.  Only mentioning it because 
your original email skips this detail.  I doubt this would be missing if you 
have a working setup already, so ignore if so.
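
For reference, the usual pf.conf line is:

anchor "relayd/*"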

Cheers,
Brian




Re: Relayd with TLS and non-TLS backends - bug

2020-07-03 Thread Brian Brombacher


> On Jul 3, 2020, at 9:46 PM, Daniel Jakots  wrote:
> 
> On Fri, 3 Jul 2020 20:25:12 -0400, Brian Brombacher
>  wrote:
> 
>> My subjective net gain is simplicity, security, performance, and
>> flexibility.
> 
> I don't think adding ipsec (or a mesh vpn) into the mix achieve that but
> ymmv.
> 

Subjective is right :)

He has two hosts.  IPsec from one to the other.  Pre-negotiated encrypted 
channel.

MTU 1400 or so...

Four round-trip TCP packets to get the request on the backend... if the HTTP 
request is smaller than say 1300 bytes, to be really safe.

How is that slower?

-Brian



Re: Relayd with TLS and non-TLS backends - bug

2020-07-03 Thread Brian Brombacher


> On Jun 11, 2020, at 4:28 PM, Toyam Cox  wrote:
> 
> Hello Misc,
> 
> Full config at end of email.
> 
> I've discussed the below in #openbsd on freenode, and was told to come
> here. At present, I have a setup where I need multiple unrelated
> servers under a single IP address. I used relayd to do https
> interception, read the Host header, and make decisions.
> 
> The very relevant part of my config is this:
> 
> forward to  port 80
> forward with tls to  port 443
> 
> The order here does not matter (unlike most relayd configs, I know,
> but I've tested in my configuration and it works).
> 
> When I have "with tls" on that second line, I see error lines like:
> relay web, session 3 (1 active), 0, [redacted] -> 10.0.0.102:80, TLS
> handshake error: handshake failed: error:14FFF3E7:SSL
> routines:(UNKNOWN)SSL_internal:unknown failure occurred, GET:
> Undefined error: 0
> 
> and, unhelpfully, relayd responds with no response. There is no
> return. Or, as curl puts it: curl: (52) Empty reply from server
> 
> When I remove "with tls" then I successfully reach the http backend,
> but since the https backend requires ssl, that connection no longer
> works. So it seems that 'with tls" affects all "forward" clauses, not
> just the one to which it's attached.
> 
> I believe this to be a bug.
> 
> cat >/etc/relayd.conf << EOF
> table  { "10.0.0.101" }
> table  { "10.0.0.102" }
> # obviously obfuscated some values
> 
> interval 5
> timeout 1000
> 
> log connection
> 
> http protocol web {
> return error
> 
> match header set "X-Client-IP" value "$REMOTE_ADDR:$REMOTE_PORT"
> match header set "X-Forwarded-For" value "$REMOTE_ADDR"
> match header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
> 
> http websockets
> pass request quick header "Host" value "myhost.example.com" path
> "/Client/*" forward to 
> pass request quick header "Host" value "otherhost.example.com" forward
> to 
> 
> block
> }
> 
> relay web {
> listen on 10.0.0.100 port 443 tls
> protocol web
> 
> forward to  port 80 check http "/webservice.asmx" code 405
> forward with tls to  port 443 check https
> "/Client/SupportedBrowsers.html" host "myhost.example.com" code 200
> }
> EOF
> 

Not to change topics too drastically :)

Consider running the backend connection over a different encrypted transport, 
such as mesh iked(8) or upcoming wg(4).  It’s super easy to setup, and 
compatible with the other server OS.  Go further into the “SDN realm” with 
everything encapsulated in vxlan(4) for even more flexibility, including 
long-haul internet endpoints across varying firewall and NAT designs.  Pimp out 
the configs of your networking groups’ routers to de-encapsulate and decrypt 
the traffic for even more performance and compatibility.  Anything is possible 
as a front-end relay server with OpenBSD.

Why?  Well for one, you save on many rounds of TLS negotiation.  Upcoming 
performance enhancements to the networking stack will only help scale this 
method of relaying to more and more acceptable levels compared to non-encrypted 
networking.  My subjective net gain is simplicity, security, performance, and 
flexibility.
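
As a rough, untested sketch of the iked(8) flavor of that idea — reusing the 
addresses from your config, with a placeholder key; see iked.conf(5) for the 
real grammar:

  # /etc/iked.conf on the relay host (mirror from/to and local/peer on the backend)
  ikev2 "backend" active esp \
          from 10.0.0.100 to 10.0.0.102 \
          local 10.0.0.100 peer 10.0.0.102 \
          psk "replace-with-a-real-key"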

-Brian



Re: Restore pf tables metadata after a reboot

2020-06-04 Thread Brian Brombacher
No reason to expire ssh brute-force entries.  The attempts will never stop.

Manual flush if someone accidentally locked themselves out.
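
The manual flush is something like this (the table name here is a placeholder 
for whatever your overload rule uses):

  pfctl -t bruteforce -T delete 203.0.113.5    # un-ban a single address
  pfctl -t bruteforce -T flush                 # or empty the whole table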

Just my two cents :)

> On Jun 4, 2020, at 12:48 AM, Anatoli  wrote:
> 
> 
>> 
>> Even then it seems that some of them turn up again pretty much
>> instantly after expiry.
> 
> You could update the expire time on each new connection/port scan
> attempt. This way you could put say 4 days expire time and block these
> IPs on all ports on all your systems and new connection attempts would
> update the expire for all the systems.
> 
> 4 days is because 5 days is a typical timeout for a temporary error for
> SMTP. It may happen that someone used for 24hs a cloud instance and
> then got banned by the cloud provider, the IP used for
> spam/scans/attacks could be reused for another client for a legit
> activity. So if that new client for the old IP sends to your client some
> important mail, it's not lost and doesn't generate an undeliverable mail
> report, it just takes some days to reach the destination (with retries
> by the origin server).
> 
> 4 weeks looks excessive for cloud shared IPs.
> 
> 
>> On 30/5/20 07:25, Peter Nicolai Mathias Hansteen wrote:
>> 
>> 
 On 30 May 2020, at 11:54, Walter Alejandro Iglesias wrote:
>>> 
>>> The problem is most system administrators out there do very little.  If
>>> you were getting spam or attacks from some IP, even if you report the
>>> issue to the respective whois abuse@ address, chances are attacks from
>>> that IP won't stop next week, nor even next month.
>>> 
>>> So, in general terms, I would refrain as much as possible from hurry to
>>> expiring addresses.  Just my opinion.
>> 
>> Yes, there are a lot of systems out there that seem to be not really 
>> maintained at all. After years of advocating 24 hour expiry some time back I 
>> went to four weeks on the ssh brutes blacklist. Even then it seems that some 
>> of them turn up again pretty much instantly after expiry.
>> 
>> All the best,
>> 
>> —
>> Peter N. M. Hansteen, member of the first RFC 1149 implementation team
>> http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
>> "Remember to set the evil bit on all malicious network traffic"
>> delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.
>> 
>> 
>> 
>> 
> 



Re: About pf max-src-conn-rate

2020-05-27 Thread Brian Brombacher
Keep in mind that with pfctl operations such as reloading the ruleset or a 
table from a file, any IPs caught in the smtp table by max-src-conn-rate may be 
flushed, depending on your command line.
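
For example, if I remember the semantics right (untested just now; the table 
name and file path are the ones from this thread):

  # reload the ruleset: the table is reinitialized from its definition/file,
  # so addresses added dynamically by the overload rule are gone
  pfctl -f /etc/pf.conf

  # same effect: table contents are replaced by the file
  pfctl -t smtp -T replace -f /path/to/smtp.txt

  # adds the file's addresses without touching the dynamically added ones
  pfctl -t smtp -T add -f /path/to/smtp.txt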


> On May 27, 2020, at 4:29 PM, Walter Alejandro Iglesias  
> wrote:
> 
> Hello Brian,
> 
>> On Wed, May 27, 2020 at 02:35:46PM -0400, Brian Brombacher wrote:
>> What do you do with  table in other rules?  If you’re doing nothing, 
>> you need to do something like block additional connections, or adjust the 
>> pass rule to include from ! 
> 
> You're right.  I forgot to mention I have these lines before:
> 
>  table  persist file "/path/to/smtp.txt"
>  block in log quick inet proto tcp from  to any port { smtp smtps }
> 
>> 
>> Run: pfctl -t smtp -T show
>> 
>> Does it show the offending IP?  If so, the rule worked as you defined it.
>> 
>> 
> 
> I run a cron script that parses my log files and also add the offending
> IPs to that table.  To be sure the max-src-conn-rate adds those IPs to
> the table I'll have to create an alternative table just to test.
> 
> 



Re: About pf max-src-conn-rate

2020-05-27 Thread Brian Brombacher
What do you do with  table in other rules?  If you’re doing nothing, you 
need to do something like block additional connections, or adjust the pass rule 
to include from ! 

Run: pfctl -t smtp -T show

Does it show the offending IP?  If so, the rule worked as you defined it.
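
Concretely, something along these lines (a sketch reusing the rule shapes from 
this thread; <smtp> is the table name your pfctl command shows):

  block in log quick inet proto tcp from <smtp> to any port { smtp smtps }

  pass in log inet proto tcp from ! <smtp> to any port { smtp smtps } \
        synproxy state (max-src-conn-rate 5/30, overload <smtp> flush global)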



> On May 27, 2020, at 8:30 AM, Walter Alejandro Iglesias  
> wrote:
> 
> Another question about pf.
> 
> Perhaps I don't fully understand how connection rate is calculated.
> 
> The following line in /etc/pf.conf:
> 
>  pass in log inet proto tcp to any port { smtp smtps } synproxy state \
>(max-src-conn-rate 5/30, overload  flush global)
> 
> Shouldn't avoid this happen?
> 
> In /var/log/maillog
> 
> May 27 10:55:05 server smtpd[30272]: 1a931fba4746f485 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:55:06 server smtpd[30272]: 1a931fba4746f485 smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> May 27 10:55:06 server smtpd[30272]: 1a931fba4746f485 smtp disconnected 
> reason=disconnect
> May 27 10:55:06 server smtpd[30272]: 1a931fbbc5c841e4 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:55:06 server smtpd[30272]: 1a931fbbc5c841e4 smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> May 27 10:55:07 server smtpd[30272]: 1a931fbbc5c841e4 smtp disconnected 
> reason=disconnect
> May 27 10:55:07 server smtpd[30272]: 1a931fbc9f586ee6 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:55:07 server smtpd[30272]: 1a931fbc9f586ee6 smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> May 27 10:55:07 server smtpd[30272]: 1a931fbc9f586ee6 smtp disconnected 
> reason=disconnect
> May 27 10:55:07 server smtpd[30272]: 1a931fbdf6b23f59 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> 
> [...] Complete here with 311 entries with the same time interval. 
> 
> May 27 10:59:11 server smtpd[30272]: 1a9320f8f8726fab smtp disconnected 
> reason=disconnect
> May 27 10:59:11 server smtpd[30272]: 1a9320f9e3e281ab smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:59:11 server smtpd[30272]: 1a9320f9e3e281ab smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> May 27 10:59:12 server smtpd[30272]: 1a9320f9e3e281ab smtp disconnected 
> reason=disconnect
> May 27 10:59:12 server smtpd[30272]: 1a9320fa851b3e31 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:59:12 server smtpd[30272]: 1a9320fa851b3e31 smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> May 27 10:59:12 server smtpd[30272]: 1a9320fa851b3e31 smtp disconnected 
> reason=disconnect
> May 27 10:59:13 server smtpd[30272]: 1a9320fbe3f04434 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:59:13 server smtpd[30272]: 1a9320fbe3f04434 smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> May 27 10:59:13 server smtpd[30272]: 1a9320fbe3f04434 smtp disconnected 
> reason=disconnect
> May 27 10:59:13 server smtpd[30272]: 1a9320fc4f172f88 smtp connected 
> address=192.119.68.113 host=hwsrv-733438.hostwindsdns.com
> May 27 10:59:14 server smtpd[30272]: 1a9320fc4f172f88 smtp failed-command 
> command="RCPT TO:" result="550 Invalid recipient: 
> "
> --
> 
> A total of *323* connections from the same IP at less than a 1/4 second
> interval during more than four minutes.
> 



Re: Setting permanent neighbor entry

2020-05-26 Thread Brian Brombacher
Do it in hostname.if.  You’ll win the race.
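
Untested, but a line along these lines in /etc/hostname.xnf0 should do it; 
ndp(8) installs a permanent entry when the temp keyword is omitted:

  !/usr/sbin/ndp -s 2001:db8::1 fe:ff:ff:ff:ff:ff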

> On May 26, 2020, at 2:14 PM, Demi M. Obenour  wrote:
> 
> On 2020-05-26 09:34, Kanto Andria wrote:
>> Hello,
>> man ndp is probably another solution
>> 
>>On Tuesday, May 26, 2020, 9:17:25 a.m. EDT, Tommy Nevtelen 
>>  wrote:  
>> 
>>> On 26/05/2020 11.38, Demi M. Obenour wrote:
>>> What is the OpenBSD equivalent to this Linux command?
>>> 
>>> ip neighbor add 2001:db8::1 dev xnf0 lladdr fe:ff:ff:ff:ff:ff router nud 
>>> permanent
>>> 
>>> It doesn’t need to be a single command.  If the existing userspace
>>> tooling does not support this, is it possible to do it via the
>>> kernel APIs?
>> man arp
> 
> I already tried this, but it does not work if there is already
> an entry.  Removing it and re-adding it is racy: a new entry might
> appear before I can override it.
> 
> Sincerely,
> 
> Demi
> 



Re: IPv4 traffic over IPv6 tunnel approach

2020-05-08 Thread Brian Brombacher
From your description, you want to pass IPv4 inside a tunnel that has an outer 
protocol of IPv6.  Your resulting hostname.gif0 looks like the exact opposite 
of your description (IPv6 inside the tunnel with IPv4 outer).

Clarify what you need please.  Provide your existing hostname.if files for the 
other interfaces if you need to.


> On May 8, 2020, at 3:09 PM, Martin  wrote:
> 
> Last thing I have to understand about gif(4) and IPv6 tunneling.
> 
> Should I set gif(4) 'inet6 alias' = the same IPv6 of the local end of IPv6 
> tunnel interface or just set 'inet6 alias' for gif(4) in tunnel's IPv6 subnet?
> 
> Martin
> 
> ‐‐‐ Original Message ‐‐‐
>>> On Friday, May 8, 2020 4:41 PM, Tom Smyth  
>>> wrote:
>> Hi Martin,
>> If I understand your question correctly
>> you need 2 endpoints to the tunnel...
>> for gif(4) or any gre((4) based tunnel
>> you need the interface setup on both the client and the server (gateway)
>> if you have a gateway serving multiple clients... then you need one
>> interface per client that you intend to connect
>> Thanks
>> Tom Smyth
>>> On Fri, 8 May 2020 at 17:38, Martin martin...@protonmail.com wrote:
>>> Thanks for confirmation.
>>> Hope I understand gif(4) functionality right from its configuration. Can I 
>>> set /etc/hostname.gif0 from client's side only like below:
>>> /etc/hostname.gif0
>>> tunnel 10.20.30.40 195.203.212.221
>>> inet6 alias 2001:05a8::0001::::8542 128
>>> dest 2001:05a8::0001::::8541
>>> where
>>> tunnel 10.20.30.40 is client's address, 195.203.212.221 gateway machine 
>>> egress IPv4
>>> inet6 alias is the same IPv6 address of client's IPv6 local interface or an 
>>> IPv6 address in the same subnet.
>>> dest IPv6 is a destination IPv6 interface address of gateway machine.
>>> Do I need to setup gif0 on gateway machine to have encapsulation working?
>>> Martin
>>> ‐‐‐ Original Message ‐‐‐
 On Friday, May 8, 2020 1:43 PM, Kristjan Komlosi 
 kristjan.koml...@gmail.com wrote:
 gif(4) should work fine, as it's designed to do what you described. The
 best approach depends on the level of security you want to achieve. IPIP
 tunnels aren't encrypted...
 regards, kristjan
 On 5/8/20 3:32 PM, Martin wrote:
> I have IPv6 unidirectional tunnel between two machines. One of them is 
> gateway, another one is a client.
> The goal is to route IPv4 packets over IPv6 tunnel from client to gateway 
> and NAT IPv4 packet to egress on gateway machine.
> May I use gif(4) for it or what is the best approach to traverse IPv4 
> packets over IPv6 tun?
> Martin
>> --
>> Kindest regards,
>> Tom Smyth.



Re: multihomed routing issue

2020-04-27 Thread Brian Brombacher
Try something like this in pf.conf:

pass in on hvn1 proto tcp from  to (hvn1) port 22 reply-to 
10.0.0.1@hvn1

You have to do this because hvn0 and hvn1 share the same router address 
(10.0.0.1).  Another option is to use routing tables.

Let me know if you have any questions.  I run a lot of OpenBSD in Azure.

-Brian

> On Apr 26, 2020, at 12:03 PM, 4642 <4...@protonmail.com> wrote:
> 
> Hi, I have created a OpenBSD 6.6 VM in the Azures cloud that I plan to use 
> as a Firewall, I had planned on using carp but I can't get it working in 
> Azure so I think I can use an Internal load balancer to achieve my aim of 
> having two redundany OBSD Firewalls in Azure. The problem I have is that the 
> Azure Internal Load Balancer requires a health probe to work. So I create a 
> load balancer health probe and set it to the SSH service on my FW Host and 
> set it to every 5 seconds. I can see the traffic on my FW but the health 
> probe doesn't work and I think it's because the traffic from the Azure 
> discover ip "168.63.129.16" that is doing the probe is coming from within the 
> azure nextwork, hitting my internal nic and then onto the ssh service ? and 
> then finally leaving but on the external interface.
> 
> tcpdump -n -e -ttt -i pflog0  -v
> tcpdump: WARNING: snaplen raised from 116 to 160
> tcpdump: listening on pflog0, link-type PFLOG
> Apr 26 15:59:30.082436 rule 1/(match) [uid 0, pid 44293] block out on hvn0: 
> [orig src 10.x.x.36:22, dst 168.63.129.16:54762] 10.x.x.4.65324 > 
> 168.63.129.16.54762: S [bad tcp cksum 9d0b! -> 9e14] 252441079:252441079(0) 
> ack 3958895254 win 16384  (DF) (ttl 64, 
> id 2960, len 52, bad ip cksum 0! -> 52f0)
> 
> Rule 1 = block log all
> 168.63.129.16 = Azure Discovery Address
> 10.x.x.4  = My External IP on hvn0
> 10.x.x.36 = My Internal IP on hvn1
> 
> I tried changing the state rules to allow the traffic out on the external 
> interface and I thought I had it working earlier today by changing 
> state-policy from if-bound to floating but I can't reproduce that again for 
> some reason...  anyway it didn't seem to work.
> I think I really just need to force the traffic back out the Internal 
> interface but I just don't know how to do that ?
> 
> If anyone could help me it would be really appreciated.
> Thanks
> 
> Keith



Re: OpenBSD VPS hoster with unlimited/limited nonfiltered traffic

2020-04-19 Thread Brian Brombacher
Try setting sysctl kern.timecounter.hardware=tsc on the OpenBSD vmm guest and 
run ntpd.  I have not tried without ntpd but I know without using tsc, time 
skews too much.
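
To keep it across reboots, the same setting goes in /etc/sysctl.conf:

  kern.timecounter.hardware=tsc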


> On Apr 19, 2020, at 10:25 AM, Martin  wrote:
> 
> Thanks all of you guys for suggestions.
> 
> Just one question to OpenBSD VMM based VPS hosters. I use vmd with OBSD 6.6 
> and Debian guests locally just for testing and stuck with clock 
> synchronization issue with both guests.
> 
> Will I encounter the same issue with clock synchronization on VMM based VPSes?
> 
> Martin
> 
> 
> ‐‐‐ Original Message ‐‐‐
>> On Saturday, April 18, 2020 12:20 AM, j3s  wrote:
>> 
>>> On 4/10/20 4:51 AM, Martin wrote:
>>> 
>>> I'm looking for relatively cheap VPS with OpenBSD installation support and 
>>> with ~1Tb of unfiltered traffic. In any words all in/out VPS ports must be 
>>> opened by default.
>>> Any recommendations?
>> 
>> Ohai. Co-founder of Cyberia Computer Club here - we're a US-based
>> nonprofit - part of our deal is providing good & open services.
>> 
>> We host our own hardware in a US datacenter, and offer OpenBSD VMs for
>> decent prices. You can see the whole shtick at https://capsul.org
>> 
>> No filtering or snooping, you just get a box on a public IPv4 and that's it.
>> 
>> Just wanted to toss my own hat in the ring!
>> 
>> j3s
> 
> 



Re: VLAN or aliases or? best way to isolate untrustable hosts in a small network

2020-02-05 Thread Brian Brombacher
The OP’s hostname.vlan* files never specify a vnetid.  I get an error when I 
try to configure and bring up the second vlan interface the same way, without a 
vnetid.  My error aside, the ifconfig(8) man page says that without a vnetid, 
vlan tag 0 will be used, so you need to specify two different vlan tags.
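
Untested sketch of the shape I mean, with em0 standing in for whatever parent 
interface actually carries the tags and a placeholder address:

  # /etc/hostname.vlan101
  vnetid 101
  parent em0
  inet 192.168.156.1 255.255.255.0

plus a second file with a different vnetid (102, say) for the other network.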

All of that aside: VLANs don’t give you any more security.  If the client host 
is on the same physical network as your two VLANs, the only thing stopping them 
from jumping between VLANs would be physical devices (switches, etc.) 
configured to prevent that.  From what I gathered, you don’t have this level of 
control.  Therefore, you gain nothing by segmenting the networks with VLANs.

-Brian

> On Feb 5, 2020, at 11:58 AM, Christian Weisgerber  wrote:
> 
> On 2020-02-05, Janne Johansson  wrote:
> 
>>> # /etc/hostname.vlan101
>>> description 'WLAN attached untrusted hosts'
>>> inet 192.168.156.0/24 255.255.255.0 vlandev run0
>> VLANs and wifi sounds like a non-starter.
> 
> Yep, if you're building your access point with OpenBSD.
> 
> More generally, though, any AP in the business segment has support
> for multiple SSIDs that can be assigned to different VLANs on the
> Ethernet side.
> 
> -- 
> Christian "naddy" Weisgerber  na...@mips.inka.de



Re: OpenBSD's extremely poor network/disk performance?

2020-01-07 Thread Brian Brombacher
There might be something wrong with your setup.  I routinely get 500+ MB/s disk 
and full 1 GBit Ethernet.



> On Jan 7, 2020, at 9:38 AM, Hamd  wrote:
> 
> It's 2020 and it's -still- sad to see OpenBSD -still- has the
> lowest/poorest (general/overall) performance ever:
> https://www.phoronix.com/scan.php?page=article=8-linux-bsd=1
> 
> My reference is not -only- that url, of course. My reference is my OpenBSD,
> giving ~8 MB/s file transfer/network/disk speed.
> 
> A Linux distro, on the same computer (dual boot), providing 89 MB/s speed.
> 
> (Longest) sad story of the year: When it comes to OpenBSD; security -
> great! Performance - horrible! I truly wish it was much better..
> 
> No, I'm not a fan of Calomel.



Re: Best Practices for growing disk partitions on a server

2019-11-17 Thread Brian Brombacher
Boot into single user mode.  At the boot loader prompt, type boot -s.  This 
will drop you to a root shell.
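
From the single-user shell, the FAQ steps boil down to roughly this (a sketch 
from memory; the device and partition names are examples, so check disklabel(8) 
and growfs(8) before touching anything):

  disklabel -E sd0      # grow the partition boundaries in the editor, then write
  growfs /dev/rsd0g     # example: /usr on sd0g, which is unmounted in single user
  fsck /dev/rsd0g       # check it before mounting again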



> On Nov 17, 2019, at 3:39 PM, Lev Lazinskiy  wrote:
> 
> Hi folks, 
> 
> I am new to openBSD, so forgive me if I am missing something obvious. 
> 
> I recently installed openBSD on a server using the auto-partition layout
> during installation and am quickly starting to run out of disk space. 
> 
> I have read the section in the FAQ [1] regarding how to grow a disk
> partition, but I am confused on the best way to actually do this. 
> 
> Specifically, I am trying to grow /usr and /home but they are "busy"
> when I try to follow these steps on a running server. 
> 
> Is the assumption that you are supposed to reboot the server with the 
> ISO attached and pop into a shell to complete these steps?
> 
> [1] https://www.openbsd.org/faq/faq14.html#GrowPartition
> 
> -- 
> Lev Lazinskiy
> 



Re: IPv6 problems

2019-08-13 Thread Brian Brombacher
You can also add a second line to /etc/mygate if you’re using that.
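
i.e. /etc/mygate would look something like this (the IPv4 gateway here is a 
placeholder; the link-local IPv6 gateway needs the interface scope):

  192.0.2.1
  fe80::1%vio0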

> On Aug 13, 2019, at 1:11 PM, Thomas Bohl  wrote:
> 
> Hello,
> 
>> My hostname.vio0 looks like this:
>> dhcp
>> inet6 alias > provider> 64
>> 
> 
> You most likely need to add a route. Add something like this to your hostname 
> file:
> !route add -inet6 default fe80::1%vio0
> 
> 
> Just in case you have the same problem. For whatever reason, after a reboot, 
> I have to do this in order to get IPv6 traffic flowing:
> ping6 -c 10 fe80::1%vio0
> 



Re: Best 1Gbe NIC

2019-08-02 Thread Brian Brombacher
I find cheap PCI-Express and PCI-X em(4) cards suffice for my needs.  990-992 
Mbps with tcpbench.
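
For anyone wanting to reproduce that number, it’s plain tcpbench(1) from base 
between two boxes:

  tcpbench -s            # on the receiver
  tcpbench 192.0.2.10    # on the sender, pointed at the receiver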


> On Aug 2, 2019, at 11:26 AM, Claudio Jeker  wrote:
> 
>> On Fri, Aug 02, 2019 at 12:28:58PM +0100, Andy Lemin wrote:
>> Ahhh, thank you!
>> 
>> I didn’t realise this had changed and now the drivers are written with
>> full knowledge of the interface.
> 
> That is an overstatement but we know for sure a lot more about these cards
> then many other less open ones.
> 
>> So that would make Intel Server NICs (i350 for example) some of the best
>> 1Gbe cards nowadays then?
> 
> They are well supported by OpenBSD as are many other server nics like bge
> and bnx. I would not call them best, when it comes to network cards it
> seems to be a race to the bottom. All chips have stuff in them that is
> just not great. em(4) for example needs a major workaround because the
> buffersize is specified by a bitfield. 
> 
> My view is more pessimistic, all network cards are shit there are just
> some that are less shitty. Also I prefer to use em(4) over most other
> gigabit cards.
> 
> -- 
> :wq Claudio
> 
>> 
>> Sent from a teeny tiny keyboard, so please excuse typos
>> 
 On 2 Aug 2019, at 09:52, Jonathan Gray  wrote:
 
 On Fri, Aug 02, 2019 at 09:19:09AM +0100, Andy Lemin wrote:
 Hi list,
 
 I know this is a rather classic question, but I have searched a lot on 
 this again recently, and I just cannot find any conclusive up to date 
 information?
 
 I am looking to buy the best 1Gbe NIC possible for OpenBSD and the only 
 official comments I can find relate to 3COM for ISA, or community 
 consensus towards Chelsio for 10Gbe.
 
 I know Intel works ok and I’ve used the i350’s before, but my 
 understanding is that Intel still doesn’t provide the documentation for 
 their NICs and so the emX driver is reverse engineered.
>>> 
>>> This is incorrect.  Intel provides datasheets for Ethernet parts.
>>> em(4) is derived from Intel authored code for FreeBSD supplied under a
>>> permissive license.
>>> 
 
 And if I remember correctly some offload features were also disabled in 
 the emX driver a while back as some functions were found to be insecure 
 on die and so it was deemed safer to bring the logic back on CPU.
 
 So I’m looking for the best 1Gbe NIC that supports the most 
 offloading/best driver support/performance etc.
 
 Thanks, Andy.
 
 PS; could we update the official supported hardware lists? ;)
 All the best.
 
 
 Sent from a teeny tiny keyboard, so please excuse typos
 
>> 
> 



Re: sysupgrade (Was: Re: Kernel crash in OpenBSD 6.5)

2019-08-01 Thread Brian Brombacher
Use the -n option to sysupgrade so it does not reboot after the files are 
downloaded and verified.  Then delete the unwanted tarballs from 
/home/_sysupgrade/, as mentioned, and reboot.

See sysupgrade(8): https://man.openbsd.org/sysupgrade
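
So roughly this, untested by me (the set file names vary by release):

  sysupgrade -n
  rm /home/_sysupgrade/x*.tgz /home/_sysupgrade/game*.tgz
  reboot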

> On Aug 1, 2019, at 7:31 AM, Antal Ispanovity  wrote:
> 
> 2019-08-01 8:08 GMT+02:00, Harald Dunkel :
>> Hi folks,
>> 
>>> On 7/30/19 3:08 PM, Hrvoje Popovski wrote:
>>> 
>>> try to update both boxes to latest snapshot at least because in snapshot
>>> you have excellent tool called sysupgrade ... you will love it :)
>>> 
>>> with this tool you can upgrade os to latest snapshot without any problem
>>> over ssh :)
>>> 
>> This is cool.
>> 
>> Due to space and speed restrictions (compact flash card) and to reduce
>> downtime I would like to avoid the games and the Xwindow "balast" on my
>> gateways. Does sysupgrade recognize the tar balls that are already
>> installed, or does it become a "sysinstall" in this case?
> If I remember correctly it doesn't. Someone solved this by removing
> the unnecessary tarballs from the _sysupgrade folder and performed the
> upgrade after it.
>> 
>> Sorry for asking, but the man page https://man.openbsd.org/sysupgrade
>> doesn't tell.
>> 
>> 
>> Thanx in advance
>> Harri
>> 
>> 
> 


Re: Write to DVD-RAM

2019-07-27 Thread Brian Brombacher
See cd(4): https://man.openbsd.org/cd.4

It’s not a real block device.  You’ll need something like the dvd+rw-tools 
package already mentioned in order to write data to it.  The man page explains 
that cd devices are represented as block devices only for consistency with 
tools like disklabel and mount.  Look at the list of ioctls supported in the 
man page; it talks of tracks of data (like audio tracks) and such.
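
For example, with the dvd+rw-tools package installed, burning an image would 
look roughly like this (untested on DVD-RAM specifically):

  pkg_add dvd+rw-tools
  growisofs -Z /dev/rcd0c=image.iso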

-Brian

> On Jul 26, 2019, at 8:23 PM, gwes  wrote:
> 
> 
> 
>> On 7/25/19 7:14 PM, Zhi-Qiang Lei wrote:
>>> On Jul 25, 2019, at 10:24 PM, gwes  wrote:
>>> 
 On 7/24/19 10:19 PM, Zhi-Qiang Lei wrote:
 Hi, I’m trying to encrypt a DVD-RAM before putting some files onto it on 
 my OpenBSD 6.5 desktop. But neither dd nor disklabel seems able to work on 
 the drive. Did I miss something?
 
 $ dmesg | grep cd
 cd0 at scsibus3 targ 1 lun 0:  ATAPI 5/cdrom 
 removable serial.13fd3940302020202020
 cd0 at scsibus3 targ 1 lun 0:  ATAPI 5/cdrom 
 removable serial.13fd3940302020202020
 
 $ doas dd if=/dev/urandom of=/dev/rcd0c bs=1k
 dd: /dev/rcd0c: Invalid argument
 1+0 records in
 0+0 records out
 0 bytes transferred in 0.000 secs (0 bytes/sec)
 
 $ doas disklabel -E cd0
 cd0> a
 partition: [a]
 offset: [0]
 size: [2236704]
 FS type: [4.2BSD]
 cd0> w
 cd0> p
 OpenBSD area: 0-2236704; size: 2236704; free: 0
 #size   offset  fstype [fsize bsize   cpg]
   a:  22367040  4.2BSD   2048 16384 1
   c:  22367040  unused
 cd0> q
 No label changes.
 
 The same drive can be formatted and used on Mac OS X.
 
 Thanks and best regards,
 Siegfried
 
>>> Did you try 2K blocks? The low level of CDROM only works that way.
>>> 
>> 
>> Blocks larger than or equal to 2k get a "dd: /dev/rcd0c: short write on 
>> character device”. Regarding to cd(4) I thought the device is readonly, so 
>> dd(1) and disklabel(8) cannot write on it, but fdisk(8)  works fine.
>> 
>> $ doas dd if=/dev/urandom of=/dev/rcd0c bs=2k
>> dd: /dev/rcd0c: short write on character device
>> dd: /dev/rcd0c: Invalid argument
>> 1+0 records in
>> 0+1 records out
>> 512 bytes transferred in 0.008 secs (57960 bytes/sec)
>> 
>> $ doas dd if=/dev/urandom of=/dev/rcd0c bs=512
>> dd: /dev/rcd0c: Invalid argument
>> 1+0 records in
>> 0+0 records out
>> 0 bytes transferred in 0.000 secs (0 bytes/sec)
>> 
> /dev/cd0 is likely a symbolic link to something else in /dev.
> It's not clear what's going on unless we know exactly what's being used.
> "cd0" is not a usual OpenBSD device access even though one sees
> that in dmesg.
> 
> OpenBSD disk-like devices are usually referenced in the very
> old style which distinguishes "raw" [unbuffered direct to device]
> from "cooked" [system buffered]. This differs from at least Linux practice.
> Dunno about other BSDs or Macs.
> Buffered devices are essentially only used to mount as filesystems.
> 
> A raw device is /dev/r
> A buffered device is /dev/
> Note that there is always a partition letter.
> The kernel will always emulate a 'c' partition = whole device if necessary.
> 
> So the most specific way to refer to your cd device is /dev/rcd0c.
> 
> As a convenience and to reduce operator errors, many system maintenance
> programs will deduce /dev/rc from a bare device
> like sd0. This can be confusing to people new to OpenBSD.
> 


Re: OT: hardware war with manufacturers (espionage claims)

2019-07-03 Thread Brian Brombacher
Mihai,

Do you want to protest these companies by not buying their equipment?  That is 
one feasible outcome of this conversation.

The other outcome would be that you want advice on which models will work on 
OpenBSD.

-Brian

> On Jul 3, 2019, at 12:11 PM, Zack Lofgren  wrote:
> 
> Mihai,
> 
> It depends on your threat model. You can’t absolutely trust any hardware 
> because of low level firmware. However, that doesn’t matter if your threat 
> model is low enough then that doesn’t matter. Are you an enemy of the state? 
> If so, you probably shouldn’t trust any technology. If you’re just an average 
> person, then using free software is probably enough with good practices like 
> encryption is enough.
> 
> Right now, I use an old Thinkpad with OpenBSD and full disk encryption 
> because it fits what I want. I have proprietary firmware for wireless because 
> I care more about it working than distrusting it for now. If I had a higher 
> threat level, I’d use an even older Thinkpad with coreboot/libreboot (not 
> sure if OpenBSD is compatible) and a different wireless NIC.
> 
> Zack Lofgren
> 
>> On Jul 3, 2019, at 09:48, Mihai Popescu  wrote:
>> 
>> ...
>> 
>> I asked for an answer more like "avoid using nVidia chipsets", not for 
>> theories.
>> So, again, do you consider brands when choosing hardware, like Dell
>> vs. Lenovo, etc. ?
>> 
>> Thank you.
>> 
> 



Re: OT: hardware war with manufacturers (espionage claims)

2019-07-02 Thread Brian Brombacher
I’m fine with hardware implants snooping on me.  But if I was a CISO for a huge 
company, I might go the extra mile to care about said implants.

I’ll continue living carefree.


> On Jul 2, 2019, at 1:42 PM, Nathan Hartman  wrote:
> 
> On Tue, Jul 2, 2019 at 1:28 PM Brian Brombacher 
> wrote:
> 
>> Oh and if the implant is smart, it’ll detect you’re trying to find it and
>> go dormant.
>> 
>> Even more good luck!
> 
> 
> Well then the solution is obvious.
> 
> Design your own hardware.
> 
> Or learn to live off the land.



Re: OT: hardware war with manufacturers (espionage claims)

2019-07-02 Thread Brian Brombacher
Oh and if the implant is smart, it’ll detect you’re trying to find it and go 
dormant.

Even more good luck!

> On Jul 2, 2019, at 1:24 PM, Brian Brombacher  wrote:
> 
> Hardware implants go beyond just sending packets out your network card.  They 
> have transceivers that let agents control or snoop the device from a distance 
> using RF.
> 
> You need to scan the hardware with RF equipment to be sure.
> 
> Good luck!
> 
>>> On Jul 2, 2019, at 12:27 PM, Misc User  
>>> wrote:
>>> 
>>> On 7/2/2019 12:43 AM, John Long wrote:
>>> On Tue, 2 Jul 2019 10:07:59 +0300
>>> Mihai Popescu  wrote:
>>>> Hello,
>>>> 
>>>> I keep finding articles about some government bans against some
>>>> hardware manufacturers related to some backdoor for espionage. I know
>>>> this is an old talk. Most China manufacturers are under the search:
>>>> Huawei, ZTE, Lenovo, etc.
>>> It seems painfully obvious what's driving all the bans and vilification
>>> of Chinese hardware and software is that the USA wants exclusive rights
>>> to spy on you and won't tolerate any competition.
>>> Does anybody think maybe the reason Google and Facebook don't pay taxes
>>> anywhere might have something to do with what they do with all that
>>> info they collect? Is the "new" talk about USA banning any meaningful
>>> encryption proof of how seriously they take security and privacy?
>>>> What do you think and do when using OpenBSD on this kind of hardware?
>>> Lemote boxes are kinda neat but they're not the fastest in the world.
>>> It beats the hell out of the alternatives if you can live with the
>>> limitations.
>>>> Do you prefer Dell, HP and Fujitsu?
>>> Your only choice is probably to pick the least objectionable entity to
>>> spy on you. If you buy Intel you know you're getting broken, insecure
>>> crap no matter whose box it comes in. Sure it runs fast, but... in that
>>> case everybody is going to spy on you.
>>> /jl
>> 
>> Assume everything is compromised.  Don't trust something because someone
>> else said it was good.  Really, the only way to test if a machine is
>> spying on you, do some kind of packet capture to watch its traffic until
>> you are satisfied.  But also put firewalls in front of your devices to
>> ensure that if someone is trying to spy on you, their command and
>> control packets don't make it to the compromised hardware.
>> 
>> Besides, subverting a hardware supply chain is a difficult and
>> expensive process.  And if there is one thing I've learned in my career
>> as a security consultant, its that no matter how malevolent or
>> benevolent a government is, they are still, above all, cheap and lazy.
>> And in a world where everything is built with the first priority is
>> making the ship date, there are going to be so many security flaws to be
>> exploited.  So much cheaper and easier to let Intel rush a design to
>> market or Red Hat push an OS release without doing thorough testing and
>> exploit the inevitable remote execution flaws.
>> 
>> Or intelligence agencies can take advantage of the average person's tendency 
>> to laziness and cheapness by just asking organizations like Google, 
>> Facebook, Comcast, Amazon to just hand over the data they gathered in the 
>> name of building an advertising profile.
>> 
> 



Re: OT: hardware war with manufacturers (espionage claims)

2019-07-02 Thread Brian Brombacher
Hardware implants go beyond just sending packets out your network card.  They 
have transceivers that let agents control or snoop the device from a distance 
using RF.

You need to scan the hardware with RF equipment to be sure.

Good luck!

> On Jul 2, 2019, at 12:27 PM, Misc User  wrote:
> 
>> On 7/2/2019 12:43 AM, John Long wrote:
>> On Tue, 2 Jul 2019 10:07:59 +0300
>> Mihai Popescu  wrote:
>>> Hello,
>>> 
>>> I keep finding articles about some government bans against some
>>> hardware manufacturers related to some backdoor for espionage. I know
>>> this is an old talk. Most China manufacturers are under the search:
>>> Huawei, ZTE, Lenovo, etc.
>> It seems painfully obvious what's driving all the bans and vilification
>> of Chinese hardware and software is that the USA wants exclusive rights
>> to spy on you and won't tolerate any competition.
>> Does anybody think maybe the reason Google and Facebook don't pay taxes
>> anywhere might have something to do with what they do with all that
>> info they collect? Is the "new" talk about USA banning any meaningful
>> encryption proof of how seriously they take security and privacy?
>>> What do you think and do when using OpenBSD on this kind of hardware?
>> Lemote boxes are kinda neat but they're not the fastest in the world.
>> It beats the hell out of the alternatives if you can live with the
>> limitations.
>>> Do you prefer Dell, HP and Fujitsu?
>> Your only choice is probably to pick the least objectionable entity to
>> spy on you. If you buy Intel you know you're getting broken, insecure
>> crap no matter whose box it comes in. Sure it runs fast, but... in that
>> case everybody is going to spy on you.
>> /jl
> 
> Assume everything is compromised.  Don't trust something because someone
> else said it was good.  Really, the only way to test if a machine is
> spying on you, do some kind of packet capture to watch its traffic until
> you are satisfied.  But also put firewalls in front of your devices to
> ensure that if someone is trying to spy on you, their command and
> control packets don't make it to the compromised hardware.
> 
> Besides, subverting a hardware supply chain is a difficult and
> expensive process.  And if there is one thing I've learned in my career
> as a security consultant, its that no matter how malevolent or
> benevolent a government is, they are still, above all, cheap and lazy.
> And in a world where everything is built with the first priority is
> making the ship date, there are going to be so many security flaws to be
> exploited.  So much cheaper and easier to let Intel rush a design to
> market or Red Hat push an OS release without doing thorough testing and
> exploit the inevitable remote execution flaws.
> 
> Or intelligence agencies can take advantage of the average person's tendency 
> to laziness and cheapness by just asking organizations like Google, Facebook, 
> Comcast, Amazon to just hand over the data they gathered in the name of 
> building an advertising profile.
> 



Re: Bypass doas password check with chroot

2019-07-02 Thread Brian Brombacher
Use doas.conf to permit root with nopass option.

See doas.conf(5).
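
i.e. a one-liner like:

  permit nopass root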


> On Jul 2, 2019, at 4:43 AM, cho...@jtan.com wrote:
> 
> This isn't a bug per se, more of an incongruity in how security-centric tools 
> work wrt root, specifically doas and chroot/su/other:
> 
>  joe@drogo$ doas -s
>  drogo# doas -u chohag -s
>  doas (root@drogo) password:
>  doas: Authorization failed
>  drogo# chroot -u chohag /
>  drogo$ ^D
>  drogo# su -l chohag
>  drogo$ ^D
> 
> Obviously a little one-liner or tiny C app could achieve the same result too.
> 
> I assume this is more or less known, since each tool is working to its 
> designed spec, so is the above ultimately the desired behaviour? Should doas 
> ask even for root's password while myriad other ways of obtaining any user ID 
> do and probably always will exist?
> 
> On some servers root doesn't have a password.
> 
> Matthew
> 



Re: bwfm bcm43569

2019-06-28 Thread Brian Brombacher
You’re always welcome to submit a patch for functionality you want.  It might 
not be accepted but your own use case would be covered.

Making statements about the intentions of others, the way you keep doing, is a 
sign of a troll.  Submit a patch and you’ll be a helpful troll, if such a thing 
exists.

-Brian

On Jun 28, 2019, at 3:53 AM, 3  wrote:

>> Babut,
>> You are not correct, OpenBSD is a full Unix/BSD implementation and can
>> do most anything BSD/Unix can.
> 
>> I guess in the whole domain of general purpose functionality,
>> concurrent disk/filesystem IO performance would be OpenBSD's humblest
>> point today (due to not using block device multiqueueing and I get the
>> impression that the disk/IO subsystem is mostly not parallellized, for
>> some usecases also the 3GB buffer cap limit matters).
> 
>> Joseph
> 
> obsd do not have a reliable file system and nothing is being done in
> this direction. the fact that disk operations are slow is not so
> important for me(with the advent of ssd, this problem has ceased to be
> relevant).
> the hardware support is in a terrible state and relative situation is
> getting worse(i am not even talking about the drivers but about all
> the 802.11 subsystem which still does not know how to 11ac, although
> during this time it is outdated. and i suspect theo will cut out the
> 802.11 subsystem soon because no 802.11- no problem. it is obsd
> style).
> obsd has never been strong on multitasking, but now they are trying to
> make it worse(thank god that ht support still kept). 
> these disadvantages narrow the area where you can use obsd. your very
> approach to evaluate the completeness of compliance with some
> concept(as unix) is flawed. people do not use concepts in everyday
> life, they use some opportunities. and it is necessary to adjust the
> concept to what is demanded by people and not to limit opportunities
> because of the concept. and although theo is wrong, obsd is his
> brainchild and he has the right to kill him :\
> 
> ps: any fictional concepts is bullshit. there are only practical needs
> and common sense
> 



Re: bwfm bcm43569

2019-06-24 Thread Brian Brombacher
Provide a dmesg before you rant.

Thanks,
Brian

> On Jun 24, 2019, at 5:06 PM, 3  wrote:
> 
> i know that wifi adapters never worked in obsd(excluding those
> adapters for which drivers were written by vendors), but i found one
> that shows signs of life in 11n(11ac 2t2r supported by chip). it can
> be bought anywhere where there are samsung tv. moreover it even works
> in hostap mode(unlike the buggy athn).
> but without a ton of bugs not done:
> bwfm0: could not read register: TIMEOUT
> bwfm0: could not open rx pipe: IN_USE
> bwfm0: could not init bus
> bwfm0: firmware did not start up
> ..and sometimes kernel panic on boot, but works would still!
> meet https://wikidevi.com/wiki/Samsung_WCH730B
> bwfm0 at uhub0 port 1 configuration 1 interface 0 "Broadcom BCMUSB 802.11 
> Wireless Adapter" rev 2.10/0.01 addr 2
> photo without cover: http://pichost.org/images/2019/06/24/Untitled-138.jpg 
> it is shown where to solder the wires for wifi. on the right through
> two pins(ground is first) is bluetooth, but for obsd this does not
> matter(thx theo).  
> data sheet for chip: https://www.cypress.com/file/310246/download
> 



Re: Ansible install Re: Reboot and re-link

2019-06-22 Thread Brian Brombacher
Using Ansible to reinstall the operating system is like trying to turn a four 
door sedan into a monster truck with a hammer.

Wrong tool for the job.

> On Jun 22, 2019, at 6:46 PM, Frank Beuth  wrote:
> 
>> On Sat, Jun 22, 2019 at 03:06:30AM +0100, Andrew Luke Nesbit wrote:
>>> On 21/06/2019 19:02, Frank Beuth wrote:
>>> I don't want to re-open the hostilities, but installing OpenBSD via
>>> Ansible is very relevant to my interests.
>> 
>> I feel exactly the same way and am surprised that Ansible caused
>> hostilities.  Can you send me a link to the thread where this happened
>> please?  I want to know why, i.e., pros and cons.
> 
> It doesn't look to me like Ansible as such caused any trouble, it was 
> someone's use of Ansible in an unsupported way (and probably many other 
> configuration choices), leading to further problems, and then people got 
> angry.
> 
> For details search the misc@ archives for "Reboot and re-link" (the subject 
> line), things got spread across multiple threads:
> https://marc.info/?l=openbsd-misc=2=1=Reboot+and+re-link=t
>