Potential bug in /usr/sbin/ndp lines 316-364 with RTF_LLINFO

2021-07-08 Thread Vladimir Nikishkin
Hello, everyone.

I believe, ndp.c has a bug.

1. Line 319 defines a fresh m_rtmsg and does not initialise it.
2. Therefore m_rtmsg.m_rtm should be empty or zero (or some constant; I am
not entirely sure).
3. Line 329 defines rtm and makes it a pointer to this fresh (empty or
constant) m_rtmsg.m_rtm.
4. Nothing uses either m_rtmsg or rtm until lines 363-364.
5. On line 363, `if` checks that rtm->rtm_flags & RTF_LLINFO is true.
Effectively, it is checking that m_rtmsg.m_rtm.rtm_flags has some bit set.

This check therefore tests either an uninitialised value or a constant one
(I am not entirely sure how fresh structures are initialised in OpenBSD). In
either case, it is not useful.

In effect, `ndp -s  ` always fails, because this check is
always false.
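
To make the pattern concrete, here is a minimal paraphrased sketch of what
steps 1-5 describe (this is not the actual ndp.c source; the wrapper struct,
the buffer size and the function name set_sketch are only illustrative):

#include <sys/types.h>
#include <sys/socket.h>
#include <net/route.h>

/*
 * Sketch only: a fresh routing-message buffer whose flags are tested
 * before any RTM_GET reply has been read into it.  At file scope the
 * object is zero-initialised, so rtm_flags is 0.
 */
static struct {
        struct rt_msghdr m_rtm;
        char m_space[512];
} m_rtmsg;

static int
set_sketch(void)
{
        struct rt_msghdr *rtm = &m_rtmsg.m_rtm;

        /* ... sockaddrs get filled in here, but rtm_flags is never set ... */

        if ((rtm->rtm_flags & RTF_LLINFO) == 0)
                return (1);     /* taken every time: flags are still zero */
        return (0);
}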

-- 
Yours sincerely,
Vladimir Nikishkin (MiEr, lockywolf)
(Laptop)



Re: PF annoying messages

2021-07-08 Thread Cameron Simpson
On 07Jul2021 10:59, Pierre Dupond <76nem...@gmx.ch> wrote:
> I am setting up a firewall with PF. The strategy used is quite 
> common:
>   set block-policy return
>   set loginterface none
>   set skip on lo0
>   match in all scrub (random-id reassemble tcp)
>   block log

I think this sets _both_ block and log as the default packet disposition,
_not_ "log only if I block"; i.e. a packet that a pass rule later accepts will
still get logged.

Try putting just "block" here, and annotating only the rules you want to 
log with "log".

I was going to suggest a final "block log", but that will only work if 
all your pass rules have "quick", preventing further rules from 
applying.

Cheers,
Cameron Simpson 



Re: TCP FIN hangups in encrypted ESP tunnel

2021-07-08 Thread Andre Stoebe
Hi Peter,

it's not just you; I have had similar problems since around July 1, but with a
netcup server.

Since then, downloading a bigger file from the netcup server using scp or rsync
fails pretty consistently. Normal ssh sessions or other stuff like imap or xmpp
remain stable, as far as I can tell.

I run the scp/rsync over wg, but it doesn't matter; it happens over pppoe too.

Like you, I have spent the last few evenings looking for mistakes on my side,
even though this has been working for years. So now I guess the problem is on
their side, or somewhere in between?

I see the following when the file transfer fails:

192.168.100.1 is my router, where I run "scp 192.168.100.2:dump.gz ."
192.168.100.2 is the netcup server

237470  28.285237 192.168.100.1 -> 192.168.100.2 TCP 56 12534 -> 22 [ACK] 
Seq=55922 Ack=195360998 Win=120512 Len=0 TSval=2630531475 TSecr=89901171
237471  28.285242 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237472  28.285260 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237473  28.285288 192.168.100.1 -> 192.168.100.2 TCP 56 12534 -> 22 [ACK] 
Seq=55922 Ack=195363734 Win=117824 Len=0 TSval=2630531475 TSecr=89901171
237474  28.285293 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237475  28.285311 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237476  28.285339 192.168.100.1 -> 192.168.100.2 TCP 56 12534 -> 22 [ACK] 
Seq=55922 Ack=195366470 Win=115072 Len=0 TSval=2630531475 TSecr=89901171
237477  28.285348 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: [TCP 
Previous segment not captured] , Encrypted packet (len=1368)
237478  28.285382 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#1] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=115072 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195369206
237479  28.285498 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Window Update] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195369206
237480  28.285863 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237481  28.285906 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#2] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195370574
237482  28.285914 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237483  28.285941 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#3] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195371942
237484  28.285946 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237485  28.285973 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#4] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195373310
237486  28.285979 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237487  28.286006 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#5] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195374678
237488  28.286016 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237489  28.286044 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#6] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195376046
237490  28.286054 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: Encrypted 
packet (len=1368)
237491  28.286081 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#7] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=123264 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195377414
237492  28.286343 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Window Update] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=131456 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195377414
237493  28.286421 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Window Update] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=139648 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195377414
237494  28.287076 192.168.100.2 -> 192.168.100.1 TCP 56 22 -> 12534 [FIN, ACK] 
Seq=195377414 Ack=55922 Win=16384 Len=0 TSval=89901171 TSecr=2630531475
237495  28.287141 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Dup ACK 237476#8] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=139648 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195377414
237496  28.288062 192.168.100.1 -> 192.168.100.2 TCP 68 [TCP Window Update] 
12534 -> 22 [ACK] Seq=55922 Ack=195366470 Win=147712 Len=0 TSval=2630531475 
TSecr=89901171 SLE=195367838 SRE=195377414
237497  28.288586 192.168.100.1 -> 192.168.100.2 SSHv2 104 Client: Encrypted 
packet (len=36)
237498  28.295439 192.168.100.2 -> 192.168.100.1 SSHv2 1424 Server: [TCP Fast 
Retransmission] , 

Re: X11 SIGSEGV on VirtualBox

2021-07-08 Thread Andrew Daugherity
On Fri, Jun 18, 2021 at 3:24 PM Chris Narkiewicz  wrote:
>
> I'm trying to run xenodm on VirtualBox VM.
> VirtualBox 6.1.16_Ubuntu r140961 running on Ubuntu 20.04 with Intel
> card. VM uses VMSVGA display with NO 3D acceleration.
>
> Fresh OpenBSD 6.9 install, but I tried latest snapshot - same problem.
>
> When starting Xorg server, it crashes with SIGSEGV. Does anybody know
> why it happens? How can I generate some actionable debug output, such
> as stacktrace, to help identify root cause?

See the "How to get a core file out of the X server?" section of the
Xenocara README [1].  You can then load Xorg and the core file into
gdb/lldb.  I think ports egdb may do better in some cases? Others who
are more knowledgeable can weigh in on that.
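
Something along these lines, for example (the paths here are just
illustrative; the README describes where the core actually ends up):

  $ egdb /usr/X11R6/bin/Xorg /path/to/Xorg.core
  (gdb) bt full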

Potential workarounds: use the "vesa" X driver instead of "vmware"?  I
think VBox supports that but I don't remember.  Another option is
efifb/wsfb, which of course requires configuring the VM for UEFI mode
and reinstalling.  Both probably have lower performance though.

-Andrew

[1] https://github.com/openbsd/xenocara



Re: TCP FIN hangups in encrypted ESP tunnel

2021-07-08 Thread Brian Brombacher



> On Jul 8, 2021, at 8:05 AM, Peter J. Philipp  wrote:
> 
> On Wed, Jul 07, 2021 at 11:57:50PM +0300, Ville Valkonen wrote:
>> Hi,
>> 
>> not sure if related but my Linux box (also in Hetzner) also started to have
>> flaky connection lately.
>> 
>> --
>> Regards,
>> Ville
> 
> I opened a ticket with Hetzner last week thinking it was an in-band DoS.  They
> assured me, they are not seeing this.
> 
> My VPS is in Falkenstein for what it's worth.  Because the problems started
> occurring as I was upgrading my Telekom.de link, I thought it was related to
> that until I did tcpdumps.  I mentioned it to the telekom.de chat help line
> regardless.
> 
> On your Linux box have you done any debugging as to why it became flaky?
> 
> Some Linux equivalents that I know:  ktrace/strace, tcpdump is the same.  Are
> you seeing these through an IPSEC tunnel or in plain Internetworking?
> 
> Also are you using the Intel VPS's or the AMD Epyc VPS's?  I think it may be
> important to know if anything like spectre is able to write variables back to
> the cloud instance.  In that case we're f*cked and only Hetzner can help with
> new hardware.
> 
> Best Regards,
> -peter
> 

Are you changing the default TCPKeepAlive setting?  It defaults to yes.  It 
exists as an option in both sshd_config and ssh_config.  Additionally, 
ClientAliveInterval and ServerAliveInterval might be handy.  A sysctl also 
exists to turn TCP keepalive on for all connections by default.
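
For example (the values below are purely illustrative; see sshd_config(5) and
ssh_config(5)):

  # server side, sshd_config
  TCPKeepAlive yes
  ClientAliveInterval 15

  # client side, ssh_config
  TCPKeepAlive yes
  ServerAliveInterval 15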

Not sure it’ll help.  Does your download crawl to a halt, then after a period 
of time, you get the FIN?

(Note: I don’t have any Hetzner hosts and I’m just guessing based on my 
experience with Azure)

-Brian




Re: TCP FIN hangups in encrypted ESP tunnel

2021-07-08 Thread Peter J. Philipp
On Thu, Jul 08, 2021 at 12:18:09PM -0400, Brian Brombacher wrote:
[..]

> Are you changing the default TCPKeepAlive setting?  It defaults to yes.  It 
> exists as options in sshd_ and ssh_config.  Additionally, ClientAliveInterval 
> and ServerAliveInterval might be handy.  A sysctl also exists to turn TCP 
> keep alive on for all connections by default.
> 

I didn't change that setting but I have this on pod's sshd_config:

#TCPKeepAlive yes

> Not sure it'll help.  Does your download crawl to a halt, then after a 
> period of time, you get the FIN?


So what I'm doing is 'scp pod:Backup/*gz .' to the local directory on arda
(local host).

In the tcpdump the packets go by fairly rapidly; in fact there are a lot of
packets before the FIN packet even makes it through to arda (because there
are at least a dozen packets in flight).

One note here:  I applied the IPSEC to escape any in-band attacks; however, when
I did that last week, the very first time I tried to scp my backups the
connection did indeed crawl to a halt.  It was just sitting there.  I worried
back then that someone had spoofed win 0 segments into the IPSEC'ed stream
somehow, but I applied anti-spoof rules on IPSEC so that can't happen, and the
fear was probably unfounded in retrospect.  I still don't have an explanation
for that, though.  Perhaps it was the missing scrub to lower the MTU on enc0,
which isn't adjustable?  Anyhow, last week I did manage to download the 4 GB of
backups, but doing so this week did not work.


> (Note: I don't have any Hetzner hosts and I'm just guessing based on my 
> experience with Azure)
> 
> -Brian

Thanks.  I looked over my firewall rules on enc0 a little and did notice I have
a scrub rule that changes the mss to 1240; I'm not sure though if that would
cause this behaviour.  It wasn't used anyhow when I just did the plain scp
without wrapping IPSEC around it, and then it still FIN'ed and subsequently
RST'ed.
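
(For reference, the general shape of that scrub rule is something like the
line below; this is only illustrative, not my exact rule:)

  match out on enc0 scrub (max-mss 1240)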

Best Regards,
-peter



bsd.rd upgrade can't installboot on fde

2021-07-08 Thread Nicola Dell'Uomo
Hi,

upgrading from current 6.9 GENERIC#99 amd64 to current 6.9 GENERIC#108 amd64 
ends up with this error:

[snip]
Making all device nodes... done
installboot: write: No space left on device

Failed to install bootblocks.
You will not be able to boot OpenBSD from sd1.

[end]

In this case sd1 is the encrypted softraid(4) device.

After that the system reboots as usual; however, I have to hash /bsd again in 
order to get KARL working.
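
(By hashing /bsd I mean roughly the following; double-check the exact
invocation against cksum(1):)

  # sha256 -h /var/db/kernel.SHA256 /bsd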

These are my partitions:

puffy$ df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd1a      1.9G    160M    1.7G      8%   /
/dev/sd1k      825G    338G    446G     43%   /home
/dev/sd1d      7.8G   13.1M    7.3G      0%   /tmp
/dev/sd1f      7.8G    4.9G    2.5G     67%   /usr
/dev/sd1g     11.6G    240M   10.8G      2%   /usr/X11R6
/dev/sd1h     23.2G    8.1G   14.0G     37%   /usr/local
/dev/sd1j      7.8G    1.5G    5.9G     20%   /usr/obj
/dev/sd1i      4.8G    1.2G    3.4G     27%   /usr/src
/dev/sd1e     35.8G    260M   33.8G      1%   /var

Any idea about what is happening?


Re: TCP FIN hangups in encrypted ESP tunnel

2021-07-08 Thread Peter J. Philipp
On Wed, Jul 07, 2021 at 11:57:50PM +0300, Ville Valkonen wrote:
> Hi,
> 
> not sure if related but my Linux box (also in Hetzner) also started to have
> flaky connection lately.
> 
> --
> Regards,
> Ville

I opened a ticket with Hetzner last week thinking it was an in-band DoS.  They
assured me, they are not seeing this.

My VPS is in Falkenstein for what it's worth.  Because the problems started
occurring as I was upgrading my Telekom.de link, I thought it was related to that
until I did tcpdumps.  I mentioned it to the telekom.de chat help line regardless.

On your Linux box have you done any debugging as to why it became flaky?

Some Linux equivalents that I know:  ktrace/strace, tcpdump is the same.  Are
you seeing these through an IPSEC tunnel or in plain Internetworking?

Also are you using the Intel VPS's or the AMD Epyc VPS's?  I think it may be
important to know if anything like spectre is able to write variables back to
the cloud instance.  In that case we're f*cked and only Hetzner can help with
new hardware.

Best Regards,
-peter