Re: odd pf divert-packet problem

2024-02-08 Thread Peter J. Philipp



On 2/8/24 10:18, Stuart Henderson wrote:

On 2024/02/08 09:19, Peter J. Philipp wrote:

On 2/7/24 20:15, Janne Johansson wrote:

pass in log quick on wg1 inet proto udp from 192.168.178.1 to any port = 5060 scrub (reassemble tcp) divert-packet port 2

The mix of udp and tcp reassembly seems interesting there.

Yeah it does, but it is added on both stern (which works)
and superpod (which doesn't).  Since this is not such a big
problem I'm gonna rest on it, and perhaps move the
divert'ing entirely to stern.  The reason being that the
incoming SIP packets are not fragmented, as they are not
really (or ever) big enough.  So my phone setup works on
SDP'ing outgoing SIP packets.

I think that's a red herring.

"reassemble tcp" is poorly named and does not actually deal with
reassembling fragmented packets, see the paragraphs following this in
pf.conf(5) -

reassemble tcp
Statefully normalises TCP connections.  reassemble
tcp performs the following normalisations:

the things done by "reassemble tcp" *only* apply to TCP packets.


In other words there is no way to remove the reassemble tcp
scrub option as it's not in my rules to begin with.

It is added automatically for divert-packet rules.

I would start by adding "match log(matches)" to the top of pf.conf and
monitor the pflog0 interface to make sure packets are matched by the
intended rules. (tcpdump -nei pflog0)



I immediately thought this was good advice.  Also, after giving my reply
some time and thinking about it, I think I don't need sipdiv itself on
superpod.  The reason is that incoming packets were never problematic:
what's coming in arrives through a wireguard from the fritzbox at my
parents' house, and it leaves through a wireguard to my home.  Since the
MTU of both wireguards is 1420, if it goes in it must go out.  I'm glad
I still built sipdiv, because stern sits before the wireguards in my
home and requires shrinking the SIP with SDP.  Thankfully the fritzboxes
understand SDP; it's hard to find a knob for this on FritzOS! itself (I
couldn't).


So I have a .pcap that I'd share with anyone, showing why the early
quick rule does not get called.  It has to do with the nat rules on the
wireguard facing my parents' house: they create a state, and once it's
there I'm reasoning that NAT states don't get diverted.  I don't think
there are any similar NAT rules on stern.


Thanks and Best Regards to all who replied!

-peter

PS: please forgive the thunderbird formatting.  I still haven't set up a
way to get inbound mail into a mutt client, after setting up a mail host
that has no sshd but rather is controlled on the console.




Re: odd pf divert-packet problem

2024-02-08 Thread Stuart Henderson
On 2024/02/08 09:19, Peter J. Philipp wrote:
> 
> On 2/7/24 20:15, Janne Johansson wrote:
> > > pass in log quick on wg1 inet proto udp from 192.168.178.1 to any port = 5060 scrub (reassemble tcp) divert-packet port 2
> > The mix of udp and tcp reassembly seems interesting there.
> 
> Yeah it does, but it is added on both stern (which works)
> and superpod (which doesn't).  Since this is not such a big
> problem I'm gonna rest on it, and perhaps move the
> divert'ing entirely to stern.  The reason being that the
> incoming SIP packets are not fragmented, as they are not
> really (or ever) big enough.  So my phone setup works on
> SDP'ing outgoing SIP packets.

I think that's a red herring.

"reassemble tcp" is poorly named and does not actually deal with
reassembling fragmented packets, see the paragraphs following this in
pf.conf(5) -

reassemble tcp
   Statefully normalises TCP connections.  reassemble
   tcp performs the following normalisations:

the things done by "reassemble tcp" *only* apply to TCP packets.

> In other words there is no way to remove the reassemble tcp
> scrub option as it's not in my rules to begin with.

It is added automatically for divert-packet rules.

I would start by adding "match log(matches)" to the top of pf.conf and
monitor the pflog0 interface to make sure packets are matched by the
intended rules. (tcpdump -nei pflog0)
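
For example (a sketch of that suggestion, not a tested ruleset), the very
top of pf.conf would gain the single line

    match log (matches)

after which every rule a packet matches is logged to pflog0 and can be
watched live with

    # tcpdump -nei pflog0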



Re: odd pf divert-packet problem

2024-02-08 Thread Peter J. Philipp



On 2/7/24 20:15, Janne Johansson wrote:

pass in log quick on wg1 inet proto udp from 192.168.178.1 to any port = 5060 scrub (reassemble tcp) divert-packet port 2

The mix of udp and tcp reassembly seems interesting there.



Hi Janne,

Yeah it does, but it is added on both stern (which works) and superpod
(which doesn't).  Since this is not such a big problem I'm gonna rest on
it, and perhaps move the divert'ing entirely to stern.  The reason being
that the incoming SIP packets are not fragmented, as they are not really
(or ever) big enough.  So my phone setup works on SDP'ing outgoing SIP
packets.


In other words there is no way to remove the reassemble tcp scrub option
as it's not in my rules to begin with.


Best Regards,

-peter

PS: excuse any formatting problems; I'm doing this on thunderbird.




Re: odd pf divert-packet problem

2024-02-07 Thread Janne Johansson
> pass in log quick on wg1 inet proto udp from 192.168.178.1 to any port = 5060 scrub (reassemble tcp) divert-packet port 2

The mix of udp and tcp reassembly seems interesting there.


-- 
May the most significant bit of your life be positive.



odd pf divert-packet problem

2024-02-07 Thread Peter J. Philipp
Hi,

I have two hosts connected by a wireguard tunnel:  superpod (7.4/arm64) and
stern (snapshot of today/riscv64).

I'm using a program that I rewrote yesterday and this morning, which I
call sipdiv because it reads SIP signalling off a divert socket.

The code is publicly available since today:

https://github.com/pbug44/misc/tree/main/sipdiv
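
For orientation, the core of such a tool is just a bound divert(4) socket
(a minimal sketch, not sipdiv's actual code; port 2 matches the
divert-packet rule shown further below):

	/* minimal divert(4) reader: receive, (optionally rewrite), reinject */
	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <arpa/inet.h>
	#include <string.h>
	#include <err.h>

	int
	main(void)
	{
		char pkt[65535];
		struct sockaddr_in sin;
		socklen_t slen = sizeof(sin);
		int fd = socket(AF_INET, SOCK_RAW, IPPROTO_DIVERT);

		memset(&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(2);	/* divert-packet port 2 */
		if (fd == -1 || bind(fd, (struct sockaddr *)&sin, slen) == -1)
			err(1, "divert socket");
		for (;;) {
			ssize_t n = recvfrom(fd, pkt, sizeof(pkt), 0,
			    (struct sockaddr *)&sin, &slen);
			if (n > 0)	/* sipdiv would rewrite SIP/SDP here */
				sendto(fd, pkt, n, 0,
				    (struct sockaddr *)&sin, slen);
		}
	}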

I'm running into problems with the 7.4 host (superpod).  It doesn't read off
the divert socket for some reason, so I'll start by showing the pf rules.
Perhaps you can spot the problem immediately.

superpod# ps auxww | grep sipdiv
root 14841  0.0  0.0   248   516 ??  Ip    10:38AM  0:00.00 sipdiv -c
root 76341  0.0  0.0   204   384 p4  R+/1   7:36PM  0:00.00 grep sipdiv
superpod# fstat -p 14841
USER CMD    PID     FD MOUNT         INUM  MODE        R/W  SZ|DV
root sipdiv 14841 text /usr/local   77788  -r-xr-xr-x  r    17944
root sipdiv 14841   wd /                2  drwxr-xr-x  r      512
root sipdiv 14841   tr /home       942651  -rw-------  rw      64
root sipdiv 14841    0 /            52857  crw-rw-rw-  rw    null
root sipdiv 14841    1 /            52857  crw-rw-rw-  rw    null
root sipdiv 14841    2 /            52857  crw-rw-rw-  rw    null
root sipdiv 14841    3* internet raw divert 0xff800b0d1818

So you see descriptor "tr", which is a ktrace.out file of 64 bytes, and it's
not growing.  And there is no compacting being done by this proxy; it boggles
me.

Now the pf rules are very simple in their structure.  I'm not going to list the
anchors because it's a quick rule at the beginning that should match.

superpod# pfctl -srules
block return log all
pass all flags S/SA
block return in on ! lo0 proto tcp from any to any port 6000:6010
block return out log proto tcp all user = 55
block return out log proto udp all user = 55
pass in log quick on wg1 inet proto udp from 192.168.178.1 to any port = 5060 scrub (reassemble tcp) divert-packet port 2
anchor "esp" all
anchor "nat6" all
...
... and so on.

Since this is a quick rule I'd think it would be caught the very first time,
but it isn't.  It gets skipped.

I have cleared the states with this logic:

superpod# history 1 | grep awk
 381 pfctl -ss -vv | grep -A2 192\.168\.178\.1 | grep id | awk '{print $2}'
 382 pfctl -ss -vv | grep -A2 192\.168\.178\.1 | grep id | awk '{print $2}' | while read i ; do pfctl -k id -k $i; done
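
(A shorter route, assuming killing every state originating from that host
is acceptable, would be a single "pfctl -k 192.168.178.1", which kills
states by source host.)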

The other host (stern) has a similar rule and it works, no complaints.

I'm at my wits' end here.  Any help?  dmesg follows:

Best Regards,
-peter


OpenBSD 7.4 (GENERIC.MP) #2: Fri Dec  8 15:42:08 MST 2023

r...@syspatch-74-arm64.openbsd.org:/usr/src/sys/arch/arm64/compile/GENERIC.MP
real mem  = 4185800704 (3991MB)
avail mem = 3976454144 (3792MB)
random: good seed from bootblocks
mainbus0 at root: ACPI
psci0 at mainbus0: PSCI 1.0, SMCCC 1.1
efi0 at mainbus0: UEFI 2.7
efi0: EDK II rev 0x1
smbios0 at efi0: SMBIOS 3.0.0
smbios0: vendor Hetzner version "2017" date 11/11/2017
smbios0: Hetzner vServer
cpu0 at mainbus0 mpidr 0: ARM Neoverse N1 r3p1
cpu0: 64KB 64b/line 4-way L1 PIPT I-cache, 64KB 64b/line 4-way L1 D-cache
cpu0: 1024KB 64b/line 8-way L2 cache
cpu0: 
DP,RDM,Atomic,CRC32,SHA2,SHA1,AES+PMULL,LRCPC,DPB,ASID16,PAN+ATS1E1,LO,HPDS,VH,HAFDBS,CSV3,CSV2,SBSS+MSR
cpu1 at mainbus0 mpidr 1: ARM Neoverse N1 r3p1
cpu1: 64KB 64b/line 4-way L1 PIPT I-cache, 64KB 64b/line 4-way L1 D-cache
cpu1: 1024KB 64b/line 8-way L2 cache
cpu1: 
DP,RDM,Atomic,CRC32,SHA2,SHA1,AES+PMULL,LRCPC,DPB,ASID16,PAN+ATS1E1,LO,HPDS,VH,HAFDBS,CSV3,CSV2,SBSS+MSR
apm0 at mainbus0
agintc0 at mainbus0 shift 4:4 nirq 288 nredist 2 ipi: 0, 1, 2: 
"interrupt-controller"
agintcmsi0 at agintc0
agtimer0 at mainbus0: 25000 kHz
acpi0 at mainbus0: ACPI 5.1
acpi0: sleep states
acpi0: tables DSDT FACP APIC GTDT MCFG SPCR DBG2 IORT BGRT
acpi0: wakeup devices
acpimcfg0 at acpi0
acpimcfg0: addr 0x401000, bus 0-255
acpiiort0 at acpi0
"ACPI0007" at acpi0 not configured
"ACPI0007" at acpi0 not configured
pluart0 at acpi0 COM0 addr 0x900/0x1000 irq 33
pluart0: console
"LNRO0015" at acpi0 not configured
"LNRO0015" at acpi0 not configured
"QEMU0002" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO0005" at acpi0 not configured
"LNRO000

Re: PF divert-packet does not work with IPv6, only IPv4

2020-05-18 Thread Logan Dunbar
Hello Sashan,

I completely forgot about testing link-scope addresses. I implemented your rule
and sure enough, was able to get IPv6 connectivity. I greatly appreciate your
help with that.

I further experimented with that rule and modified it to only divert-packet on 
link-scope addresses:
pass out on $lan inet6 from fe80::/64 to fe80::/64 divert-packet port 700

As I expected, I lost IPv6 connectivity again. Using that divert program I 
rewrote, I see a ton of "sendto: Network is unreachable", but this only happens 
with link-scope addresses.

(OpenBSD Router)  (Client)
fe80::8ac:2eff:fec7:50da:34304 -> fe80::4d13:8090:55de:5d25:53174
a.out: sendto: Network is unreachable

Likewise, I am unable to ping any link-scope addresses from the router on the 
$lan side. However, I can ping any link-scope address from the client.

This very well could be due to a problem that I introduced when rewriting the 
divert program to do IPv6. However, when I use this rule:
pass out on $lan inet6 from fe80::/64 to fe80::/64 divert-packet port 700

I am unable to obtain any IPv6 connectivity, and if I disconnect the client and
reconnect it to this network, I won't even get a global IPv6 address, whether
using the divert program or Suricata. This leads me to believe that the "sendto:
Network is unreachable" is occurring in both Suricata and the divert program.

Thanks,
Logan Dunbar

‐‐‐ Original Message ‐‐‐
On Monday, May 18, 2020 5:05 AM, Alexandr Nedvedicky 
 wrote:

> Hello Logan,
>
> I had no time to try it out yet. There is one thing which caught my eye in
> your description. See my in-line question further below.
>
> On Mon, May 18, 2020 at 04:21:05AM +, Logan Dunbar wrote:
>
> > I had to forward this in because my ISP blocks SMTP, apologies if the 
> > formatting is incorrect.
> >
> > > Synopsis: PF divert-packet does not work with IPv6, only IPv4
> > > Category: kernel
> > > Environment:
> > > System : OpenBSD 6.7
> > > Details : OpenBSD 6.7-current (GENERIC.MP) #194: Sun May 17 09:52:26 MDT 
> > > 2020
> > > dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> >
> > Architecture: OpenBSD.amd64
> > Machine : amd64
> >
> > > Description:
> > > Recently, I have set up Suricata on OpenBSD and was able to get it to 
> > > work with IPv4 using divert-packet. However, when I attempted to use IPv6 
> > > using divert-packet, I lost all connectivity.
> > > How-To-Repeat:
> > > When I used this rule:
> > > pass out on $lan inet divert-packet port 700
> >
> > It worked with only IPv4, as it should, and it diverted perfectly.
> > When I attempted this rule:
> > pass out on $lan inet6 divert-packet port 700
>
> perhaps you may want to adjust the rule a bit to ignore link-scope
> addresses:
>
> pass out on $lan inet6 from !fe80::/64 to !fe80::/64 divert-packet port 700
>
> The modification above may help get your IPv6 connectivity back.
>
> Hope it helps
> regards
> sashan




Re: PF divert-packet does not work with IPv6, only IPv4

2020-05-18 Thread Alexandr Nedvedicky
Hello Logan,

I had no time to try it out yet. There is one thing which caught my eye in
your description. See my in-line question further below.

On Mon, May 18, 2020 at 04:21:05AM +, Logan Dunbar wrote:
> I had to forward this in because my ISP blocks SMTP, apologies if the 
> formatting is incorrect.
> 
> >Synopsis: PF divert-packet does not work with IPv6, only IPv4
> >Category: kernel
> >Environment:
> System  : OpenBSD 6.7
> Details : OpenBSD 6.7-current (GENERIC.MP) #194: Sun May 17 09:52:26 MDT 
> 2020
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> 
> Architecture: OpenBSD.amd64
> Machine : amd64
> >Description:
> Recently, I have set up Suricata on OpenBSD and was able to get it to work 
> with IPv4 using divert-packet. However, when I attempted to use IPv6 using 
> divert-packet, I lost all connectivity.
> >How-To-Repeat:
> When I used this rule:
> pass out on $lan inet divert-packet port 700
> 
> It worked with only IPv4, as it should, and it diverted perfectly.
> 
> When I attempted this rule:
> pass out on $lan inet6 divert-packet port 700

perhaps you may want to adjust the rule a bit to ignore link-scope
addresses:

pass out on $lan inet6 from !fe80::/64 to !fe80::/64 divert-packet port 700

The modification above may help get your IPv6 connectivity back.


Hope it helps
regards
sashan



PF divert-packet does not work with IPv6, only IPv4

2020-05-18 Thread Logan Dunbar
I had to forward this in because my ISP blocks SMTP, apologies if the 
formatting is incorrect.

>Synopsis: PF divert-packet does not work with IPv6, only IPv4
>Category: kernel
>Environment:
System  : OpenBSD 6.7
Details : OpenBSD 6.7-current (GENERIC.MP) #194: Sun May 17 09:52:26 MDT 
2020
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

Architecture: OpenBSD.amd64
Machine : amd64
>Description:
Recently, I have set up Suricata on OpenBSD and was able to get it to work with 
IPv4 using divert-packet. However, when I attempted to use IPv6 using 
divert-packet, I lost all connectivity.
>How-To-Repeat:
When I used this rule:
pass out on $lan inet divert-packet port 700

It worked with only IPv4, as it should, and it diverted perfectly.

When I attempted this rule:
pass out on $lan inet6 divert-packet port 700

I lost all IPv6 connectivity.

Thinking the problem could be with Suricata, I rewrote the divert(4) IPv4
example program to support IPv6 and still encountered the same problem.
According to the program, it looked like the IPv6 traffic was being diverted,
but I still had no IPv6 connectivity, which leads me to believe that there
could be something wrong with divert-packet's re-insertion process when using
IPv6. Although, I am a horrible programmer, so it is quite possible that the
program itself is flawed. However, neither Suricata nor this program works
with IPv6.

Here is the divert(4) program rewritten for IPv6:
https://pastebin.com/6Vm7WUVE
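
(For readers without the paste: the only structural change for IPv6 is the
socket setup; a minimal sketch, assuming the divert(4) semantics carry over
unchanged from the IPv4 example:

	struct sockaddr_in6 sin6;
	int fd = socket(AF_INET6, SOCK_RAW, IPPROTO_DIVERT);

	memset(&sin6, 0, sizeof(sin6));
	sin6.sin6_family = AF_INET6;
	sin6.sin6_port = htons(700);	/* port from the divert-packet rule */
	bind(fd, (struct sockaddr *)&sin6, sizeof(sin6));

the read/reinject loop stays recvfrom(2)/sendto(2) as in the IPv4 version.)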
>Fix:


dmesg:
OpenBSD 6.7-current (GENERIC.MP) #194: Sun May 17 09:52:26 MDT 2020
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 6423400448 (6125MB)
avail mem = 6216110080 (5928MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xbf9cb000 (11 entries)
bios0:
bios0: QEMU Standard PC (i440FX + PIIX, 1996)
acpi0 at bios0: ACPI 1.0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP APIC SSDT HPET SRAT BGRT
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: AMD Opteron(TM) Processor 6272, 2100.39 MHz, 15-01-02
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,XOP,FMA4,CPCTR,TSC_ADJUST,IBPB,VIRTSSBD
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache, 16MB 64b/line 16-way L3 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 999MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: AMD Opteron(TM) Processor 6272, 2100.11 MHz, 15-01-02
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,XOP,FMA4,CPCTR,TSC_ADJUST,IBPB,VIRTSSBD
cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache, 16MB 64b/line 16-way L3 cache
cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: smt 0, core 0, package 1
cpu2 at mainbus0: apid 2 (application processor)
cpu2: AMD Opteron(TM) Processor 6272, 2100.11 MHz, 15-01-02
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,XOP,FMA4,CPCTR,TSC_ADJUST,IBPB,VIRTSSBD
cpu2: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache, 16MB 64b/line 16-way L3 cache
cpu2: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu2: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu2: smt 0, core 0, package 2
cpu3 at mainbus0: apid 3 (application processor)
cpu3: AMD Opteron(TM) Processor 6272, 2100.11 MHz, 15-01-02
cpu3: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,XOP,FMA4,CPCTR,TSC_ADJUST,IBPB,VIRTSSBD
cpu3: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache, 16MB 64b/line 16-way L3 cache
cpu3: ITLB 255 4KB entries dire


PF divert-packet

2018-02-22 Thread Romain Gabet
Hi,


I don't know if it's a bug but, if I use "set prio" or "set queue" with
"divert-packet", the priority isn't reflected in the VLAN header and the
packets aren't queued.

I diverted packets to snort. I use OpenBSD 6.2 (GENERIC.MP).

PS : sorry for my english.

Best regards.


PF divert-packet with nat-to/rdr-to works in 4.9, breaks in -current

2011-10-04 Thread Lawrence Teo
I was testing PF rules that use divert-packet with nat-to/rdr-to, and
found that a set of PF rules that work in OpenBSD 4.9 no longer work in
-current.

I tested with OpenBSD 4.9/i386 and the September 22, 2011 i386
snapshot. Their kern.version values are:

OpenBSD 4.9 (GENERIC) #671: Wed Mar  2 07:09:00 MST 2011
dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC

OpenBSD 5.0-current (GENERIC) #60: Thu Sep 22 11:33:48 MDT 2011
dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC

This is a long bug report, so I will start with a summary of the test
results:

Test Scenario                   OpenBSD 4.9   Sep 22, 2011 snap
-------------                   -----------   -----------------
A: Without NAT (Outbound)       Success       Success
B: Without NAT (Inbound)        Success       Success
C: With NAT (Outbound; nat-to)  Success       Failure
D: With NAT (Inbound; rdr-to)   Success       Failure

The rest of the bug report will describe the test network, the four
test scenarios, the test program used to read/reinject packets to/from
the divert socket, and the test results with OpenBSD 4.9 followed by
the Sep 22, 2011 snapshot.


==[ TEST NETWORK ]==

The test network consists of three nodes -- Outside, Firewall, and
Inside.  It is set up within a VMware ESXi 4.1.0 environment as
follows.

+----------+
| Outside  | OpenBSD 4.9/i386
+----+-----+
     | 10.0.0.2
     |
     |
 em0 | 10.0.0.1
+----+-----+
| Firewall |
+----+-----+
 em1 | 192.168.1.1
     |
     |
     | 192.168.1.2
+----+-----+
|  Inside  | OpenBSD 4.9/i386
+----------+

The device being tested is Firewall. The Outside and Inside nodes are
OpenBSD 4.9/i386 VMs that are used to send/receive traffic for the
tests.


==[ TEST SCENARIOS ]==

The tests were done by setting up Firewall with divert-packet PF rules
in four scenarios. The four scenarios were first tested with OpenBSD 4.9 on
the Firewall followed by the snapshot. The scenarios are:

Scenario A: Without NAT - Outbound
Scenario B: Without NAT - Inbound
Scenario C: With NAT - Outbound
Scenario D: With NAT - Inbound

In the Without NAT scenarios (A & B), the following PF rules were applied:

set skip on lo
pass # to establish keep-state
pass out on em0 divert-packet port 7000
pass in on em0 divert-packet port 7000

In the With NAT scenarios (C & D), the following PF rules were applied:

set skip on lo
pass # to establish keep-state
pass out on em0 divert-packet port 7000 nat-to (em0:0)
pass in on em0 inet proto tcp to (em0:0) port 13 divert-packet port 7000 rdr-to 192.168.1.2 port 13

In the Outbound scenarios, traffic was initiated from Inside to Outside by
running "ftp http://10.0.0.2/index.html" on the Inside box to fetch a
file from the HTTP server running on Outside.

In the Inbound test, traffic was initiated from Outside to Inside.
This test consists of connecting to the daytime port (TCP port 13) on
the Inside box as follows:

(a) In the Without NAT scenarios, the connection was done directly from
the Outside box ("telnet 192.168.1.2 13");

(b) In the With NAT scenarios, the connection was made to the public IP
address of the Firewall ("telnet 10.0.0.1 13"), and the PF rdr-to rule
would forward that packet to the Inside box.


==[ TEST PROGRAM ]==

I wrote a very basic test program called div.c to read the packets from
the divert socket and reinject them back into the kernel.  It does not
attempt to process the packets in any way.

--BEGIN--
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <netinet/tcpip.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int
main(int argc, char *argv[])
{
	int fd, s;
	struct sockaddr_in sin;
	socklen_t sin_len;

	time_t now;
	struct tm tres;
	char buf[256] = "";

	memset(&tres, 0, sizeof(tres));
	time(&now);
	localtime_r(&now, &tres);

	memset(buf, 0, sizeof(buf));

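	/* open a divert(4) socket; packets matched by a divert-packet
	 * rule are handed to whoever binds the rule's port */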
	fd = socket(AF_INET, SOCK_RAW, IPPROTO_DIVERT);
	if (fd == -1) {
		fprintf(stderr, "could not open divert socket\n");
		exit(1);
	}

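	/* bind the port named in the divert-packet rules (7000 here) */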
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(7000);
	sin.sin_addr.s_addr = 0;

	sin_len = sizeof(struct sockaddr_in);

	s = bind(fd, (struct sockaddr *) &sin, sin_len);
	if (s == -1) {
		fprintf(stderr, "bind failed\n");
		exit(1);
	}

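	/* read each diverted packet; the full program simply reinjects
	 * it back into the kernel unchanged */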
	for (;;) {
		ssize_t n;
		char packet[131072];
		struct ip *ip_hdr;
		struct tcpiphdr *tcpip_hdr;
		char src_ip[256], dst_ip[256];

		memset(packet, 0, sizeof(packet));
		n = recvfrom(fd, packet, sizeof(packet), 0,
		    (struct sockaddr *) &sin, &sin_len);

		memset(&tres, 0, sizeof(tres));