Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-22 Thread Sjöholm Per-Olov

> On 21 Apr 2017, at 14:22, Sjöholm Per-Olov  wrote:
> 
> 
>> On 21 Apr 2017, at 10:34, Stuart Henderson  wrote:
>> 
>> On 2017-04-20, Sjöholm Per-Olov  wrote:
>>> Could it be any buffers that are causing this in 6.1 but not in 6.0?
>> 
>> There were changes that would allow larger TCP buffers in 6.1. This
>> would not have made a difference to normal or NATted connections from
>> non-OpenBSD hosts going through PF to non-OpenBSD hosts, but could
>> possibly affect some configurations with proxies (though only if PF
>> rules were already dodgy - you would have active states in
>> "pfctl -ss|grep -A1 tcp" without wscale values if this was the case).
>> 
>> It might be worth bumping up the pf log level and seeing if the
>> system logs give you more clues. The default is "error"; you need
>> "notice" to get the ones which might give useful clues (loose state
>> match warnings or state mismatch errors). (On a busy machine, be
>> ready to back off on the debug level in case it causes too much load.)
>> 
>> 
> 
> Another addition… This is what the problem actually looks like:
> 
> ## 1 ## When the problem is ongoing… Telnet from internet to the DMZ server FAILS
> [sjoholmp@dewey ~]$ telnet mail.dyn.incedo.org 25
> Trying 155.4.8.28...
> ^C
> 
> ## 2 ## In the pf log it looks like this
> Apr 21 14:06:28.751796 rule 573/(match) pass in on em3: 168.235.89.110.42126 
> > 192.168.1.12.25: S 2597688027:2597688027(0) win 29200 <mss 1460,sackOK,timestamp 668227520 0,nop,wscale 6> (DF)
> Apr 21 14:06:28.751824 rule 63/(match) block out on em3: 155.4.8.28.25 > 
> 168.235.89.110.42126: R 0:0(0) ack 2597688028 win 0 (DF)
> 
> 
> ## 3 ## Reload PF
> root@xanadu:/var/log#pfctl -f /etc/pf.conf
> root@xanadu:/var/log#
> 
> 
> ## 4 ## Telnet from internet again WORKS
> [sjoholmp@dewey ~]$ telnet mail.dyn.incedo.org 25
> Trying 155.4.8.28...
> Connected to mail.dyn.incedo.org.
> Escape character is '^]'.
> 220 mail.dyn.incedo.org ESMTP Sendmail; Fri, 21 Apr 2017 14:08:16 +0200
> 
> 
> ## 5 ## And in the pf log it looks like this
> Apr 21 14:08:16.239213 rule 573/(match) pass in on em3: 168.235.89.110.42168 
> > 192.168.1.12.25: S 4285065753:4285065753(0) win 29200 <mss 1460,sackOK,timestamp 668335004 0,nop,wscale 6> (DF)
> Apr 21 14:08:16.239267 rule 89/(match) pass out on vlan3: 
> 168.235.89.110.42168 > 192.168.1.12.25: S 4285065753:4285065753(0) win 29200 
> <mss 1460,sackOK,timestamp 668335004 0,nop,wscale 6> (DF)
> 
> ## 6 ## After a few hours the same problem occurs again, which requires a PF 
> reload
> 
> The extra dmesg output after pfctl -x notice only shows…
> pf: pf_map_addr: selected address 155.4.8.28
> 
> 
> I have serious problems with 6.1. I will probably go back to 6.0. I will 
> give it to the end of this day and check what I can…
> 
> Peo
> 


I downgraded to 6.0 stable again and all problems are gone.

As I had cleaned up sysctl and reduced the ruleset to a basic one and still 
had the problem, I suspect there could be a problem in the 6.1 kernel. I tried 
both the uniprocessor and MP kernels, with the same problem in each case.

/Peo



Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-21 Thread Sjöholm Per-Olov

> On 21 Apr 2017, at 10:34, Stuart Henderson  wrote:
> 
> On 2017-04-20, Sjöholm Per-Olov  wrote:
>> Could it be any buffers that are causing this in 6.1 but not in 6.0?
> 
> There were changes that would allow larger TCP buffers in 6.1. This
> would not have made a difference to normal or NATted connections from
> non-OpenBSD hosts going through PF to non-OpenBSD hosts, but could
> possibly affect some configurations with proxies (though only if PF
> rules were already dodgy - you would have active states in
> "pfctl -ss|grep -A1 tcp" without wscale values if this was the case).
> 
> It might be worth bumping up the pf log level and seeing if the
> system logs give you more clues. The default is "error"; you need
> "notice" to get the ones which might give useful clues (loose state
> match warnings or state mismatch errors). (On a busy machine, be
> ready to back off on the debug level in case it causes too much load.)
> 
> 

Another addition… This is what the problem actually looks like:

## 1 ## When the problem is ongoing… Telnet from internet to the DMZ server FAILS
[sjoholmp@dewey ~]$ telnet mail.dyn.incedo.org 25
Trying 155.4.8.28...
^C

## 2 ## In the pf log it looks like this
Apr 21 14:06:28.751796 rule 573/(match) pass in on em3: 168.235.89.110.42126 > 
192.168.1.12.25: S 2597688027:2597688027(0) win 29200 <mss 1460,sackOK,timestamp 668227520 0,nop,wscale 6> (DF)
Apr 21 14:06:28.751824 rule 63/(match) block out on em3: 155.4.8.28.25 > 
168.235.89.110.42126: R 0:0(0) ack 2597688028 win 0 (DF)


## 3 ## Reload PF
root@xanadu:/var/log#pfctl -f /etc/pf.conf
root@xanadu:/var/log#


## 4 ## Telnet from internet again WORKS
[sjoholmp@dewey ~]$ telnet mail.dyn.incedo.org 25
Trying 155.4.8.28...
Connected to mail.dyn.incedo.org.
Escape character is '^]'.
220 mail.dyn.incedo.org ESMTP Sendmail; Fri, 21 Apr 2017 14:08:16 +0200


## 5 ## And in the pf log it looks like this
Apr 21 14:08:16.239213 rule 573/(match) pass in on em3: 168.235.89.110.42168 > 
192.168.1.12.25: S 4285065753:4285065753(0) win 29200 <mss 1460,sackOK,timestamp 668335004 0,nop,wscale 6> (DF)
Apr 21 14:08:16.239267 rule 89/(match) pass out on vlan3: 168.235.89.110.42168 
> 192.168.1.12.25: S 4285065753:4285065753(0) win 29200 <mss 1460,sackOK,timestamp 668335004 0,nop,wscale 6> (DF)

## 6 ## After a few hours the same problem occurs again, which requires a PF 
reload

The extra dmesg output after pfctl -x notice only shows…
pf: pf_map_addr: selected address 155.4.8.28
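
The same pf debug messages should also end up in /var/log/messages via
syslogd, so one way to watch for them live (a sketch; the grep pattern is
just illustrative):

root@xanadu:/var/log#tail -f /var/log/messages | grep 'pf:'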


I have serious problems with 6.1. I will probably go back to 6.0. I will give 
it to the end of this day and check what I can…

Peo



Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-21 Thread Sjöholm Per-Olov

> On 21 Apr 2017, at 10:34, Stuart Henderson  wrote:
> 
> On 2017-04-20, Sjöholm Per-Olov  wrote:
>> Could it be any buffers that are causing this in 6.1 but not in 6.0?
> 
> There were changes that would allow larger TCP buffers in 6.1. This
> would not have made a difference to normal or NATted connections from
> non-OpenBSD hosts going through PF to non-OpenBSD hosts, but could
> possibly affect some configurations with proxies (though only if PF
> rules were already dodgy - you would have active states in
> "pfctl -ss|grep -A1 tcp" without wscale values if this was the case).
> 
> It might be worth bumping up the pf log level and seeing if the
> system logs give you more clues. The default is "error"; you need
> "notice" to get the ones which might give useful clues (loose state
> match warnings or state mismatch errors). (On a busy machine, be
> ready to back off on the debug level in case it causes too much load.)
> 
> 

Tnx for the answer, Stuart

I will check and do what you suggest. In the meantime, some additions…

I have removed all tuning in sysctl.conf to make sure there is nothing that 
interferes.

When pf is reloaded it works perfectly for hours. And then the kernel, just 
like that, stops routing some packets. When it works, it looks like this in 
the log…
Apr 21 10:32:14.734332 rule 573/(match) pass in on em3: 202.67.41.252.49461 > 
192.168.1.12.25: S 583218598:583218598(0) win 8192  (DF)
Apr 21 10:32:14.734356 rule 89/(match) pass out on vlan3: 202.67.41.252.49461 > 
192.168.1.12.25: S 583218598:583218598(0) win 8192  (DF)

Note that the problem appeared and started just between the connections above 
and below. When it happens I have persistent intermittent issues that are only 
solved by reloading pf.

When it stops working after a few hours, it looks like this: the kernel simply 
refuses to forward the incoming internet packet on em3 to dmz1 (i.e. vlan3)…
Apr 21 10:32:17.373591 rule 573/(match) pass in on em3: 122.200.1.158.55956 > 
192.168.1.12.25: S 1479648704:1479648704(0) win 8192  (DF)
Apr 21 10:32:17.373618 rule 63/(match) block out on em3: 155.4.8.28.25 > 
122.200.1.158.55956: R 0:0(0) ack 1479648705 win 0 (DF)


root@xanadu:/var/log#pfctl -g -sr|grep @573
@573 pass in log quick on em3 inet proto tcp from any to 192.168.1.12 port = 25 
flags S/FSRA keep state (source-track rule, max-src-states 90, max-src-conn 90, 
max-src-conn-rate 30/30, max-src-nodes 70, overload  flush global, 
src.track 30) label "MAIL"
root@xanadu:/var/log#pfctl -g -sr|grep @89
@89 pass out log quick on vlan3 inet proto tcp all flags S/SA
root@xanadu:/var/log#pfctl -g -sr|grep @63 
@63 block drop log all

So is it perhaps possible to give any more hints based on this extra info? Here 
I see wscale in both cases.
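
One thing that could be checked while it is broken (a sketch; filtering on the
DMZ address is just illustrative) is whether pf still holds a state for such a
connection and whether that state carries wscale values:

root@xanadu:/var/log#pfctl -ss | grep -A1 192.168.1.12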

Note though that on 6.0 it worked flawlessly. I upgraded from 6.0 and just did 
what the upgrade guide said, plus a sysmerge where I kept my pf.conf as is.

Peo




Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-21 Thread Stuart Henderson
On 2017-04-20, Sjöholm Per-Olov  wrote:
> Could it be any buffers that are causing this in 6.1 but not in 6.0?

There were changes that would allow larger TCP buffers in 6.1. This
would not have made a difference to normal or NATted connections from
non-OpenBSD hosts going through PF to non-OpenBSD hosts, but could
possibly affect some configurations with proxies (though only if PF
rules were already dodgy - you would have active states in
"pfctl -ss|grep -A1 tcp" without wscale values if this was the case).

It might be worth bumping up the pf log level and seeing if the
system logs give you more clues. The default is "error"; you need
"notice" to get the ones which might give useful clues (loose state
match warnings or state mismatch errors). (On a busy machine, be
ready to back off on the debug level in case it causes too much load.)
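
For reference, a minimal sketch of the commands meant here (pfctl -x sets
the debug level):

pfctl -x notice            # log at "notice" and above
pfctl -x error             # back to the default once done
pfctl -ss | grep -A1 tcp   # established states; look for missing wscale values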




Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-20 Thread Sjöholm Per-Olov

> On 20 Apr 2017, at 01:18, Sjöholm Per-Olov  wrote:
> 
> 
>> On 20 Apr 2017, at 00:39, Fred  wrote:
>> 
>> On 04/19/17 23:30, Sjöholm Per-Olov wrote:
>>> Anyone with a clue would be _very_ much appreciated….
>>> I upgraded from 6.0 to 6.1 two days ago and **did not change anything 
>>> network-related** at all. After that, clients have random problems 
>>> reaching my DMZ web server (CentOS + nginx). I have checked the release 
>>> notes, but could not see any clue there. See logs below
>>> # Relevant rules from PF
>>> LAN_INT="vlan2"
>>> DMZ1_INT="vlan3"
>>> DMZ2_INT="vlan4"
>>> GUEST_INT="vlan1003"
>>> INTERNET_INT="em3"
>>> ALL_INTERFACES="{" $LAN_INT $GUEST_INT $DMZ1_INT $DMZ2_INT $INTERNET_INT "}"
>>> pass out on $ALL_INTERFACES inet proto {tcp gre esp udp icmp ipv6} all keep 
>>> state
>>> pass out on $ALL_INTERFACES inet6  proto {tcp gre esp udp icmp6} all keep 
>>> state
>>> pass out on $IPV6_TUNNEL_INT inet6 all keep state
>>> pass in log quick on $INTERNET_INT inet proto tcp  from any  to 
>>> $DMZ1_DAEDALUS port  { 80 443 } label "webstats:$dstport" flags S/SAFR keep 
>>> state (max-src-nodes 90, max-src-states 150, max-src-conn 150, 
>>> max-src-conn-rate 250/30,  overload  flush global)
>>> # Log that after upgrade shows problems in the logs related to this 
>>> directly after the upgrade
>>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.6|grep block|grep 
>>> 155.4|grep out |grep ': R'
>>> tcpdump: WARNING: snaplen raised from 116 to 160
>>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.5|grep block|grep 
>>> 155.4|grep out |grep ': R'
>>> tcpdump: WARNING: snaplen raised from 116 to 160
>>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.4|grep block|grep 
>>> 155.4|grep out |grep ': R'
>>> tcpdump: WARNING: snaplen raised from 116 to 160
>>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.3|grep block|grep 
>>> 155.4|grep out |grep ': R'
>>> tcpdump: WARNING: snaplen raised from 116 to 160
>>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.2|grep block|grep 
>>> 155.4|grep out |grep ': R'
>>> tcpdump: WARNING: snaplen raised from 116 to 160
>>> Apr 17 05:43:36.359067 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 164.132.161.92.46942: R 0:0(0) ack 2697518940 win 0 (DF)
>>> Apr 17 05:43:37.358688 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
>>> Apr 17 05:43:39.362671 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
>>> Apr 17 06:10:24.490412 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 139.162.111.147.33930: R 0:0(0) ack 1409896759 win 0 (DF)
>>> Apr 17 06:32:45.198754 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 180.76.15.26.42835: R 0:0(0) ack 3718886589 win 0 (DF)
>>> Apr 17 06:32:46.198338 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 180.76.15.26.42835: R 0:0(0) ack 1 win 0 (DF)
>>> Apr 17 06:41:29.366359 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 51.255.65.91.42819: R 0:0(0) ack 4294673273 win 0 (DF)
>>> Apr 17 06:41:30.365396 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
>>> Apr 17 06:41:32.369399 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>>> 51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
>>> — cut the rest —
>>> What have I missed?
>>> Tnx in advance
>>> Peo
>> 
>> You might get some clues from:
>> 
>> pfctl -sr -R 63
>> 
>> Cheers
>> 
>> Fred
>> 
> 
> I know that that rule is my default block…
> 
> root@xanadu:/etc#pfctl -g -sr|grep "@63"
> @63 block drop log all
> root@xanadu:/etc#
> 
> But why is this happening after the upgrade? I have neither touched pf.conf, 
> sysctl.conf or /etc/hostname*, nor found any changes in the release notes 
> related to this. So I see no reason for the packets to get stuck on that 
> rule. But I am probably missing something obvious :)
> 
> 
> Peo
> 


This is a tricky one…. I would very much appreciate “pro” help :)

It seems a reload with pfctl -f /etc/pf.conf makes these blocked packets go 
away for two hours or so (and everything works). After that the problem comes 
back and I again see frequent intermittent "block out" entries on the internet 
interface.

root@xanadu:/etc#tcpdump -e -n -ttt -r /var/log/pflog|grep block|grep 
155.4|grep out |grep ': R'|tail -20
tcpdump: WARNING: snaplen raised from 116 to 160
Apr 20 12:34:18.052265 rule 63/(match) block out on em3: 155.4.8.28.25 > 
194.71.64.14.5910: R 0:0(0) ack 1 win 0 (DF)
Apr 20 12:34:21.249292 rule 63/(match) block out on em3: 155.4.8.28.25 > 
194.71.64.14.5910: R 0:0(0) ack 1 win 0 (DF)
Apr 20 12:34:24.449235 rule 63/(match) block out on em3: 155.4.8.28.25 > 
194.71.64.14.5910: R 0:0(0) ack 1 win 0 (DF)
Apr 20 12:34:27.649446 rule 63/(match) block out on em3: 155.4.8.28.25 > 
194.71.64.14.5910: R 0:0(0) ack 1 win 0 (DF)
Apr 20 12:34:30.849380 rule 63/(match) block out 
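
Something that might be worth watching between reloads is pf's internal
counters (a sketch; pfctl -si prints them, including state-mismatch):

root@xanadu:/etc#pfctl -si | grep -i mismatch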

Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-19 Thread Sjöholm Per-Olov

> On 20 Apr 2017, at 00:39, Fred  wrote:
> 
> On 04/19/17 23:30, Sjöholm Per-Olov wrote:
>> Anyone with a clue would be _very_ much appreciated….
>> I upgraded from 6.0 to 6.1 two days ago and **did not change anything 
>> network-related** at all. After that, clients have random problems reaching 
>> my DMZ web server (CentOS + nginx). I have checked the release notes, but 
>> could not see any clue there. See logs below
>> # Relevant rules from PF
>> LAN_INT="vlan2"
>> DMZ1_INT="vlan3"
>> DMZ2_INT="vlan4"
>> GUEST_INT="vlan1003"
>> INTERNET_INT="em3"
>> ALL_INTERFACES="{" $LAN_INT $GUEST_INT $DMZ1_INT $DMZ2_INT $INTERNET_INT "}"
>> pass out on $ALL_INTERFACES inet proto {tcp gre esp udp icmp ipv6} all keep 
>> state
>> pass out on $ALL_INTERFACES inet6  proto {tcp gre esp udp icmp6} all keep 
>> state
>> pass out on $IPV6_TUNNEL_INT inet6 all keep state
>> pass in log quick on $INTERNET_INT inet proto tcp  from any  to 
>> $DMZ1_DAEDALUS port  { 80 443 } label "webstats:$dstport" flags S/SAFR keep 
>> state (max-src-nodes 90, max-src-states 150, max-src-conn 150, 
>> max-src-conn-rate 250/30,  overload  flush global)
>> # Log that after upgrade shows problems in the logs related to this directly 
>> after the upgrade
>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.6|grep block|grep 
>> 155.4|grep out |grep ': R'
>> tcpdump: WARNING: snaplen raised from 116 to 160
>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.5|grep block|grep 
>> 155.4|grep out |grep ': R'
>> tcpdump: WARNING: snaplen raised from 116 to 160
>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.4|grep block|grep 
>> 155.4|grep out |grep ': R'
>> tcpdump: WARNING: snaplen raised from 116 to 160
>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.3|grep block|grep 
>> 155.4|grep out |grep ': R'
>> tcpdump: WARNING: snaplen raised from 116 to 160
>> root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.2|grep block|grep 
>> 155.4|grep out |grep ': R'
>> tcpdump: WARNING: snaplen raised from 116 to 160
>> Apr 17 05:43:36.359067 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 164.132.161.92.46942: R 0:0(0) ack 2697518940 win 0 (DF)
>> Apr 17 05:43:37.358688 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
>> Apr 17 05:43:39.362671 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
>> Apr 17 06:10:24.490412 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 139.162.111.147.33930: R 0:0(0) ack 1409896759 win 0 (DF)
>> Apr 17 06:32:45.198754 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 180.76.15.26.42835: R 0:0(0) ack 3718886589 win 0 (DF)
>> Apr 17 06:32:46.198338 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 180.76.15.26.42835: R 0:0(0) ack 1 win 0 (DF)
>> Apr 17 06:41:29.366359 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 51.255.65.91.42819: R 0:0(0) ack 4294673273 win 0 (DF)
>> Apr 17 06:41:30.365396 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
>> Apr 17 06:41:32.369399 rule 63/(match) block out on em3: 155.4.8.28.80 > 
>> 51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
>> — cut the rest —
>> What have I missed?
>> Tnx in advance
>> Peo
> 
> You might get some clues from:
> 
> pfctl -sr -R 63
> 
> Cheers
> 
> Fred
> 

I know that that rule is my default block…

root@xanadu:/etc#pfctl -g -sr|grep "@63"
@63 block drop log all
root@xanadu:/etc#

But why is this happening after the upgrade? I have neither touched pf.conf, 
sysctl.conf or /etc/hostname*, nor found any changes in the release notes 
related to this. So I see no reason for the packets to get stuck on that rule. 
But I am probably missing something obvious :)


Peo




Re: Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-19 Thread Fred

On 04/19/17 23:30, Sjöholm Per-Olov wrote:

Anyone with a clue would be _very_ much appreciated….


I upgraded from 6.0 to 6.1 two days ago and **did not change anything 
network-related** at all. After that, clients have random problems reaching my 
DMZ web server (CentOS + nginx). I have checked the release notes, but could 
not see any clue there. See logs below

# Relevant rules from PF
LAN_INT="vlan2"
DMZ1_INT="vlan3"
DMZ2_INT="vlan4"
GUEST_INT="vlan1003"
INTERNET_INT="em3"
ALL_INTERFACES="{" $LAN_INT $GUEST_INT $DMZ1_INT $DMZ2_INT $INTERNET_INT "}"
pass out on $ALL_INTERFACES inet proto {tcp gre esp udp icmp ipv6} all keep 
state
pass out on $ALL_INTERFACES inet6  proto {tcp gre esp udp icmp6} all keep state
pass out on $IPV6_TUNNEL_INT inet6 all keep state
pass in log quick on $INTERNET_INT inet proto tcp  from any  to $DMZ1_DAEDALUS port  { 80 443 } 
label "webstats:$dstport" flags S/SAFR keep state (max-src-nodes 90, max-src-states 
150, max-src-conn 150, max-src-conn-rate 250/30,  overload  flush global)



# Log that after upgrade shows problems in the logs related to this directly 
after the upgrade

root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.6|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.5|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.4|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.3|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.2|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
Apr 17 05:43:36.359067 rule 63/(match) block out on em3: 155.4.8.28.80 > 
164.132.161.92.46942: R 0:0(0) ack 2697518940 win 0 (DF)
Apr 17 05:43:37.358688 rule 63/(match) block out on em3: 155.4.8.28.80 > 
164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
Apr 17 05:43:39.362671 rule 63/(match) block out on em3: 155.4.8.28.80 > 
164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
Apr 17 06:10:24.490412 rule 63/(match) block out on em3: 155.4.8.28.80 > 
139.162.111.147.33930: R 0:0(0) ack 1409896759 win 0 (DF)
Apr 17 06:32:45.198754 rule 63/(match) block out on em3: 155.4.8.28.80 > 
180.76.15.26.42835: R 0:0(0) ack 3718886589 win 0 (DF)
Apr 17 06:32:46.198338 rule 63/(match) block out on em3: 155.4.8.28.80 > 
180.76.15.26.42835: R 0:0(0) ack 1 win 0 (DF)
Apr 17 06:41:29.366359 rule 63/(match) block out on em3: 155.4.8.28.80 > 
51.255.65.91.42819: R 0:0(0) ack 4294673273 win 0 (DF)
Apr 17 06:41:30.365396 rule 63/(match) block out on em3: 155.4.8.28.80 > 
51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
Apr 17 06:41:32.369399 rule 63/(match) block out on em3: 155.4.8.28.80 > 
51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
— cut the rest —


What have I missed?

Tnx in advance
Peo





You might get some clues from:

pfctl -sr -R 63

Cheers

Fred



Strange PF behaviour after 6.0 -> 6.1 upgrade

2017-04-19 Thread Sjöholm Per-Olov
Anyone with a clue would be _very_ much appreciated….


I upgraded from 6.0 to 6.1 two days ago and **did not change anything 
network-related** at all. After that, clients have random problems reaching my 
DMZ web server (CentOS + nginx). I have checked the release notes, but could 
not see any clue there. See logs below

# Relevant rules from PF
LAN_INT="vlan2"
DMZ1_INT="vlan3"
DMZ2_INT="vlan4"
GUEST_INT="vlan1003"
INTERNET_INT="em3"
ALL_INTERFACES="{" $LAN_INT $GUEST_INT $DMZ1_INT $DMZ2_INT $INTERNET_INT "}"
pass out on $ALL_INTERFACES inet proto {tcp gre esp udp icmp ipv6} all keep 
state
pass out on $ALL_INTERFACES inet6  proto {tcp gre esp udp icmp6} all keep state
pass out on $IPV6_TUNNEL_INT inet6 all keep state
pass in log quick on $INTERNET_INT inet proto tcp  from any  to $DMZ1_DAEDALUS 
port  { 80 443 } label "webstats:$dstport" flags S/SAFR keep state 
(max-src-nodes 90, max-src-states 150, max-src-conn 150, max-src-conn-rate 
250/30,  overload  flush global)
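
Since that pass rule uses "overload ... flush global", it may also be worth
checking whether client addresses have ended up in the overload table (a
sketch; "overload_table" is a placeholder for the real table name):

root@xanadu:/etc#pfctl -t overload_table -T show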



# Log that after upgrade shows problems in the logs related to this directly 
after the upgrade

root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.6|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.5|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.4|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.3|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
root@xanadu:/var/log#tcpdump -e -n -ttt -r /var/log/pflog.2|grep block|grep 
155.4|grep out |grep ': R'
tcpdump: WARNING: snaplen raised from 116 to 160
Apr 17 05:43:36.359067 rule 63/(match) block out on em3: 155.4.8.28.80 > 
164.132.161.92.46942: R 0:0(0) ack 2697518940 win 0 (DF)
Apr 17 05:43:37.358688 rule 63/(match) block out on em3: 155.4.8.28.80 > 
164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
Apr 17 05:43:39.362671 rule 63/(match) block out on em3: 155.4.8.28.80 > 
164.132.161.92.46942: R 0:0(0) ack 1 win 0 (DF)
Apr 17 06:10:24.490412 rule 63/(match) block out on em3: 155.4.8.28.80 > 
139.162.111.147.33930: R 0:0(0) ack 1409896759 win 0 (DF)
Apr 17 06:32:45.198754 rule 63/(match) block out on em3: 155.4.8.28.80 > 
180.76.15.26.42835: R 0:0(0) ack 3718886589 win 0 (DF)
Apr 17 06:32:46.198338 rule 63/(match) block out on em3: 155.4.8.28.80 > 
180.76.15.26.42835: R 0:0(0) ack 1 win 0 (DF)
Apr 17 06:41:29.366359 rule 63/(match) block out on em3: 155.4.8.28.80 > 
51.255.65.91.42819: R 0:0(0) ack 4294673273 win 0 (DF)
Apr 17 06:41:30.365396 rule 63/(match) block out on em3: 155.4.8.28.80 > 
51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
Apr 17 06:41:32.369399 rule 63/(match) block out on em3: 155.4.8.28.80 > 
51.255.65.91.42819: R 0:0(0) ack 1 win 0 (DF)
— cut the rest —


What have I missed?

Tnx in advance
Peo

