Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-11-14 Thread Stuart Henderson
On 2019-11-13, radek  wrote:
> After upgrading my two endpoints to i386/6.6 it started to work flawlessly.
> There wasn't even one iked restart within the first two days of running.
> Thank you Patrick, Stuart and everyone involved in making IKED work as 
> expected. I really appreciate it.

Thanks for the update. The main person to thank for the improvements in iked
between 6.5 and 6.6 is tobhe@; he has done a lot of work on it in that period.




Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-11-13 Thread radek
After upgrading my two endpoints to i386/6.6 it started to work flawlessly.
There wasn't even one iked restart within the first two days of running.
Thank you Patrick, Stuart and everyone involved in making IKED work as 
expected. I really appreciate it.

# vmstat -m | head -n 17 
Memory statistics by bucket size
Size   In Use   Free   Requests  HighWater  Couldfree
  16  528752 1253321280  0
  32 1470 66 105757 640  5
  64  6001682554483 320  0
 128  124 36  42106 160  0
 256  446 18  51276  80  0
 512  108  4 166303  40  0
1024   46  6  48352  20  0
2048   13  3 74  10  0
4096   16  2  84574   5  0
8192   21  1 44   5  0
   16384    6  0    505   5  0
   32768    6  0     11   5  0
   65536    2  0  12333   5  0
  524288    1  0      1   5  0

# vmstat -w 4
 procsmemory   pagedisk traps  cpu
 r   s   avm fre  flt  re  pi  po  fr  sr wd0  int   sys   cs us sy id
 2  53   29M313M   54   0   0   0   0   0   0  27560  109  0  2 98
 0  57   30M312M  140   0   0   0   0   0   0  378   131  470  0  4 96
 0  55   29M313M   30   0   0   0   0   0   0  38343  547  0  3 97
 0  55   29M313M2   0   0   0   0   0   0  38017  529  0  3 97
 0  57   30M312M  140   0   0   0   0   0   0  374   124  512  0  5 94


On Sun, 22 Sep 2019 17:11:20 +0200
Radek  wrote:

> Thank you Stuart.
> I can't touch/upgrade these routers, but I have a bunch of Soekris/net5501 
> that I can use for testing -current. Unfortunately, they are i386. I hope the 
> arch doesn't matter in this case.
> I'll try -current asap.
> 
> Am I the only one @misc who's facing this kind of iked issue? Nobody else 
> reports having the same issue here...
> 
> On Fri, 20 Sep 2019 16:55:02 - (UTC)
> Stuart Henderson  wrote:
> 
> > On 2019-09-20, radek  wrote:
> > > Hello Patrick,
> > > I am sorry for the late reply.
> > >
> > > I have replaced my ALIX/Soekris production routers with APU1C and with PC 
> > > box (cpu0: Intel(R) Pentium(R) D CPU 2.80GHz, 2810.34 MHz, 0f-06-04). 
> > > Both are running 6.5/amd64 and both are fully syspatched.
> > 
> > Please try a -current snapshot for starters, quite a number of iked bugs
> > have been fixed since then including some which would cause connectivity
> > problems during rekeying. (If you *really* can't update the whole thing,
> > it should work to build -current iked on a 6.5 system, but no guarantees).
> > 
> > 
> 
> 
> -- 
> Radek
> 


-- 
Radek



Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-09-22 Thread Radek
Thank you Stuart.
I can't touch/upgrade these routers, but I have a bunch of Soekris/net5501 that 
I can use for testing -current. Unfortunately, they are i386. I hope the arch 
doesn't matter in this case.
I'll try -current asap.

Am I the only one @misc who's facing this kind of iked issue? Nobody else 
reports having the same issue here...

On Fri, 20 Sep 2019 16:55:02 - (UTC)
Stuart Henderson  wrote:

> On 2019-09-20, radek  wrote:
> > Hello Patrick,
> > I am sorry for the late reply.
> >
> > I have replaced my ALIX/Soekris production routers with APU1C and with PC 
> > box (cpu0: Intel(R) Pentium(R) D CPU 2.80GHz, 2810.34 MHz, 0f-06-04). 
> > Both are running 6.5/amd64 and both are fully syspatched.
> 
> Please try a -current snapshot for starters, quite a number of iked bugs
> have been fixed since then including some which would cause connectivity
> problems during rekeying. (If you *really* can't update the whole thing,
> it should work to build -current iked on a 6.5 system, but no guarantees).
> 
> 


-- 
Radek



Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-09-20 Thread Stuart Henderson
On 2019-09-20, radek  wrote:
> Hello Patrick,
> I am sorry for the late reply.
>
> I have replaced my ALIX/Soekris production routers with APU1C and with PC box 
> (cpu0: Intel(R) Pentium(R) D CPU 2.80GHz, 2810.34 MHz, 0f-06-04). 
> Both are running 6.5/amd64 and both are fully syspatched.

Please try a -current snapshot for starters, quite a number of iked bugs
have been fixed since then including some which would cause connectivity
problems during rekeying. (If you *really* can't update the whole thing,
it should work to build -current iked on a 6.5 system, but no guarantees).




Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-09-20 Thread radek
Hello Patrick,
I am sorry for the late reply.

I have replaced my ALIX/Soekris production routers with APU1C and with PC box 
(cpu0: Intel(R) Pentium(R) D CPU 2.80GHz, 2810.34 MHz, 0f-06-04). 
Both are running 6.5/amd64 and both are fully syspatched.

I also added "inet proto { tcp, udp, icmp }" to my match rule on both sides:
match out log on $ext_if inet proto { tcp, udp, icmp } from { $lan_rac_local, 
$backup_local } nat-to $ext_if set prio (3, 7)

It did not make any difference; the VPN still needs to be restarted at a similar frequency.
Date: Thu, 19 Sep 2019 23:15:39 +0200 (CEST)
Date: Fri, 20 Sep 2019 01:49:59 +0200 (CEST)
Date: Fri, 20 Sep 2019 03:37:15 +0200 (CEST)
Date: Fri, 20 Sep 2019 06:12:31 +0200 (CEST)
Date: Fri, 20 Sep 2019 08:46:45 +0200 (CEST)
Date: Fri, 20 Sep 2019 11:25:08 +0200 (CEST)
Date: Fri, 20 Sep 2019 13:59:06 +0200 (CEST)


> In my opinion upstream DNS & UDP issues can cause interruptions with some ISPs.
But at the time of the VPN issue both sides can ping each other on their 
public IPs. Only the VPN tunnel does not work as expected, until iked is restarted.

> It appears that you have ICMP allow rules, which is a good idea in my opinion.
> Have you ever done any logging of these packets? Are there any legitimate 
> requests from your ISP?
No, there are no ICMP requests from my ISP.
tcpdump shows only some pings from the world, mostly from Amazon's IPs.
The following was logged just before VPN traffic stalls:
13:38:09.194783 13.210.171.31 > A.A.A.A: icmp: echo request (DF) [tos 0x40]
13:38:09.194845 A.A.A.A > 13.210.171.31: icmp: echo reply [tos 0x40]
13:39:51.130602 18.138.136.9 > A.A.A.A: icmp: echo request (DF)
13:39:51.130665 A.A.A.A > 18.138.136.9: icmp: echo reply
13:42:42.825866 3.105.202.31 > A.A.A.A: icmp: echo request (DF) [tos 0x40]
13:42:42.825938 A.A.A.A > 3.105.202.31: icmp: echo reply [tos 0x40]
13:44:17.474364 18.136.167.37 > A.A.A.A: icmp: echo request (DF)
13:44:17.474434 A.A.A.A > 18.136.167.37: icmp: echo reply
13:47:55.225820 13.210.171.31 > A.A.A.A: icmp: echo request (DF) [tos 0x40]
13:47:55.225883 A.A.A.A > 13.210.171.31: icmp: echo reply [tos 0x40]
13:49:30.624877 18.138.136.9 > A.A.A.A: icmp: echo request (DF)
13:49:30.624945 A.A.A.A > 18.138.136.9: icmp: echo reply
13:53:45.675943 3.105.202.31 > A.A.A.A: icmp: echo request (DF) [tos 0x40]
13:53:45.676008 A.A.A.A > 3.105.202.31: icmp: echo reply [tos 0x40]
13:55:02.593285 18.136.167.37 > A.A.A.A: icmp: echo request (DF)
13:55:02.593347 A.A.A.A > 18.136.167.37: icmp: echo reply
13:55:31.703602 18.228.131.118 > A.A.A.A: icmp: echo request (DF)
13:55:31.703671 A.A.A.A > 18.228.131.118: icmp: echo reply

On the other side of the VPN the ICMP logs are similar.

> Do you have an alternate DNS server you can test against? Are you using your 
> ISP’s DNS?
On one side I can use any DNS I want; I was using Google's 8.8.8.8 and the 
ISP's DNS. If I change to 1.1.1.1 and 1.0.0.1 the problem still occurs.
On the other side the ISP redirects all DNS requests to its own server.

Any idea?

On Sun, 25 Aug 2019 20:28:27 -0500
Patrick Dohman  wrote:

> Radek
> In my opinion upstream DNS & UDP issues can cause interruptions with some ISPs.
> I also believe that defining specific protocols in your nat rule can decrease 
> interruptions.
> You might consider the following modification to your nat rule to 
> specifically allow UDP & ICMP.
> 
> match out log on $ext_if inet proto { tcp, udp, icmp } from { $lan_rac_local, 
> $backup_local } nat-to $ext_if set prio (3, 7)
> 
> It appears that you have ICMP allow rules, which is a good idea in my opinion.
> Have you ever done any logging of these packets? Are there any legitimate 
> requests from your ISP?
> Do you have an alternate DNS server you can test against? Are you using your 
> ISP’s DNS?
> Perhaps the new OpenBSD unwind package is worth investigating ;)
> Regards
> Patrick
> 
> > On Aug 25, 2019, at 1:31 PM, Radek  wrote:
> > 
> > Hello Patrick, 
> > 
> >> In my opinion your net5501’s system calls per interval are relatively high.
> >> The (traps sys) column on my firewall hovers between 40 & 50 quite 
> >> consistently.
> >> My understanding is that system calls are things like program calls & 
> >> library access.
> > Is there any way to decrease these values?
> > 
> >> Many commercial routers run a customized kernel & rely on a stripped-down 
> >> user-land.
> >> The kernel is also recompiled to run TCP/IPv4 only & can no longer execute 
> >> things like storage or virtualization.
> >> The OpenBSD OS includes all the user-land tools such as ping & top in 
> >> addition to a standardized precompiled kernel.
> > Ok, I get it.
> > 
> > 
> > On Fri, 23 Aug 2019 21:12:35 -0500
> > Patrick Dohman  wrote:
> > 
> >> In my opinion your net5501’s system calls per interval are relatively high.
> >> The (traps sys) column on my firewall hovers between 40 & 50 quite 
> >> consistently.
> >> My understanding is that system calls are things like program calls & 
> >> library access.
> >> 
> >> In addition your 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-25 Thread Patrick Dohman
Radek
In my opinion upstream DNS & UDP issues can cause interruptions with some ISPs.
I also believe that defining specific protocols in your nat rule can decrease 
interruptions.
You might consider the following modification to your nat rule to 
specifically allow UDP & ICMP.

match out log on $ext_if inet proto { tcp, udp, icmp } from { $lan_rac_local, 
$backup_local } nat-to $ext_if set prio (3, 7)
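[Editor's note: beyond the nat rule, a site-to-site iked setup also needs pf to pass the key exchange and the ESP payload between the two gateways. A minimal sketch, reusing the `$ext_if` macro from the rule above; the `$peer` macro for the remote gateway is an assumption, not from this thread:

```pf
# IKE negotiation and NAT-T between the two gateways ($peer is hypothetical)
pass in  on $ext_if proto udp from $peer     to ($ext_if) port { 500, 4500 }
pass out on $ext_if proto udp from ($ext_if) to $peer     port { 500, 4500 }
# the encrypted ESP payload itself
pass in  on $ext_if proto esp from $peer     to ($ext_if)
pass out on $ext_if proto esp from ($ext_if) to $peer
# decapsulated tunnel traffic appears on enc0
pass on enc0
```
]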

It appears that you have ICMP allow rules, which is a good idea in my opinion.
Have you ever done any logging of these packets? Are there any legitimate 
requests from your ISP?
Do you have an alternate DNS server you can test against? Are you using your 
ISP’s DNS?
Perhaps the new OpenBSD unwind package is worth investigating ;)
Regards
Patrick

> On Aug 25, 2019, at 1:31 PM, Radek  wrote:
> 
> Hello Patrick, 
> 
>> In my opinion your net5501’s system calls per interval are relatively high.
>> The (traps sys) column on my firewall hovers between 40 & 50 quite 
>> consistently.
>> My understanding is that system calls are things like program calls & 
>> library access.
> Is there any way to decrease these values?
> 
>> Many commercial routers run a customized kernel & rely on a stripped-down 
>> user-land.
>> The kernel is also recompiled to run TCP/IPv4 only & can no longer execute 
>> things like storage or virtualization.
>> The OpenBSD OS includes all the user-land tools such as ping & top in 
>> addition to a standardized precompiled kernel.
> Ok, I get it.
> 
> 
> On Fri, 23 Aug 2019 21:12:35 -0500
> Patrick Dohman  wrote:
> 
>> In my opinion your net5501’s system calls per interval are relatively high.
>> The (traps sys) column on my firewall hovers between 40 & 50 quite 
>> consistently.
>> My understanding is that system calls are things like program calls & 
>> library access.
>> 
> >> In addition your net5501’s memory request counts seem heavy.
> >> You have fifty-eight million requests in the 1024-byte bucket, while my 
> >> firewall shows at most one hundred thousand in the 128-byte bucket.
>> 
> >> Many commercial routers run a customized kernel & rely on a stripped-down 
> >> user-land.
> >> The kernel is also recompiled to run TCP/IPv4 only & can no longer execute 
> >> things like storage or virtualization.
> >> The OpenBSD OS includes all the user-land tools such as ping & top in 
> >> addition to a standardized precompiled kernel.
>> Regards
>> Patrick
>> .
>>> 
>>> 
>>> On Thu, 22 Aug 2019 19:12:55 -0500
>>> Patrick Dohman  wrote:
>>> 
 Radek
 
 I’ve found that fast networking is actually CPU & memory intensive. 
 Pentium 4 and Xeon's are increasingly a necessity for stable firewalls in 
 my opinion.
 Keep in mind OpenBSD is a monolithic kernel & isn’t a one to one ratio 
 with a commercial router.
 
 What are your context switches & interrupts doing while the VPN is up & 
 traffic is flowing?
 
 vmstat -w 4
 
 What is your memory high water mark during a peak traffic?
 
 vmstat -m
 
 Regards
 Patrick
 
> On Aug 21, 2019, at 12:34 AM, radek  wrote:
> 
> Hello Patrick,
> I am sorry for the late reply.
> 
>> Do you consider memory an issue?
> No, I do not. I have a bunch of old Soekris/net5501-70 and ALIX2d2/2d3, 
> that I use for VPN testing.
> Current testing set (6.5/i386) is net5501-70 <-> ALIX2d3
> Production set (6.3/i386) is net5501-70 <-> ALIX2d2
> Also have tried net5501-70 <-> net5501-70 - the same VPN problem occurs
> It is unlikely that every box has any hardware issue.
> 
>> Unix load average can occasionally be deceiving.
> I did not know.
> 
>  net5501-70 
> $top -d1 | head -n 4
> load averages:  0.05,  0.01,  0.00RAC-fw65-test.PRAC 10:58:14
> 38 processes: 1 running, 35 idle, 1 dead, 1 on processor  up 3 days, 18:02
> CPU states:  0.5% user,  0.0% nice,  0.4% sys,  0.0% spin,  0.2% intr, 
> 98.8% idle
> Memory: Real: 18M/267M act/tot Free: 222M Cache: 97M Swap: 0K/256M
> 
>  ALIX2d3 
> $top -d1 | head -n 4
> load averages:  0.00,  0.00,  0.00mon65.home 07:30:05
> 37 processes: 1 running, 35 idle, 1 on processor  up 13:46
> CPU states:  0.3% user,  0.0% nice,  1.1% sys,  0.0% spin,  0.4% intr, 
> 98.3% idle
> Memory: Real: 125M/223M act/tot Free: 14M Cache: 47M Swap: 73M/256M
> 
> 
> 
>> What is the speed of your memory?
>> What make of Ethernets are you running?
> Dmesgs below
> 
>  net5501-70 
> OpenBSD 6.5 (GENERIC) #2: Tue Jul 23 23:08:46 CEST 2019
>  r...@syspatch-65-i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
> real mem  = 536363008 (511MB)
> avail mem = 511311872 (487MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
> pcibios0 at bios0: rev 2.0 @ 0xf/0x1
> pcibios0: 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-25 Thread Radek
Hello Patrick, 

> In my opinion your net5501’s system calls per interval are relatively high.
> The (traps sys) column on my firewall hovers between 40 & 50 quite 
> consistently.
> My understanding is that system calls are things like program calls & library 
> access.
Is there any way to decrease these values?
 
> Many commercial routers run a customized kernel & rely on a stripped-down 
> user-land.
> The kernel is also recompiled to run TCP/IPv4 only & can no longer execute 
> things like storage or virtualization.
> The OpenBSD OS includes all the user-land tools such as ping & top in 
> addition to a standardized precompiled kernel.
Ok, I get it.


On Fri, 23 Aug 2019 21:12:35 -0500
Patrick Dohman  wrote:

> In my opinion your net5501’s system calls per interval are relatively high.
> The (traps sys) column on my firewall hovers between 40 & 50 quite 
> consistently.
> My understanding is that system calls are things like program calls & library 
> access.
> 
> In addition your net5501’s memory request counts seem heavy.
> You have fifty-eight million requests in the 1024-byte bucket, while my 
> firewall shows at most one hundred thousand in the 128-byte bucket.
> 
> Many commercial routers run a customized kernel & rely on a stripped-down 
> user-land.
> The kernel is also recompiled to run TCP/IPv4 only & can no longer execute 
> things like storage or virtualization.
> The OpenBSD OS includes all the user-land tools such as ping & top in 
> addition to a standardized precompiled kernel.
> Regards
> Patrick
> .
> > 
> > 
> > On Thu, 22 Aug 2019 19:12:55 -0500
> > Patrick Dohman  wrote:
> > 
> >> Radek
> >> 
> >> I’ve found that fast networking is actually CPU & memory intensive. 
> >> Pentium 4 and Xeon's are increasingly a necessity for stable firewalls in 
> >> my opinion.
> >> Keep in mind OpenBSD is a monolithic kernel & isn’t a one to one ratio 
> >> with a commercial router.
> >> 
> >> What are your context switches & interrupts doing while the VPN is up & 
> >> traffic is flowing?
> >> 
> >> vmstat -w 4
> >> 
> >> What is your memory high water mark during a peak traffic?
> >> 
> >> vmstat -m
> >> 
> >> Regards
> >> Patrick
> >> 
> >>> On Aug 21, 2019, at 12:34 AM, radek  wrote:
> >>> 
> >>> Hello Patrick,
> >>> I am sorry for the late reply.
> >>> 
>  Do you consider memory an issue?
> >>> No, I do not. I have a bunch of old Soekris/net5501-70 and ALIX2d2/2d3, 
> >>> that I use for VPN testing.
> >>> Current testing set (6.5/i386) is net5501-70 <-> ALIX2d3
> >>> Production set (6.3/i386) is net5501-70 <-> ALIX2d2
> >>> Also have tried net5501-70 <-> net5501-70 - the same VPN problem occurs
> >>> It is unlikely that every box has any hardware issue.
> >>> 
>  Unix load average can occasionally be deceiving.
> >>> I did not know.
> >>> 
> >>>  net5501-70 
> >>> $top -d1 | head -n 4
> >>> load averages:  0.05,  0.01,  0.00RAC-fw65-test.PRAC 10:58:14
> >>> 38 processes: 1 running, 35 idle, 1 dead, 1 on processor  up 3 days, 18:02
> >>> CPU states:  0.5% user,  0.0% nice,  0.4% sys,  0.0% spin,  0.2% intr, 
> >>> 98.8% idle
> >>> Memory: Real: 18M/267M act/tot Free: 222M Cache: 97M Swap: 0K/256M
> >>> 
> >>>  ALIX2d3 
> >>> $top -d1 | head -n 4
> >>> load averages:  0.00,  0.00,  0.00mon65.home 07:30:05
> >>> 37 processes: 1 running, 35 idle, 1 on processor  up 13:46
> >>> CPU states:  0.3% user,  0.0% nice,  1.1% sys,  0.0% spin,  0.4% intr, 
> >>> 98.3% idle
> >>> Memory: Real: 125M/223M act/tot Free: 14M Cache: 47M Swap: 73M/256M
> >>> 
> >>> 
> >>> 
>  What is the speed of your memory?
>  What make of Ethernets are you running?
> >>> Dmesgs below
> >>> 
> >>>  net5501-70 
> >>> OpenBSD 6.5 (GENERIC) #2: Tue Jul 23 23:08:46 CEST 2019
> >>>   r...@syspatch-65-i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
> >>> real mem  = 536363008 (511MB)
> >>> avail mem = 511311872 (487MB)
> >>> mpath0 at root
> >>> scsibus0 at mpath0: 256 targets
> >>> mainbus0 at root
> >>> bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
> >>> pcibios0 at bios0: rev 2.0 @ 0xf/0x1
> >>> pcibios0: pcibios_get_intr_routing - function not supported
> >>> pcibios0: PCI IRQ Routing information unavailable.
> >>> pcibios0: PCI bus #0 is the last bus
> >>> bios0: ROM list: 0xc8000/0xa800
> >>> cpu0 at mainbus0: (uniprocessor)
> >>> cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 
> >>> 586-class) 500 MHz, 05-0a-02
> >>> cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
> >>> mtrr: K6-family MTRR support (2 registers)
> >>> amdmsr0 at mainbus0
> >>> pci0 at mainbus0 bus 0: configuration mode 1 (bios)
> >>> 0:20:0: io address conflict 0x6100/0x100
> >>> 0:20:0: io address conflict 0x6200/0x200
> >>> pchb0 at pci0 dev 1 function 0 "AMD Geode LX" rev 0x33
> >>> glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES
> >>> vr0 at pci0 dev 6 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 11, 
> >>> 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-23 Thread Patrick Dohman
In my opinion your net5501’s system calls per interval are relatively high.
The (traps sys) column on my firewall hovers between 40 & 50 quite consistently.
My understanding is that system calls are things like program calls & library 
access.

In addition your net5501’s memory request counts seem heavy.
You have fifty-eight million requests in the 1024-byte bucket, while my 
firewall shows at most one hundred thousand in the 128-byte bucket.
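[Editor's note: the Requests column in vmstat -m is cumulative since boot, so a large absolute count mostly reflects uptime. To compare two boxes it helps to divide by uptime; a small sketch, where the sample rows and the 3600-second uptime are made up for illustration, not figures from this thread:

```shell
# Turn cumulative vmstat -m request counts into per-second rates.
# Sample data and the uptime value are assumptions.
printf '%s\n' \
    'Size   In Use   Free   Requests' \
    '1024       46      6      48352' \
    '2048       13      3         74' |
awk -v uptime=3600 'NR > 1 { printf "%s bucket: %.2f req/s\n", $1, $NF / uptime }'
```
]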

Many commercial routers run a customized kernel & rely on a stripped-down 
user-land.
The kernel is also recompiled to run TCP/IPv4 only & can no longer execute 
things like storage or virtualization.
The OpenBSD OS includes all the user-land tools such as ping & top in 
addition to a standardized precompiled kernel.
Regards
Patrick
.
> 
> 
> On Thu, 22 Aug 2019 19:12:55 -0500
> Patrick Dohman  wrote:
> 
>> Radek
>> 
>> I’ve found that fast networking is actually CPU & memory intensive. 
>> Pentium 4 and Xeon's are increasingly a necessity for stable firewalls in my 
>> opinion.
>> Keep in mind OpenBSD is a monolithic kernel & isn’t a one to one ratio with 
>> a commercial router.
>> 
>> What are your context switches & interrupts doing while the VPN is up & 
>> traffic is flowing?
>> 
>> vmstat -w 4
>> 
>> What is your memory high water mark during a peak traffic?
>> 
>> vmstat -m
>> 
>> Regards
>> Patrick
>> 
>>> On Aug 21, 2019, at 12:34 AM, radek  wrote:
>>> 
>>> Hello Patrick,
>>> I am sorry for the late reply.
>>> 
 Do you consider memory an issue?
>>> No, I do not. I have a bunch of old Soekris/net5501-70 and ALIX2d2/2d3, 
>>> that I use for VPN testing.
>>> Current testing set (6.5/i386) is net5501-70 <-> ALIX2d3
>>> Production set (6.3/i386) is net5501-70 <-> ALIX2d2
>>> Also have tried net5501-70 <-> net5501-70 - the same VPN problem occurs
>>> It is unlikely that every box has any hardware issue.
>>> 
 Unix load average can occasionally be deceiving.
>>> I did not know.
>>> 
>>>  net5501-70 
>>> $top -d1 | head -n 4
>>> load averages:  0.05,  0.01,  0.00RAC-fw65-test.PRAC 10:58:14
>>> 38 processes: 1 running, 35 idle, 1 dead, 1 on processor  up 3 days, 18:02
>>> CPU states:  0.5% user,  0.0% nice,  0.4% sys,  0.0% spin,  0.2% intr, 
>>> 98.8% idle
>>> Memory: Real: 18M/267M act/tot Free: 222M Cache: 97M Swap: 0K/256M
>>> 
>>>  ALIX2d3 
>>> $top -d1 | head -n 4
>>> load averages:  0.00,  0.00,  0.00mon65.home 07:30:05
>>> 37 processes: 1 running, 35 idle, 1 on processor  up 13:46
>>> CPU states:  0.3% user,  0.0% nice,  1.1% sys,  0.0% spin,  0.4% intr, 
>>> 98.3% idle
>>> Memory: Real: 125M/223M act/tot Free: 14M Cache: 47M Swap: 73M/256M
>>> 
>>> 
>>> 
 What is the speed of your memory?
 What make of Ethernets are you running?
>>> Dmesgs below
>>> 
>>>  net5501-70 
>>> OpenBSD 6.5 (GENERIC) #2: Tue Jul 23 23:08:46 CEST 2019
>>>   r...@syspatch-65-i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
>>> real mem  = 536363008 (511MB)
>>> avail mem = 511311872 (487MB)
>>> mpath0 at root
>>> scsibus0 at mpath0: 256 targets
>>> mainbus0 at root
>>> bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
>>> pcibios0 at bios0: rev 2.0 @ 0xf/0x1
>>> pcibios0: pcibios_get_intr_routing - function not supported
>>> pcibios0: PCI IRQ Routing information unavailable.
>>> pcibios0: PCI bus #0 is the last bus
>>> bios0: ROM list: 0xc8000/0xa800
>>> cpu0 at mainbus0: (uniprocessor)
>>> cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 
>>> 500 MHz, 05-0a-02
>>> cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
>>> mtrr: K6-family MTRR support (2 registers)
>>> amdmsr0 at mainbus0
>>> pci0 at mainbus0 bus 0: configuration mode 1 (bios)
>>> 0:20:0: io address conflict 0x6100/0x100
>>> 0:20:0: io address conflict 0x6200/0x200
>>> pchb0 at pci0 dev 1 function 0 "AMD Geode LX" rev 0x33
>>> glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES
>>> vr0 at pci0 dev 6 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 11, 
>>> address 00:00:24:cb:4f:cc
>>> ukphy0 at vr0 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
>>> 0x004063, model 0x0034
>>> vr1 at pci0 dev 7 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 5, 
>>> address 00:00:24:cb:4f:cd
>>> ukphy1 at vr1 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
>>> 0x004063, model 0x0034
>>> vr2 at pci0 dev 8 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 9, 
>>> address 00:00:24:cb:4f:ce
>>> ukphy2 at vr2 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
>>> 0x004063, model 0x0034
>>> vr3 at pci0 dev 9 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 12, 
>>> address 00:00:24:cb:4f:cf
>>> ukphy3 at vr3 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
>>> 0x004063, model 0x0034
>>> glxpcib0 at pci0 dev 20 function 0 "AMD CS5536 ISA" rev 0x03: rev 3, 32-bit 
>>> 3579545Hz timer, watchdog, gpio, i2c
>>> gpio0 at glxpcib0: 32 pins
>>> iic0 at glxpcib0
>>> pciide0 at pci0 dev 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-23 Thread radek
Hello Patrick,

> I’ve found that fast networking is actually CPU & memory intensive.
In my case it is 40/4 Mbps at both ends. Not so fast.

> Pentium 4 and Xeon's are increasingly a necessity for stable firewalls in my 
> opinion.
I will run the same VPN configs on an apu1d and on a PC with a Pentium D 820 
and check whether it works more stably.

> Keep in mind OpenBSD is a monolithic kernel & isn’t a one to one ratio with a 
> commercial router.
Could you explain that in another way?

> What are your context switches & interrupts doing while the VPN is up & 
> traffic is flowing?
> 
> vmstat -w 4
>
> What is your memory high water mark during a peak traffic?
> 
> vmstat -m

My testing 6.5 setup looks like this:
net5501-70 - no LAN clients
ALIX2d3 - my home router - two laptops connected directly to ALIX
There is no significant traffic over the VPN, just 3 ping packets every 32 
seconds, generated by a monitoring script.
What is more, in the middle of the night (when the home laptops were turned 
off) my script also restarted iked.
Date: Fri, 23 Aug 2019 03:43:58 +0200 (CEST)
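[Editor's note: the restart logic of such a monitoring script might look roughly like this. This is a hypothetical sketch, not the actual script from this thread; the probe and recovery commands shown in the comments are assumptions:

```shell
# Hypothetical tunnel watchdog: run a probe command; if it fails, run a
# recovery command (e.g. restarting iked). Both commands are parameters,
# so the decision logic can be exercised with stub commands.
check_and_restart() {
    probe=$1     # e.g. "ping -q -c 3 10.0.17.254"  (address assumed)
    recover=$2   # e.g. "rcctl restart iked"
    if $probe >/dev/null 2>&1; then
        echo "tunnel up"
    else
        echo "tunnel down; running recovery"
        $recover
    fi
}

# Demonstration with stub commands in place of real ping/rcctl:
check_and_restart true :     # prints "tunnel up"
check_and_restart false :    # prints "tunnel down; running recovery"
```
]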

01. if traffic is not flowing
ALIX$ ifstat -i vr0 -i enc0 
   vr0 enc0   
 KB/s in  KB/s out   KB/s in  KB/s out
0.13  0.27  0.00  0.00
0.06  0.14  0.00  0.00
0.63  0.14  0.00  0.00
0.42  0.14  0.00  0.00

ALIX$ vmstat -w 4
 procsmemory   pagedisk traps  cpu
 r   s   avm fre  flt  re  pi  po  fr  sr wd0  int   sys   cs us sy id
 1  57  192M 20M8   0   0   0   0 117   1  25831   71  0  1 99
 0  58  192M 20M4   0   0   0   0   0   0  23024   31  0  0 100
 1  57  192M 20M2   0   0   0   0   0   0  23023   32  0  0 100
 0  58  192M 20M2   0   0   0   0   0   0  23021   31  0  0 100
 0  58  192M 20M2   0   0   0   0   0   0  23025   33  0  0 100
 0  58  192M 20M2   0   0   0   0   0   0  22919   29  0  0 100
 0  58  192M 20M2   0   0   0   0   0   0  23024   33  0  1 99

net5501$ vmstat -w 4
 procsmemory   pagedisk traps  cpu
 r   s   avm fre  flt  re  pi  po  fr  sr wd0  int   sys   cs us sy id
 1  58   19M218M   24   0   0   0   0   0   0  229   148   28  0  1 99
 0  59   19M218M4   0   0   0   0   0   0  230   156   28  0  0 100
 0  59   19M218M2   0   0   0   0   0   0  230   154   28  0  0 100
 0  59   19M218M2   0   0   0   0   0   0  229   154   25  0  0 100
 0  59   19M218M2   0   0   0   0   0   0  229   154   25  0  0 100
 0  59   19M218M  171   0   0   0   0   0   0  232   158   42  0  2 98
 0  59   19M218M2   0   0   0   0   0   0  230   154   27  0  0 100
 0  59   19M218M2   0   0   0   0   0   0  231   157   28  0  0 100
 0  59   19M218M2   0   0   0   0   0   0  229   154   26  0  0 100


02. if traffic is flowing from ALIX to net5501
ALIX$ nc -N -s 172.16.1.254 10.0.17.254 1234 < 100MB.test
net5501$ nc -l 1234 > /dev/null

ALIX$ ifstat -i vr0 -i enc0
   vr0 enc0   
 KB/s in  KB/s out   KB/s in  KB/s out
   29.59579.75 17.39549.12
   30.15580.07 17.19549.56
   29.43578.51 17.40548.09
   32.87535.13 19.61506.97
   30.23581.61 17.47551.02
   29.90581.63 17.61551.04
   30.08580.03 17.40549.53

ALIX$ vmstat -w 4
 procsmemory   pagedisk traps  cpu
 r   s   avm fre  flt  re  pi  po  fr  sr wd0  int   sys   cs us sy id
 1  58  192M 19M8   0   0   0   0 117   1  25831   71  0  1 99
 0  59  192M 19M4   0   0   0   0   0   0  573   519  950  1 23 77
 0  59  192M 19M2   0   0   0   0   0   0  573   532  953  0 22 78
 0  59  192M 19M2   0   0   0   0   0   0  574   521  955  2 19 79
 0  59  192M 19M2   0   0   0   0   0   0  574   517  951  0 25 75
 0  59  192M 19M2   0   0   0   0   0   0  571   535  956  1 22 77
 0  59  192M 19M2   0   0   0   0   0   0  576   522  960  0 22 77


net5501$ vmstat -w 4
 procsmemory   pagedisk traps  cpu
 r   s   avm fre  flt  re  pi  po  fr  sr wd0  int   sys   cs us sy id
 1  59   20M218M   24   0   0   0   0   0   0  229   147   28  0  1 99
 0  60   20M218M4   0   0   0   0   0   0  651  1433 1575  1 28 72
 0  62   21M216M  143   0   0   0   0   0   0  647  1404 1567  0 28 72
 0  60   20M218M   31   0   0   0   0   0   0  648  1476 1593  0 25 75
 2  58   20M218M2   0   0   0   0   0   0  647  1429 1571  0 25 75
 0  60   20M218M2   0   0   0   0   0   0  651  1492 1602  0 25 75
 0  60   20M218M2   0   0   0   0   0   0  648  1442 1579  0 25 74
 0  60   20M218M2   0   0   0   0   0   0  646  1312 1587  1 27 73


ALIX$ vmstat -m
Memory statistics by bucket size
Size   In Use   Free   Requests  HighWater  Couldfree
   

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-22 Thread Patrick Dohman
Radek

I’ve found that fast networking is actually CPU & memory intensive. 
Pentium 4s and Xeons are increasingly a necessity for stable firewalls, in my 
opinion.
Keep in mind OpenBSD is a monolithic kernel & isn’t a one to one ratio with a 
commercial router.

What are your context switches & interrupts doing while the VPN is up & traffic 
is flowing?

vmstat -w 4

What is your memory high-water mark during peak traffic?

vmstat -m

Regards
Patrick

> On Aug 21, 2019, at 12:34 AM, radek  wrote:
> 
> Hello Patrick,
> I am sorry for the late reply.
> 
>> Do you consider memory an issue?
> No, I do not. I have a bunch of old Soekris/net5501-70 and ALIX2d2/2d3, that 
> I use for VPN testing.
> Current testing set (6.5/i386) is net5501-70 <-> ALIX2d3
> Production set (6.3/i386) is net5501-70 <-> ALIX2d2
> Also have tried net5501-70 <-> net5501-70 - the same VPN problem occurs
> It is unlikely that every box has any hardware issue.
> 
>> Unix load average can occasionally be deceiving.
> I did not know.
> 
>  net5501-70 
> $top -d1 | head -n 4
> load averages:  0.05,  0.01,  0.00RAC-fw65-test.PRAC 10:58:14
> 38 processes: 1 running, 35 idle, 1 dead, 1 on processor  up 3 days, 18:02
> CPU states:  0.5% user,  0.0% nice,  0.4% sys,  0.0% spin,  0.2% intr, 98.8% 
> idle
> Memory: Real: 18M/267M act/tot Free: 222M Cache: 97M Swap: 0K/256M
> 
>  ALIX2d3 
> $top -d1 | head -n 4
> load averages:  0.00,  0.00,  0.00mon65.home 07:30:05
> 37 processes: 1 running, 35 idle, 1 on processor  up 13:46
> CPU states:  0.3% user,  0.0% nice,  1.1% sys,  0.0% spin,  0.4% intr, 98.3% 
> idle
> Memory: Real: 125M/223M act/tot Free: 14M Cache: 47M Swap: 73M/256M
> 
> 
> 
>> What is the speed of your memory?
>> What make of Ethernets are you running?
> Dmesgs below
> 
>  net5501-70 
> OpenBSD 6.5 (GENERIC) #2: Tue Jul 23 23:08:46 CEST 2019
>r...@syspatch-65-i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
> real mem  = 536363008 (511MB)
> avail mem = 511311872 (487MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
> pcibios0 at bios0: rev 2.0 @ 0xf/0x1
> pcibios0: pcibios_get_intr_routing - function not supported
> pcibios0: PCI IRQ Routing information unavailable.
> pcibios0: PCI bus #0 is the last bus
> bios0: ROM list: 0xc8000/0xa800
> cpu0 at mainbus0: (uniprocessor)
> cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 
> 500 MHz, 05-0a-02
> cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
> mtrr: K6-family MTRR support (2 registers)
> amdmsr0 at mainbus0
> pci0 at mainbus0 bus 0: configuration mode 1 (bios)
> 0:20:0: io address conflict 0x6100/0x100
> 0:20:0: io address conflict 0x6200/0x200
> pchb0 at pci0 dev 1 function 0 "AMD Geode LX" rev 0x33
> glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES
> vr0 at pci0 dev 6 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 11, address 
> 00:00:24:cb:4f:cc
> ukphy0 at vr0 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
> 0x004063, model 0x0034
> vr1 at pci0 dev 7 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 5, address 
> 00:00:24:cb:4f:cd
> ukphy1 at vr1 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
> 0x004063, model 0x0034
> vr2 at pci0 dev 8 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 9, address 
> 00:00:24:cb:4f:ce
> ukphy2 at vr2 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
> 0x004063, model 0x0034
> vr3 at pci0 dev 9 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 12, address 
> 00:00:24:cb:4f:cf
> ukphy3 at vr3 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 
> 0x004063, model 0x0034
> glxpcib0 at pci0 dev 20 function 0 "AMD CS5536 ISA" rev 0x03: rev 3, 32-bit 
> 3579545Hz timer, watchdog, gpio, i2c
> gpio0 at glxpcib0: 32 pins
> iic0 at glxpcib0
> pciide0 at pci0 dev 20 function 2 "AMD CS5536 IDE" rev 0x01: DMA, channel 0 
> wired to compatibility, channel 1 wired to compatibility
> wd0 at pciide0 channel 0 drive 0: 
> wd0: 1-sector PIO, LBA48, 7629MB, 15625216 sectors
> wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
> pciide0: channel 1 ignored (disabled)
> ohci0 at pci0 dev 21 function 0 "AMD CS5536 USB" rev 0x02: irq 15, version 
> 1.0, legacy support
> ehci0 at pci0 dev 21 function 1 "AMD CS5536 USB" rev 0x02: irq 15
> usb0 at ehci0: USB revision 2.0
> uhub0 at usb0 configuration 1 interface 0 "AMD EHCI root hub" rev 2.00/1.00 
> addr 1
> isa0 at glxpcib0
> isadma0 at isa0
> com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
> com0: console
> com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
> pckbc0 at isa0 port 0x60/5 irq 1 irq 12
> pckbc0: unable to establish interrupt for irq 12
> pckbd0 at pckbc0 (kbd slot)
> wskbd0 at pckbd0: console keyboard
> pcppi0 at isa0 port 0x61
> spkr0 at pcppi0
> nsclpcsio0 at isa0 port 0x2e/2: NSC PC87366 rev 9: GPIO VLM TMS
> gpio1 at nsclpcsio0: 29 pins
> npx0 at isa0 port 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-20 Thread radek
Hello Patrick,
I am sorry for the late reply.

> Do you consider memory an issue?
No, I do not. I have a bunch of old Soekris/net5501-70 and ALIX2d2/2d3 boxes
that I use for VPN testing.
Current testing set (6.5/i386) is net5501-70 <-> ALIX2d3
Production set (6.3/i386) is net5501-70 <-> ALIX2d2
I have also tried net5501-70 <-> net5501-70; the same VPN problem occurs.
It is unlikely that every box has a hardware issue.

> Unix load average can occasionally be deceiving.
I did not know that.

 net5501-70 
$top -d1 | head -n 4
load averages:  0.05,  0.01,  0.00RAC-fw65-test.PRAC 10:58:14
38 processes: 1 running, 35 idle, 1 dead, 1 on processor  up 3 days, 18:02
CPU states:  0.5% user,  0.0% nice,  0.4% sys,  0.0% spin,  0.2% intr, 98.8% 
idle
Memory: Real: 18M/267M act/tot Free: 222M Cache: 97M Swap: 0K/256M

 ALIX2d3 
$top -d1 | head -n 4
load averages:  0.00,  0.00,  0.00mon65.home 07:30:05
37 processes: 1 running, 35 idle, 1 on processor  up 13:46
CPU states:  0.3% user,  0.0% nice,  1.1% sys,  0.0% spin,  0.4% intr, 98.3% 
idle
Memory: Real: 125M/223M act/tot Free: 14M Cache: 47M Swap: 73M/256M



> What is the speed of your memory?
> What make of Ethernets are you running?
Dmesgs below

 net5501-70 
OpenBSD 6.5 (GENERIC) #2: Tue Jul 23 23:08:46 CEST 2019
r...@syspatch-65-i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
real mem  = 536363008 (511MB)
avail mem = 511311872 (487MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
pcibios0 at bios0: rev 2.0 @ 0xf/0x1
pcibios0: pcibios_get_intr_routing - function not supported
pcibios0: PCI IRQ Routing information unavailable.
pcibios0: PCI bus #0 is the last bus
bios0: ROM list: 0xc8000/0xa800
cpu0 at mainbus0: (uniprocessor)
cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 500 
MHz, 05-0a-02
cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
mtrr: K6-family MTRR support (2 registers)
amdmsr0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (bios)
0:20:0: io address conflict 0x6100/0x100
0:20:0: io address conflict 0x6200/0x200
pchb0 at pci0 dev 1 function 0 "AMD Geode LX" rev 0x33
glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES
vr0 at pci0 dev 6 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 11, address 
00:00:24:cb:4f:cc
ukphy0 at vr0 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, 
model 0x0034
vr1 at pci0 dev 7 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 5, address 
00:00:24:cb:4f:cd
ukphy1 at vr1 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, 
model 0x0034
vr2 at pci0 dev 8 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 9, address 
00:00:24:cb:4f:ce
ukphy2 at vr2 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, 
model 0x0034
vr3 at pci0 dev 9 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 12, address 
00:00:24:cb:4f:cf
ukphy3 at vr3 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, 
model 0x0034
glxpcib0 at pci0 dev 20 function 0 "AMD CS5536 ISA" rev 0x03: rev 3, 32-bit 
3579545Hz timer, watchdog, gpio, i2c
gpio0 at glxpcib0: 32 pins
iic0 at glxpcib0
pciide0 at pci0 dev 20 function 2 "AMD CS5536 IDE" rev 0x01: DMA, channel 0 
wired to compatibility, channel 1 wired to compatibility
wd0 at pciide0 channel 0 drive 0: 
wd0: 1-sector PIO, LBA48, 7629MB, 15625216 sectors
wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide0: channel 1 ignored (disabled)
ohci0 at pci0 dev 21 function 0 "AMD CS5536 USB" rev 0x02: irq 15, version 1.0, 
legacy support
ehci0 at pci0 dev 21 function 1 "AMD CS5536 USB" rev 0x02: irq 15
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "AMD EHCI root hub" rev 2.00/1.00 
addr 1
isa0 at glxpcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
com0: console
com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5 irq 1 irq 12
pckbc0: unable to establish interrupt for irq 12
pckbd0 at pckbc0 (kbd slot)
wskbd0 at pckbd0: console keyboard
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
nsclpcsio0 at isa0 port 0x2e/2: NSC PC87366 rev 9: GPIO VLM TMS
gpio1 at nsclpcsio0: 29 pins
npx0 at isa0 port 0xf0/16: reported by CPUID; using exception 16
usb1 at ohci0: USB revision 1.0
uhub1 at usb1 configuration 1 interface 0 "AMD OHCI root hub" rev 1.00/1.00 
addr 1
vscsi0 at root
scsibus1 at vscsi0: 256 targets
softraid0 at root
scsibus2 at softraid0: 256 targets
root on wd0a (2bf8b7abbbce37df.a) swap on wd0b dump on wd0b


 ALIX2d3 
OpenBSD 6.5 (GENERIC) #2: Tue Jul 23 23:08:46 CEST 2019
r...@syspatch-65-i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
real mem  = 267931648 (255MB)
avail mem = 247779328 (236MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: date 11/05/08, BIOS32 rev. 0 @ 0xfd088
pcibios0 at bios0: rev 2.1 @ 0xf/0x1
pcibios0: 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-19 Thread Patrick Dohman
Do you consider memory an issue?
What is the speed of your memory?
Unix load average can occasionally be deceiving.
What make of Ethernets are you running?
Regards
Patrick

> On Aug 19, 2019, at 5:28 AM, radek  wrote:
> 
> Hello Patrick,
> 
>> Does your ISP implement authoritative DNS?
>> Do you suspect a UDP issue?
> My VPN is configured with IPs, not with domain names. Do DNS and/or UDP
> matter anyway?
> 
>> Is a managed (switch) involved?
> No, it is not. I do not use any switches in my testing setup.
> GW1--ISP1_modem--.--ISP2_modem--GW2
> 
>> Has duplex ever been an issue?
> I have never noticed any duplex issue.
> 
> 
> On Sun, 18 Aug 2019 16:07:14 -0500
> Patrick Dohman  wrote:
> 
>> Does your ISP implement authoritative DNS?
>> Do you suspect a UDP issue?
>> Is a managed (switch) involved? Has duplex ever been an issue?
>> Regards
>> Patrick  
>> 
>>> On Aug 18, 2019, at 1:03 PM, Radek  wrote:
>>> 
>>> Hello,
>>> 
>>> I have two testing gateways (6.5/i386) with a site-to-site VPN between their
>>> LANs (OpenIKED).
>>> Both gateways are fully syspatched, have public IPs, and share the same
>>> iked/pf configuration.
>>> 
>>> Unfortunately, the network traffic over the VPN tunnel stalls a few times a
>>> day.
>>> 
>>> On one side I use a script that monitors the VPN tunnel with ping; it
>>> restarts iked and emails me if there is no ping over the VPN tunnel.
>>> Date: Sat, 17 Aug 2019 22:10:30 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 06:00:20 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 11:09:00 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 19:03:02 +0200 (CEST)
>>> 
>>> 
>>> In 6.3/i386 I have the same problem, but more frequently.
>>> Date: Sat, 17 Aug 2019 23:03:56 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 01:37:50 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 04:12:31 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 06:46:25 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 09:20:22 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 11:59:08 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 14:34:38 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 17:12:57 +0200 (CEST)
>>> Date: Sun, 18 Aug 2019 19:47:16 +0200 (CEST)
>>> 
>>> Do I have any bugs/deficiencies in my configs? Have I missed something?
>>> Is there any way to make it run uninterrupted?
>>> I would be very grateful if you could help me with this case.
>>> 
>>> $cat /etc/hostname.enc0
>>> up
>>> 
>>> $cat /etc/hostname.vr3
>>> inet 10.0.17.254 255.255.255.0 NONE description "LAN17"
>>> group trust
>>> 
>>> $cat /etc/iked.conf
>>> local_gw_RAC17  = "10.0.17.254" # lan_RAC
>>> local_lan_RAC17 = "10.0.17.0/24"
>>> remote_gw_MON   = "1.2.3.5" # fw_MON
>>> remote_lan_MON  = "172.16.1.0/24"
>>> ikev2 quick active esp \
>>> from $local_gw_RAC17 to $remote_gw_MON \
>>> from $local_lan_RAC17 to $remote_lan_MON peer $remote_gw_MON \
>>> childsa enc chacha20-poly1305 \
>>> psk "psk"
>>> 
>>> $cat /etc/pf.conf
>>> # RAC-fwTEST
>>> ext_if  = "vr0"
>>> lan_rac_if  = "vr3" # vr3 -
>>> lan_rac_local   = $lan_rac_if:network # 10.0.17.0/24
>>> backup_if   = "vr2" # vr2 - lewy port
>>> backup_local= $backup_if:network # 10.0.117/24
>>> 
>>> bud = "1.2.3.0/25"
>>> rdk_wy  = "1.2.3.4"
>>> rdk_mon = "1.2.3.5"
>>> panac_krz   = "1.2.3.6"
>>> panac_rac   = "1.2.3.7"
>>> 
>>> set fingerprints "/dev/null"
>>> set skip on { lo, enc0 }
>>> set block-policy drop
>>> set optimization normal
>>> set ruleset-optimization basic
>>> 
>>> antispoof quick for {lo0, $lan_rac_if, $backup_if }
>>> 
>>> match out log on $ext_if from { $lan_rac_local, $backup_local } nat-to 
>>> $ext_if set prio (3, 7)
>>> 
>>> block all
>>> 
>>> match in all scrub (no-df random-id)
>>> match out all scrub (no-df random-id)
>>> pass out on egress keep state
>>> 
>>> pass from { 10.0.201.0/24, $lan_rac_local, $backup_local } to any set prio 
>>> (3, 7) keep state
>>> 
>>> ssh_port= "1071"
>>> table  const { $bud, $rdk_wy, $rdk_mon, $panac_krz, $panac_rac, 
>>> 10.0.2.0/24, 10.0.15.0/24, 10.0.100.0/24 }
>>> table  persist counters
>>> block from 
>>> pass in log quick inet proto tcp from  to $ext_if port $ssh_port 
>>> flags S/SA \
>>>  set prio (7, 7) keep state \
>>>  (max-src-conn 15, max-src-conn-rate 2/10, overload  flush 
>>> global)
>>> 
>>> icmp_types  = "{ echoreq, unreach }"
>>> pass inet proto icmp all icmp-type $icmp_types \
>>>  set prio (7, 7) keep state
>>> 
>>> table  const { $rdk_mon, $panac_rac, $panac_krz }
>>> pass out quick on egress proto esp from (egress:0) to
>>>set prio (6, 7) keep state
>>> pass out quick on egress proto udp from (egress:0) to  port 
>>> {500, 4500} set prio (6, 7) keep state
>>> pass  in quick on egress proto esp from  to (egress:0)   
>>>set prio (6, 7) keep state
>>> pass  in quick on egress proto udp from  to (egress:0) port 
>>> {500, 4500} set prio (6, 7) keep state
>>> pass out quick on trust received-on enc0 set prio (6, 7) keep state
>>> 
>>> pass in on egress proto udp from any to (egress:0) 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-19 Thread radek
Hello Patrick,

> Does your ISP implement authoritative DNS?
> Do you suspect a UDP issue?
My VPN is configured with IPs, not with domain names. Do DNS and/or UDP
matter anyway?

> Is a managed (switch) involved?
No, it is not. I do not use any switches in my testing setup.
GW1--ISP1_modem--.--ISP2_modem--GW2

> Has duplex ever been an issue?
I have never noticed any duplex issue.


On Sun, 18 Aug 2019 16:07:14 -0500
Patrick Dohman  wrote:

> Does your ISP implement authoritative DNS?
> Do you suspect a UDP issue?
> Is a managed (switch) involved? Has duplex ever been an issue?
> Regards
> Patrick  
> 
> > On Aug 18, 2019, at 1:03 PM, Radek  wrote:
> > 
> > Hello,
> > 
> > I have two testing gateways (6.5/i386) with a site-to-site VPN between their
> > LANs (OpenIKED).
> > Both gateways are fully syspatched, have public IPs, and share the same
> > iked/pf configuration.
> > 
> > Unfortunately, the network traffic over the VPN tunnel stalls a few times a
> > day.
> > 
> > On one side I use a script that monitors the VPN tunnel with ping; it
> > restarts iked and emails me if there is no ping over the VPN tunnel.
> > Date: Sat, 17 Aug 2019 22:10:30 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 06:00:20 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 11:09:00 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 19:03:02 +0200 (CEST)
> > 
> > 
> > In 6.3/i386 I have the same problem, but more frequently.
> > Date: Sat, 17 Aug 2019 23:03:56 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 01:37:50 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 04:12:31 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 06:46:25 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 09:20:22 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 11:59:08 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 14:34:38 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 17:12:57 +0200 (CEST)
> > Date: Sun, 18 Aug 2019 19:47:16 +0200 (CEST)
> > 
> > Do I have any bugs/deficiencies in my configs? Have I missed something?
> > Is there any way to make it run uninterrupted?
> > I would be very grateful if you could help me with this case.
> > 
> > $cat /etc/hostname.enc0
> > up
> > 
> > $cat /etc/hostname.vr3
> > inet 10.0.17.254 255.255.255.0 NONE description "LAN17"
> > group trust
> > 
> > $cat /etc/iked.conf
> > local_gw_RAC17  = "10.0.17.254" # lan_RAC
> > local_lan_RAC17 = "10.0.17.0/24"
> > remote_gw_MON   = "1.2.3.5" # fw_MON
> > remote_lan_MON  = "172.16.1.0/24"
> > ikev2 quick active esp \
> > from $local_gw_RAC17 to $remote_gw_MON \
> > from $local_lan_RAC17 to $remote_lan_MON peer $remote_gw_MON \
> > childsa enc chacha20-poly1305 \
> > psk "psk"
> > 
> > $cat /etc/pf.conf
> > # RAC-fwTEST
> > ext_if  = "vr0"
> > lan_rac_if  = "vr3" # vr3 -
> > lan_rac_local   = $lan_rac_if:network # 10.0.17.0/24
> > backup_if   = "vr2" # vr2 - lewy port
> > backup_local= $backup_if:network # 10.0.117/24
> > 
> > bud = "1.2.3.0/25"
> > rdk_wy  = "1.2.3.4"
> > rdk_mon = "1.2.3.5"
> > panac_krz   = "1.2.3.6"
> > panac_rac   = "1.2.3.7"
> > 
> > set fingerprints "/dev/null"
> > set skip on { lo, enc0 }
> > set block-policy drop
> > set optimization normal
> > set ruleset-optimization basic
> > 
> > antispoof quick for {lo0, $lan_rac_if, $backup_if }
> > 
> > match out log on $ext_if from { $lan_rac_local, $backup_local } nat-to 
> > $ext_if set prio (3, 7)
> > 
> > block all
> > 
> > match in all scrub (no-df random-id)
> > match out all scrub (no-df random-id)
> > pass out on egress keep state
> > 
> > pass from { 10.0.201.0/24, $lan_rac_local, $backup_local } to any set prio 
> > (3, 7) keep state
> > 
> > ssh_port= "1071"
> > table  const { $bud, $rdk_wy, $rdk_mon, $panac_krz, $panac_rac, 
> > 10.0.2.0/24, 10.0.15.0/24, 10.0.100.0/24 }
> > table  persist counters
> > block from 
> > pass in log quick inet proto tcp from  to $ext_if port $ssh_port 
> > flags S/SA \
> >set prio (7, 7) keep state \
> >(max-src-conn 15, max-src-conn-rate 2/10, overload  
> > flush global)
> > 
> > icmp_types  = "{ echoreq, unreach }"
> > pass inet proto icmp all icmp-type $icmp_types \
> >set prio (7, 7) keep state
> > 
> > table  const { $rdk_mon, $panac_rac, $panac_krz }
> > pass out quick on egress proto esp from (egress:0) to
> >set prio (6, 7) keep state
> > pass out quick on egress proto udp from (egress:0) to  port 
> > {500, 4500} set prio (6, 7) keep state
> > pass  in quick on egress proto esp from  to (egress:0)   
> >set prio (6, 7) keep state
> > pass  in quick on egress proto udp from  to (egress:0) port 
> > {500, 4500} set prio (6, 7) keep state
> > pass out quick on trust received-on enc0 set prio (6, 7) keep state
> > 
> > pass in on egress proto udp from any to (egress:0) port 
> > {isakmp,ipsec-nat-t} set prio (6,7) keep state
> > pass in on egress proto {ah,esp} set prio (6,7) keep state
> > 
> > # By default, do not permit remote connections to X11
> > block return in on ! lo0 proto tcp to port 6000:6010
> > 
> > $cat 

Re: [OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-18 Thread Patrick Dohman
Does your ISP implement authoritative DNS?
Do you suspect a UDP issue?
Is a managed (switch) involved? Has duplex ever been an issue?
Regards
Patrick  

> On Aug 18, 2019, at 1:03 PM, Radek  wrote:
> 
> Hello,
> 
> I have two testing gateways (6.5/i386) with a site-to-site VPN between their
> LANs (OpenIKED).
> Both gateways are fully syspatched, have public IPs, and share the same
> iked/pf configuration.
> 
> Unfortunately, the network traffic over the VPN tunnel stalls a few times a
> day.
> 
> On one side I use a script that monitors the VPN tunnel with ping; it
> restarts iked and emails me if there is no ping over the VPN tunnel.
> Date: Sat, 17 Aug 2019 22:10:30 +0200 (CEST)
> Date: Sun, 18 Aug 2019 06:00:20 +0200 (CEST)
> Date: Sun, 18 Aug 2019 11:09:00 +0200 (CEST)
> Date: Sun, 18 Aug 2019 19:03:02 +0200 (CEST)
> 
> 
> In 6.3/i386 I have the same problem, but more frequently.
> Date: Sat, 17 Aug 2019 23:03:56 +0200 (CEST)
> Date: Sun, 18 Aug 2019 01:37:50 +0200 (CEST)
> Date: Sun, 18 Aug 2019 04:12:31 +0200 (CEST)
> Date: Sun, 18 Aug 2019 06:46:25 +0200 (CEST)
> Date: Sun, 18 Aug 2019 09:20:22 +0200 (CEST)
> Date: Sun, 18 Aug 2019 11:59:08 +0200 (CEST)
> Date: Sun, 18 Aug 2019 14:34:38 +0200 (CEST)
> Date: Sun, 18 Aug 2019 17:12:57 +0200 (CEST)
> Date: Sun, 18 Aug 2019 19:47:16 +0200 (CEST)
> 
> Do I have any bugs/deficiencies in my configs? Have I missed something?
> Is there any way to make it run uninterrupted?
> I would be very grateful if you could help me with this case.
> 
> $cat /etc/hostname.enc0
> up
> 
> $cat /etc/hostname.vr3
> inet 10.0.17.254 255.255.255.0 NONE description "LAN17"
> group trust
> 
> $cat /etc/iked.conf
> local_gw_RAC17  = "10.0.17.254" # lan_RAC
> local_lan_RAC17 = "10.0.17.0/24"
> remote_gw_MON   = "1.2.3.5" # fw_MON
> remote_lan_MON  = "172.16.1.0/24"
> ikev2 quick active esp \
> from $local_gw_RAC17 to $remote_gw_MON \
> from $local_lan_RAC17 to $remote_lan_MON peer $remote_gw_MON \
> childsa enc chacha20-poly1305 \
> psk "psk"
> 
> $cat /etc/pf.conf
> # RAC-fwTEST
> ext_if  = "vr0"
> lan_rac_if  = "vr3" # vr3 -
> lan_rac_local   = $lan_rac_if:network # 10.0.17.0/24
> backup_if   = "vr2" # vr2 - lewy port
> backup_local= $backup_if:network # 10.0.117/24
> 
> bud = "1.2.3.0/25"
> rdk_wy  = "1.2.3.4"
> rdk_mon = "1.2.3.5"
> panac_krz   = "1.2.3.6"
> panac_rac   = "1.2.3.7"
> 
> set fingerprints "/dev/null"
> set skip on { lo, enc0 }
> set block-policy drop
> set optimization normal
> set ruleset-optimization basic
> 
> antispoof quick for {lo0, $lan_rac_if, $backup_if }
> 
> match out log on $ext_if from { $lan_rac_local, $backup_local } nat-to 
> $ext_if set prio (3, 7)
> 
> block all
> 
> match in all scrub (no-df random-id)
> match out all scrub (no-df random-id)
> pass out on egress keep state
> 
> pass from { 10.0.201.0/24, $lan_rac_local, $backup_local } to any set prio 
> (3, 7) keep state
> 
> ssh_port= "1071"
> table  const { $bud, $rdk_wy, $rdk_mon, $panac_krz, $panac_rac, 
> 10.0.2.0/24, 10.0.15.0/24, 10.0.100.0/24 }
> table  persist counters
> block from 
> pass in log quick inet proto tcp from  to $ext_if port $ssh_port 
> flags S/SA \
>set prio (7, 7) keep state \
>(max-src-conn 15, max-src-conn-rate 2/10, overload  flush 
> global)
> 
> icmp_types  = "{ echoreq, unreach }"
> pass inet proto icmp all icmp-type $icmp_types \
>set prio (7, 7) keep state
> 
> table  const { $rdk_mon, $panac_rac, $panac_krz }
> pass out quick on egress proto esp from (egress:0) to  
>  set prio (6, 7) keep state
> pass out quick on egress proto udp from (egress:0) to  port {500, 
> 4500} set prio (6, 7) keep state
> pass  in quick on egress proto esp from  to (egress:0) 
>  set prio (6, 7) keep state
> pass  in quick on egress proto udp from  to (egress:0) port {500, 
> 4500} set prio (6, 7) keep state
> pass out quick on trust received-on enc0 set prio (6, 7) keep state
> 
> pass in on egress proto udp from any to (egress:0) port {isakmp,ipsec-nat-t} 
> set prio (6,7) keep state
> pass in on egress proto {ah,esp} set prio (6,7) keep state
> 
> # By default, do not permit remote connections to X11
> block return in on ! lo0 proto tcp to port 6000:6010
> 
> $cat iked_monitor.sh
> #!/bin/sh
> while true
> do
> 	vpn=`ping -c 3 -w 1 -I 10.0.17.254 172.16.1.254 | grep packets | awk -F " " '{print $4}'`
> 
> 	if [ "${vpn}" -eq 0 ] ; then
> 		mon=`ping -c 3 -w 1 the_other_side_WAN_IP | grep packets | awk -F " " '{print $4}'`
> 		wan=`ping -c 3 -w 1 8.8.8.8 | grep packets | awk -F " " '{print $4}'`
> 
> 		if [ "${mon}" -gt 0 ] && [ "${wan}" -gt 0 ] ; then
> 			echo "vpn: ${vpn}, mon: ${mon}, wan: ${wan}" | mail -s "no ping through VPN RACTEST-MON! restarting iked!" em...@example.com
> 			rcctl restart iked
> 		fi
> 	fi
> 	sleep 32
> done
> 
> 
> -- 
> Radek
> 



[OpenIKED] Network traffic over VPN site-to-site tunnel stalls few times a day

2019-08-18 Thread Radek
Hello,

I have two testing gateways (6.5/i386) with a site-to-site VPN between their
LANs (OpenIKED).
Both gateways are fully syspatched, have public IPs, and share the same
iked/pf configuration.

Unfortunately, the network traffic over the VPN tunnel stalls a few times a day.

On one side I use a script that monitors the VPN tunnel with ping; it restarts
iked and emails me if there is no ping over the VPN tunnel.
Date: Sat, 17 Aug 2019 22:10:30 +0200 (CEST)
Date: Sun, 18 Aug 2019 06:00:20 +0200 (CEST)
Date: Sun, 18 Aug 2019 11:09:00 +0200 (CEST)
Date: Sun, 18 Aug 2019 19:03:02 +0200 (CEST)


In 6.3/i386 I have the same problem, but more frequently.
Date: Sat, 17 Aug 2019 23:03:56 +0200 (CEST)
Date: Sun, 18 Aug 2019 01:37:50 +0200 (CEST)
Date: Sun, 18 Aug 2019 04:12:31 +0200 (CEST)
Date: Sun, 18 Aug 2019 06:46:25 +0200 (CEST)
Date: Sun, 18 Aug 2019 09:20:22 +0200 (CEST)
Date: Sun, 18 Aug 2019 11:59:08 +0200 (CEST)
Date: Sun, 18 Aug 2019 14:34:38 +0200 (CEST)
Date: Sun, 18 Aug 2019 17:12:57 +0200 (CEST)
Date: Sun, 18 Aug 2019 19:47:16 +0200 (CEST)

Do I have any bugs/deficiencies in my configs? Have I missed something?
Is there any way to make it run uninterrupted?
I would be very grateful if you could help me with this case.

$cat /etc/hostname.enc0
up

$cat /etc/hostname.vr3
inet 10.0.17.254 255.255.255.0 NONE description "LAN17"
group trust

$cat /etc/iked.conf
local_gw_RAC17  = "10.0.17.254" # lan_RAC
local_lan_RAC17 = "10.0.17.0/24"
remote_gw_MON   = "1.2.3.5" # fw_MON
remote_lan_MON  = "172.16.1.0/24"
ikev2 quick active esp \
from $local_gw_RAC17 to $remote_gw_MON \
from $local_lan_RAC17 to $remote_lan_MON peer $remote_gw_MON \
childsa enc chacha20-poly1305 \
psk "psk"
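
Since tunnels that stall a few times a day often do so around rekey time, one
thing worth testing is pinning explicit SA lifetimes so both ends rekey on the
same schedule. A hypothetical variant of the policy above; the lifetime values
here are illustrative guesses, not the poster's configuration (see iked.conf(5)
for the exact grammar):

```
# Sketch only: same flows as above, with explicit IKE and Child SA lifetimes
ikev2 quick active esp \
from $local_gw_RAC17 to $remote_gw_MON \
from $local_lan_RAC17 to $remote_lan_MON peer $remote_gw_MON \
childsa enc chacha20-poly1305 \
ikelifetime 4h lifetime 1h \
psk "psk"
```

If stalls stop lining up with the configured lifetime, that points at a rekey
problem rather than a transport one.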

$cat /etc/pf.conf
# RAC-fwTEST
ext_if  = "vr0"
lan_rac_if  = "vr3" # vr3 -
lan_rac_local   = $lan_rac_if:network # 10.0.17.0/24
backup_if   = "vr2" # vr2 - lewy port
backup_local= $backup_if:network # 10.0.117/24

bud = "1.2.3.0/25"
rdk_wy  = "1.2.3.4"
rdk_mon = "1.2.3.5"
panac_krz   = "1.2.3.6"
panac_rac   = "1.2.3.7"

set fingerprints "/dev/null"
set skip on { lo, enc0 }
set block-policy drop
set optimization normal
set ruleset-optimization basic

antispoof quick for {lo0, $lan_rac_if, $backup_if }

match out log on $ext_if from { $lan_rac_local, $backup_local } nat-to $ext_if set prio (3, 7)

block all

match in all scrub (no-df random-id)
match out all scrub (no-df random-id)
pass out on egress keep state

pass from { 10.0.201.0/24, $lan_rac_local, $backup_local } to any set prio (3, 7) keep state

ssh_port= "1071"
table  const { $bud, $rdk_wy, $rdk_mon, $panac_krz, $panac_rac, 10.0.2.0/24, 10.0.15.0/24, 10.0.100.0/24 }
table  persist counters
block from 
pass in log quick inet proto tcp from  to $ext_if port $ssh_port flags S/SA \
	set prio (7, 7) keep state \
	(max-src-conn 15, max-src-conn-rate 2/10, overload  flush global)

icmp_types  = "{ echoreq, unreach }"
pass inet proto icmp all icmp-type $icmp_types \
set prio (7, 7) keep state

table  const { $rdk_mon, $panac_rac, $panac_krz }
pass out quick on egress proto esp from (egress:0) to  set prio (6, 7) keep state
pass out quick on egress proto udp from (egress:0) to  port {500, 4500} set prio (6, 7) keep state
pass  in quick on egress proto esp from  to (egress:0) set prio (6, 7) keep state
pass  in quick on egress proto udp from  to (egress:0) port {500, 4500} set prio (6, 7) keep state
pass out quick on trust received-on enc0 set prio (6, 7) keep state

pass in on egress proto udp from any to (egress:0) port {isakmp,ipsec-nat-t} set prio (6,7) keep state
pass in on egress proto {ah,esp} set prio (6,7) keep state

# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

$cat iked_monitor.sh
#!/bin/sh
while true
do
	vpn=`ping -c 3 -w 1 -I 10.0.17.254 172.16.1.254 | grep packets | awk -F " " '{print $4}'`

	if [ "${vpn}" -eq 0 ] ; then
		mon=`ping -c 3 -w 1 the_other_side_WAN_IP | grep packets | awk -F " " '{print $4}'`
		wan=`ping -c 3 -w 1 8.8.8.8 | grep packets | awk -F " " '{print $4}'`

		if [ "${mon}" -gt 0 ] && [ "${wan}" -gt 0 ] ; then
			echo "vpn: ${vpn}, mon: ${mon}, wan: ${wan}" | mail -s "no ping through VPN RACTEST-MON! restarting iked!" em...@example.com
			rcctl restart iked
		fi
	fi
	sleep 32
done
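
The script above pulls the received-packet count out of ping's summary line by
fixed field position (`$4`), which silently breaks if the output format shifts.
A small defensive sketch of the same extraction; the function name and sample
line are mine, not from the script:

```shell
#!/bin/sh
# received_count: read ping(8) output on stdin and print the number of
# packets received, located by the "packets received" phrase rather than
# by a fixed field position.
received_count() {
    awk '/packets received/ {
        for (i = 2; i <= NF; i++)
            if ($i == "packets" && $(i + 1) ~ /^received/) {
                print $(i - 1)
                exit
            }
    }'
}

# Demo against a canned summary line (no network needed):
echo "3 packets transmitted, 2 packets received, 33.3% packet loss" | received_count
# prints: 2
```

Testing against canned text like this also lets the parsing be checked without
root or a live tunnel.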


-- 
Radek