On 05/20/2015 03:08 PM, Paul Wouters wrote:
>> The error was triggered again, but unfortunately the leak detector
>> still shows the same exact leaks.
> Which were? There must have been a lot of them?
These are the exact same leaks, in the exact same quantity, as in my
original report; no additional leaks are building up over time.
May 20 18:01:57 sanfrancisco pluto[7864]: leak: 4 * struct event in
event_schedule(), item size: 32
May 20 18:01:57 sanfrancisco pluto[7864]: leak: pluto crypto helpers,
item size: 64
May 20 18:01:57 sanfrancisco pluto[7864]: leak detective found 5 leaks,
total size 96
>> The following errors were printed during the shutdown process:
>> May 20 18:01:57 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
>> 69.90.78.100: deleting connection "wonderproxy-L2TP" instance with
>> peer 69.90.78.100 {isakmp=#0/ipsec=#0}
>> May 20 18:01:57 sanfrancisco pluto[7864]: "wonderproxy-L2TP" #34623:
>> deleting state (STATE_QUICK_R1)
> You had 34623 states since startup. Usually that indicates tunnels that
> are infinitely failing to establish. I suspect something in the error
> path is leaking that memory.
Our monitoring system repeatedly sets up and tears down tunnels; its
purpose is to verify that the system is able to accept incoming requests
and properly create the tunnel. This happens every 5 minutes, from each
of our 3 monitoring systems. So that is 864 tunnels a day, and with
2 states used per tunnel, 34623 is roughly 20 days' worth of testing
tunnels.
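The arithmetic above checks out; a quick back-of-the-envelope confirmation
(numbers taken straight from this thread):

```python
# Sanity check of the tunnel-count math from this thread.
monitors = 3              # monitoring servers
interval_min = 5          # one test tunnel every 5 minutes, per monitor
states_per_tunnel = 2     # one ISAKMP SA + one IPsec SA per test

tunnels_per_day = monitors * (24 * 60 // interval_min)
print(tunnels_per_day)    # 864

days = 34623 / states_per_tunnel / tunnels_per_day
print(round(days))        # 20
```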
>> May 20 18:01:57 sanfrancisco pluto[7864]: "wonderproxy-L2TP" #34623:
>> ERROR: netlink response for Del SA [email protected]
>> included errno 3: No such process
> Those are SAs the kernel deleted but pluto thought should still
> be there. I'm confused about what would have happened to those.
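For reference, errno 3 in that netlink response is ESRCH ("No such
process"): the kernel is reporting that the SA pluto asked it to delete
is already gone. A one-liner to confirm the mapping:

```python
# errno 3 from the "Del SA" netlink response decoded: it is ESRCH,
# i.e. the kernel no longer has the SA pluto tried to delete.
import errno
import os

print(errno.ESRCH)               # 3
print(os.strerror(errno.ESRCH))  # No such process
```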
These are ones that were detected as failures by our monitoring system.
Here is the full log for #34622/34623:
May 20 17:49:13 sanfrancisco pluto[7864]: packet from 69.90.78.100:500:
received Vendor ID payload [Dead Peer Detection]
May 20 17:49:13 sanfrancisco pluto[7864]: packet from 69.90.78.100:500:
received Vendor ID payload [FRAGMENTATION]
May 20 17:49:13 sanfrancisco pluto[7864]: packet from 69.90.78.100:500:
received Vendor ID payload [RFC 3947]
May 20 17:49:13 sanfrancisco pluto[7864]: packet from 69.90.78.100:500:
ignoring Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03]
May 20 17:49:13 sanfrancisco pluto[7864]: packet from 69.90.78.100:500:
ignoring Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n]
May 20 17:49:13 sanfrancisco pluto[7864]: packet from 69.90.78.100:500:
ignoring Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02]
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: enabling possible NAT-traversal with method RFC
3947 (NAT-Traversal)
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: responding to Main Mode from unknown peer 69.90.78.100
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: transition from state STATE_MAIN_R0 to state
STATE_MAIN_R1
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: STATE_MAIN_R1: sent MR1, expecting MI2
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: NAT-Traversal: Result using RFC 3947
(NAT-Traversal) sender port 500: no NA
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: transition from state STATE_MAIN_R1 to state
STATE_MAIN_R2
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: STATE_MAIN_R2: sent MR2, expecting MI3
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: Main mode peer ID is ID_IPV4_ADDR: '69.90.78.100'
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: transition from state STATE_MAIN_R2 to state
STATE_MAIN_R3
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: STATE_MAIN_R3: sent MR3, ISAKMP SA established
{auth=PRESHARED_KEY cipher=a
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34622: the peer proposed: 198.199.98.122/32:17/1701 ->
69.90.78.100/32:17/1701
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: responding to Quick Mode proposal {msgid:0fc8804b}
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: us: 198.199.98.122<198.199.98.122>:17/1701
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: them: 69.90.78.100:17/1701
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: transition from state STATE_QUICK_R0 to state
STATE_QUICK_R1
May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: STATE_QUICK_R1: sent QR1, inbound IPsec SA
installed, expecting QI2
*May 20 17:49:13 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: unable to popen up-host command*
May 20 17:49:23 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: discarding duplicate packet; already STATE_QUICK_R1
May 20 17:49:33 sanfrancisco pluto[7864]: "wonderproxy-L2TP"[15046]
69.90.78.100 #34623: discarding duplicate packet; already STATE_QUICK_R1
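The starred "unable to popen up-host command" line looks like the real
symptom here: popen(3) fails when the child shell cannot be created, e.g.
file-descriptor exhaustion (EMFILE) or ENOMEM, exactly the kind of failure
a slow resource leak would eventually produce. A rough Python analogue of
that failure path (not pluto's actual code; `run_updown` and the echo
command are hypothetical stand-ins):

```python
# Sketch: what a popen()-style failure looks like. In pluto, the updown
# script is launched via popen(); if the child cannot be created, the
# call fails and pluto logs "unable to popen up-host command".
import subprocess

def run_updown(cmd):
    """Hypothetical stand-in for pluto's up-host invocation."""
    try:
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return out.stdout
    except OSError as e:
        # fork/exec failed (e.g. EMFILE, ENOMEM): the analogue of
        # popen() returning NULL.
        return f"unable to popen: {e}"

print(run_updown("echo up-host"))  # up-host
```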
> 198.199.98.122 - is that the machine's local IP?
198.199.98.122 is the machine's local IP; 69.90.78.100, 176.58.89.113,
and 198.58.96.25 are the IPs of our monitoring servers.
> If this is just 1-3 machines, then you must see a continuous log of IKE
> failures?
No, I don't see any failures/errors in the log until this issue is
triggered.
--Will
_______________________________________________
Swan mailing list
[email protected]
https://lists.libreswan.org/mailman/listinfo/swan