I meant that this is the new IP (after running "ipsec update"), but
ipsec.secrets still refers to the old IP and therefore reports that no
shared key was found for the new IP.
On 2017-07-23 at 00:51, Dusan Ilic wrote:
initiating IKE_SA to 85.24.241.x
generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) ]
sending packet: from 94.254.123.x[500] to 85.24.241.x[500]
received packet: from 85.24.241.x[500] to 94.254.123.x[500]
parsed IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) CERTREQ N(MULT_AUTH) ]
received 1 cert requests for an unknown ca
authentication of 'local.hostname' (myself) with pre-shared key
no shared key found for 'local.hostname' - '85.24.241.x'
85.24.241.x is the old IP of the other peer.
On 2017-07-23 at 00:47, Dusan Ilic wrote:
I think the problem is that the hostname in ipsec.secrets is also
resolved to the old IP, so the PSKs don't match.
Jul 23 00:44:25 GW pluto[7661]: loading secrets from
"/etc/ipsec.secrets"
Jul 23 00:44:25 GW pluto[7661]: loaded PSK secret for 85.24.241.x
How can I use the % feature in ipsec.secrets?
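I'm not certain the % prefix is honored in ipsec.secrets at all; what does avoid the stale-IP problem is selecting the secret by identity (FQDN) or with a catch-all entry, so the PSK lookup no longer depends on a resolved address. A sketch (the hostnames and the secret are placeholders):

```
# /etc/ipsec.secrets

# Select the PSK by the peers' identities instead of resolved IPs:
local.hostname remote.hostname : PSK "replace-with-real-secret"

# Or a catch-all entry with no selectors, which matches any peer:
: PSK "replace-with-real-secret"
```

With FQDN selectors the match is done against the IKE identities, so a changed IP behind the same name no longer breaks the lookup.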
On 2017-07-22 at 21:32, Noel Kuntze wrote:
On 22.07.2017 19:57, Dusan Ilic wrote:
Okay, the remote endpoint doesn't know that the other side has
changed its IP; however, according to the documentation below, a
connection should still be possible if the end with the new IP initiates it.
"parameter right|leftallowany parameters helps to handle
the case where both peers possess dynamic IP addresses that are
usually resolved using DynDNS or a similar service.
The configuration
right=peer.foo.bar
rightallowany=yes
can be used by the initiator to start up a connection to a peer
by resolving peer.foo.bar into the currently
allocated IP address.
Thanks to the rightallowany flag the connection behaves later on as
right=%any
so that the peer can rekey the connection as an initiator when his
IP address changes. An alternative notation is
right=%peer.foo.bar
which will implicitly set rightallowany=yes
"
However, strongSwan on the side that has changed IP is obviously
aware of its new IP without restarting (how?), so why does it give
the following output when trying to initiate the connection?
initiating IKE_SA to 94.254.123.x
generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
sending packet: from 85.24.244.x[500] to 94.254.123.x[500] (464 bytes)
received packet: from 94.254.123.x[500] to 85.24.244.x[500] (36 bytes)
parsed IKE_SA_INIT response 0 [ N(NO_PROP) ]
received NO_PROPOSAL_CHOSEN notify error
establishing connection 'wesafe' failed
The remote peer sends that error. What does it log?
Why does it report NO_PROPOSAL_CHOSEN until I restart the remote
endpoint?
If I have understood it correctly, the % in front of the hostname
should allow a connection attempt from whatever IP the peer may
have?
---- Noel Kuntze wrote ----
Seems like it.
On 22.07.2017 11:17, Dusan Ilic wrote:
Hi Noel,
So, are you saying that there is no way to make strongSwan aware
that a domain name has changed its IP address without restarting it
manually?
On 2017-07-22 at 01:49, Noel Kuntze wrote:
Hi Dusan,
I took a "quick" look at the code[1] and it seems the DNS names
are only resolved once the result replaces the original destination.
So it has nothing to do with caching. Just with a disadvantageous
design decision.
Kind regards
Noel
[1]
https://github.com/strongswan/strongswan/blob/master/src/libcharon/sa/ike_sa.c#L1470
On 21.07.2017 00:19, Dusan Ilic wrote:
Okay, so I just did a forced release/renew on the same endpoint;
dynamic DNS picked up the new IP shortly afterwards (TTL 5 min), and
after 10 minutes or so one endpoint reconnected (a
Fortigate; I have two endpoints). However, the troubling endpoint
(also strongSwan) hasn't connected yet.
When I log in to the remote endpoint and ping the domain
name, it resolves to the new IP, but below is the output from
both sides of the tunnel when manually running the ipsec
up command.
On the remote endpoint:
First of all, running ipsec statusall shows the connection as if the
tunnel were still up; maybe that's the problem, that strongSwan
thinks it's up even though it isn't?
ESTABLISHED 46 minutes ago,
94.254.123.x[local.host.name]...85.24.241.x[85.24.241.x]
IKE SPIs: 1dffaab2cafa2f48_i 15e867fa149370f0_r*, pre-shared key
reauthentication in 22 hours
IKE proposal: AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
INSTALLED, TUNNEL, ESP SPIs: cf0473a2_i c45028cb_o
AES_CBC_128/HMAC_SHA1_96, 8275 bytes_i (2616s ago), 81235
bytes_o (8s ago), rekeying in 7 hours
192.168.1.0/24 === 10.1.1.0/26
Output of running ipsec up for the connection:
initiating IKE_SA to 85.24.241.x
generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) ]
sending packet: from 94.254.123.x[500] to 85.24.241.x[500]
retransmit 1 of request with message ID 0
sending packet: from 94.254.123.x[500] to 85.24.241.x[500]
On the local endpoint (with new IP):
initiating IKE_SA to 94.254.123.x
generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
sending packet: from 85.24.244.x[500] to 94.254.123.x[500] (464
bytes)
received packet: from 94.254.123.x[500] to 85.24.244.x[500] (36
bytes)
parsed IKE_SA_INIT response 0 [ N(NO_PROP) ]
received NO_PROPOSAL_CHOSEN notify error
establishing connection 'wesafe' failed
And after restarting strongSwan on the remote endpoint, it
connects again...
On 2017-07-20 at 12:00, Dusan Ilic wrote:
Hi,
I have some issues with a site-to-site tunnel between two dynamic
endpoints. One side almost never changes its IP address (it is on
DHCP, however); the other side changes more frequently. Both
endpoints' IP addresses are kept in dynamic DNS and have a
corresponding domain name associated at all times.
Today one side changed IP, and the new IP has been updated in
public DNS. I understand DNS propagation and caching, but I don't
seem to understand how strongSwan handles and acts upon it.
For example, I have set keyingtries to %forever on both sides,
so that they continuously try to reconnect when the connection is
lost. I have also changed the global initiation retry parameter from
the default 0 to 60 s, so that unsuccessful connection attempts are
retried.
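For reference, the two settings described above would look roughly like this; a sketch, assuming the conn name and that the "global initiation parameter" refers to charon's retry_initiate_interval:

```
# ipsec.conf
conn wesafe
    keyingtries=%forever   # keep retrying after the connection is lost

# strongswan.conf
charon {
    # retry failed initiations every 60 s instead of giving up (default 0)
    retry_initiate_interval = 60
}
```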
Now the other side is still trying to reconnect to the old IP;
however, if I ping the hostname from that endpoint, it resolves
to the new, correct IP. It seems like strongSwan is caching the
old DNS result somehow?
Finally I tried restarting strongSwan, and then it picked up the
new IP.
I would like a system that handles this by itself, so I don't
have to intervene manually every time one of the endpoints gets
a new IP. How can this best be achieved?
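One self-contained approach is a small watchdog run from cron that re-resolves the peer's name and re-initiates when the address changes. A sketch; the peer name, cache path, and the conn name "wesafe" (taken from the logs above) are assumptions to adapt:

```shell
#!/bin/sh
# Watchdog sketch: re-resolve the peer and poke strongSwan when its IP moves.
PEER=${PEER:-peer.foo.bar}
CACHE=${CACHE:-/var/run/peer-ip.cache}

# Returns 0 (true) and refreshes the cache when the freshly resolved
# address differs from the cached one; returns 1 otherwise.
peer_changed() {
    current=$(getent hosts "$PEER" | awk '{ print $1; exit }')
    [ -n "$current" ] || return 1        # DNS failed: treat as unchanged
    previous=$(cat "$CACHE" 2>/dev/null)
    [ "$current" = "$previous" ] && return 1
    echo "$current" > "$CACHE"
}

if peer_changed && command -v ipsec >/dev/null 2>&1; then
    ipsec update          # let the daemon re-resolve its addresses
    ipsec down wesafe     # tear down the IKE_SA pointing at the old IP
    ipsec up wesafe       # re-initiate toward the new IP
fi
```

Running it every minute from cron keeps the reaction time close to the DNS TTL without restarting the daemon by hand.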