[strongSwan] Can't get logging to work
Hi,

I am using the latest version of strongSwan with CentOS 7 and I cannot get
logging to work. My strongswan.conf is shown below. I see the logs in
/var/log/secure and /var/log/messages, but not in the file I define.

[root@CENTOS7 ~]# cat /etc/strongswan/strongswan.conf
# strongswan.conf - strongSwan configuration file
#
# Refer to the strongswan.conf(5) manpage for details
#
# Configuration changes should be made in the included files

charon {
    load_modular = yes
    plugins {
        include strongswan.d/charon/*.conf
    }
    filelog {
        /var/log/strongswan/strongswan.log {
            append = no
            default = 1
            flush_line = yes
        }
    }
}

include strongswan.d/*.conf

Any ideas as to why it won't work?

Thanks,
Mark

_______________________________________________
Users mailing list
Users@lists.strongswan.org
https://lists.strongswan.org/mailman/listinfo/users
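[A common cause of this symptom on CentOS 7, offered as a hedged suggestion rather than a confirmed diagnosis: charon opens the configured log file itself, but it does not create missing parent directories, and SELinux may additionally deny writes to a non-standard location such as /var/log/strongswan (check with `ausearch -m avc` if in doubt). A minimal configuration sketch, with the path and log levels taken from the post above:

```
# /etc/strongswan/strongswan.conf (sketch; create the directory first:
#   mkdir -p /var/log/strongswan
# since charon will not create missing parent directories)
charon {
    filelog {
        /var/log/strongswan/strongswan.log {
            append = no
            default = 1
            flush_line = yes
        }
    }
    # optionally silence the duplicate output going to syslog
    # (level -1 disables logging for that facility)
    syslog {
        daemon {
            default = -1
        }
    }
}
```

If the file still stays empty after creating the directory, the SELinux audit log is the next place to look.]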
Re: [strongSwan] HA plugin: stopping charon does not remove IKE_SA/CHILD_SA from other nodes
Ok, I found my problem.

I replaced the UDP socket implementation of the HA plugin with my own (based
on corosync and receive/send queues). When I send an HA message, I queue a
job to perform the actual work. But strongSwan's shutdown sequence prevents
the job from being executed. In src/libcharon/daemon.c:

/**
 * Clean up all daemon resources
 */
static void destroy(private_daemon_t *this)
{
    /* terminate all idle threads */
    lib->processor->set_threads(lib->processor, 0);

    /* make sure nobody waits for a DNS query */
    lib->hosts->flush(lib->hosts);

    /* close all IKE_SAs */
    if (this->public.ike_sa_manager)
    {
        this->public.ike_sa_manager->flush(this->public.ike_sa_manager);
    }
    if (this->public.traps)
    {
        this->public.traps->flush(this->public.traps);
    }
    if (this->public.sender)
    {
        this->public.sender->flush(this->public.sender);
    }

    /* cancel all threads and wait for their termination */
    lib->processor->cancel(lib->processor);

If I instead terminate the idle threads just before cancelling all threads,
the job is executed and it works as expected: the SAs are deleted on the
passive node. I am not sure this is the correct fix for the problem, though?

Best Regards,
Emeric

----- Original Message -----
From: "Emeric POUPON"
To: "Martin Willi"
Cc: users@lists.strongswan.org
Sent: Monday, 2 March 2015 10:45:03
Subject: Re: [strongSwan] HA plugin: stopping charon does not remove IKE_SA/CHILD_SA from other nodes

>> In that particular configuration (no monitoring/heartbeat), stopping
>> charon on the active node should clear the connections on the remote
>> gateway (OK) and on the other node (not OK), right?
>
> The active node will delete the IKE_SA, and send a close event to the
> passive node.

That is what I don't understand: I don't see the close event on the passive
node. Therefore the IKE_SA and its associated CHILD_SA stay on the passive
node. Maybe I have missed something?

Regards,
Emeric
Re: [strongSwan] xAuth request for VICI
>> 1) Is there an alternative for 'leftfirewall=yes' in the VICI interface to
>> automatically set up iptables rules?
>
> There is no option for the default updown script, but you may manually
> specify "ipsec _updown" in the CHILD_SA "updown" configuration option.

Actually, the command equivalent to `leftfirewall=yes` is `ipsec _updown iptables`.

Regards,
Tobias
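[For reference, in swanctl.conf the updown command is set per CHILD_SA; a minimal sketch of where Tobias's command would go (the connection name "gw" and child name "net" are made-up placeholders, not from the thread):

```
# swanctl.conf (sketch; "gw" and "net" are hypothetical names)
connections {
    gw {
        children {
            net {
                # equivalent of leftfirewall=yes in ipsec.conf
                updown = ipsec _updown iptables
            }
        }
    }
}
```
]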
Re: [strongSwan] Nested IPsec Tunnels
Hi Ryan,

> I have an application scenario where I need to test nested IPsec tunnels.
> I googled and came up with some old threads talking about how this isn't
> supported with strongSwan unless I use two boxes, or a VM, to route the
> traffic through again. Is this still the case?

Yes, this is still the case. To manage its own tunnels, IKE traffic must be
exempted from the negotiated tunnel. strongSwan does this globally using
IPsec bypass policies. This implies that IKE never goes over any negotiated
tunnel, which prevents nested tunnels.

So unless you want to change that IPsec bypass policy behavior, running one
instance in a VM is probably the best option. Running two strongSwan
instances in their own network namespaces might also work, but I've never
tried that.

Regards
Martin
Re: [strongSwan] strongSwan 5.2+ disconnects clients after 1 hour
Hi Volker,

Yes, it was a similar problem. I'm using kernel 2.6.33.4 with the pppol2tp
module. I removed the module (modprobe -r pppol2tp) and the connection
became stable. xl2tpd now complains that I don't have kernel support for
L2TP, but as long as it works, I'm OK with that :)

Thank you.

Best regards,
Dan

On 3/2/2015 11:43 PM, Volker Rümelin wrote:
> Hello Dan,
>
> I am quite sure this is the same problem.
>
> https://lists.strongswan.org/pipermail/users/2013-December/005699.html
> https://lists.strongswan.org/pipermail/users/2013-December/005703.html
>
> Regards,
> Volker
>
>> Hi,
>>
>> strongSwan 5.2.1 (also tested with 5.2.0 and 5.2.2) on Slackware 13.1.
>> L2TP/IPsec, using PSK with xl2tpd.
>>
>> After the initial successful connection, the client (Windows 7 or 8)
>> tries to rekey after ~1 hour and fails.
>>
>> The debug log is here: http://pastebin.com/akuAYEDn
>>
>> /etc/ipsec.conf:
>> conn vpnserver
>>     type=transport
>>     authby=secret
>>     rekey=yes
>>     lifetime=2h
>>     ikelifetime=4h
>>     leftprotoport=udp/l2tp
>>     right=%any
>>     rightprotoport=%any
>>     auto=add
>>
>> Any ideas?
>>
>> Thank you.
>>
>> Best regards,
>> Dan Craciun