Dear Vamsi,

It was a long time ago when I wrote the original ipsec code, so I don't remember 
the details anymore.
For IKEv2, I would suggest using an external implementation, as the ikev2 stuff is 
just PoC-quality code.

-- 
Damjan

> On 29 Jun 2018, at 18:38, Vamsi Krishna <vamsi...@gmail.com> wrote:
> 
> Hi Damjan, Dave,
> 
> Can you please also answer the questions I had in the email just before Jim 
> hijacked the thread?
> 
> Thanks
> Vamsi
> 
> On Fri, Jun 29, 2018 at 3:06 PM, Damjan Marion <dmar...@me.com> wrote:
> 
> Hi Jim,
> 
> Atomic add sounds like a reasonable solution to me...
> 
> -- 
> Damjan
> 
>> On 28 Jun 2018, at 09:26, Jim Thompson <j...@netgate.com> wrote:
>> 
>> All,
>> 
>> I don't know if any of the previously-raised issues occur in real life.  
>> Goodness knows we've run billions of IPsec packets through the test harnesses 
>> (harnessi?) here without seeing them.
>> 
>> There are a couple issues with IPsec and multicore that haven't been raised, 
>> however, so I'm gonna hijack the thread.
>> 
>> If multiple worker threads are configured in VPP, it seems like there's the 
>> potential for problems with IPsec where the sequence number or replay window 
>> for an SA could get stomped on by two threads trying to update them at the 
>> same time. We assume that this issue is well known, since the following comment 
>> appears at line 173 of src/vnet/ipsec/esp.h:
>> 
>>     /* TODO seq increment should be atomic to be accessed by multiple workers */
>> 
>> See: https://github.com/FDio/vpp/blob/master/src/vnet/ipsec/esp.h#L173
>> 
>> We've asked if anyone is working on this, and we are willing to try and fix it, 
>> but we would need some direction on the best way to accomplish it.
>> 
>> We could try to use locking, which would be straightforward but would add 
>> overhead.  Maybe that overhead could be offset somewhat by requesting a block of 
>> sequence numbers up front for all of the packets being processed, instead of 
>> taking a sequence number and incrementing it as each packet is processed.
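>> 
>> (Purely to illustrate that idea, not actual VPP code: whether the block is 
>> reserved under a lock or with a single atomic add, the shared counter is only 
>> touched once per frame.  A minimal sketch of the atomic form, with made-up 
>> names, using the GCC builtin discussed in the next paragraph.)
>> 
>>   #include <stdint.h>
>> 
>>   /* Hypothetical sketch: reserve 'n_packets' sequence numbers for a whole
>>    * frame with one atomic operation.  The builtin returns the value held
>>    * *before* the add, so the caller owns [old + 1, old + n_packets]. */
>>   static inline uint32_t
>>   esp_seq_reserve_block (volatile uint32_t *sa_seq, uint32_t n_packets)
>>   {
>>     return __sync_fetch_and_add (sa_seq, n_packets);
>>   }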
>> 
>> There is also the clib_smp_atomic_add() call, which invokes 
>> __sync_fetch_and_add(addr, increment).  This is a GCC built-in that uses a 
>> memory barrier to avoid obtaining a lock.  We're not sure if there are 
>> drawbacks to using this.
>> 
>> See: http://gcc.gnu.org/onlinedocs/gcc-4.4.3/gcc/Atomic-Builtins.html
>> 
>> GRE uses clib_smp_atomic_add() for sequence number processing, see 
>> src/vnet/gre/gre.c#L409 and src/vnet/gre/gre.c#L421
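>> 
>> (Again only a sketch of the per-packet variant, mirroring the gre.c pattern; 
>> the variable names are illustrative and this is not a proposed patch.  Since 
>> clib_smp_atomic_add() returns the value before the add, adding 1 gives the 
>> sequence number this packet owns.)
>> 
>>   /* Sketch of the call site: one atomic fetch-and-add per packet instead
>>    * of a plain, racy 'sa0->seq++'. */
>>   u32 seq = clib_smp_atomic_add (&sa0->seq, 1) + 1;
>>   esp0->seq = clib_host_to_net_u32 (seq);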
>> 
>> Finally, there seem to be issues around AES-GCM nonce processing when 
>> operating multi-threaded.  If it is the nonce processing, it can probably (also) 
>> be addressed via clib_smp_atomic_add(), but we don't know yet.
>> 
>> We've raised these before, but haven't received much in the way of response. 
>>  Again, we're willing to work on these, but would like a bit of 'guidance' 
>> from vpp-dev.
>> 
>> Thanks,
>> 
>> Jim (and the rest of Netgate)
>> 
>> 
>> On Thu, Jun 28, 2018 at 1:44 AM, Vamsi Krishna <vamsi...@gmail.com> wrote:
>> Hi Damjan, Dave, 
>> 
>> Thanks for the quick reply. 
>> 
>> It is really helpful. So the barrier ensures that the IPSec data structure 
>> access is thread-safe.
>> 
>> I have a few more questions on the IPSec implementation.
>> 1. The inbound SA lookup (in ipsec-input) actually walks the inbound 
>> policies for the given spd id linearly and matches a policy; the SA is 
>> then picked based on the matching policy.
>>      This could instead have been an SAD hash table keyed on (SPI, dst address, 
>> proto (ESP or AH)), so that the SA could be looked up from the hash on 
>> receiving an ESP packet (a sketch of such a key follows these questions).
>>      Is there a particular reason it is implemented as a linear policy 
>> match?
>> 
>> 2. There is also an IKEv2 responder implementation that adds/deletes IPSec 
>> tunnel interfaces. How does this work? Is there any documentation that can 
>> be referred to?
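>> 
>> (To make question 1 concrete, a rough sketch of what such a key could look 
>> like.  This is not how VPP does it today; the type and function names below 
>> are invented for illustration, and in VPP the table itself could be one of 
>> the vppinfra bihash templates mapping this key to an SA pool index.)
>> 
>>   #include <stdint.h>
>> 
>>   /* Hypothetical SAD key: everything needed to find an inbound SA from a
>>    * received ESP/AH packet with one hash lookup, instead of walking the
>>    * SPD policies linearly. */
>>   typedef struct
>>   {
>>     uint32_t spi;       /* from the ESP/AH header (network order)      */
>>     uint32_t dst_addr;  /* outer IPv4 destination; IPv6 would widen it */
>>     uint8_t  proto;     /* 50 = ESP, 51 = AH                           */
>>     uint8_t  pad[3];    /* keep the key a fixed, hashable size         */
>>   } sad_key_t;
>> 
>>   /* Lookup returning 0 on success and filling the SA pool index. */
>>   int sad_hash_lookup (const sad_key_t *key, uint32_t *sa_index);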
>> 
>> Thanks
>> Krishna
>> 
>> On Wed, Jun 27, 2018 at 6:23 PM, Dave Barach (dbarach) <dbar...@cisco.com> wrote:
>> +1.
>> 
>>  
>> 
>> To amplify a bit: all binary API messages are processed with worker threads 
>> paused in a barrier sync, unless the API message has been explicitly marked 
>> thread-safe.
>> 
>>  
>> 
>> Here is the relevant code in 
>> .../src/vlibapi/api_shared.c:vl_api_msg_handler_with_vm_node(...)
>> 
>>  
>> 
>>       if (!am->is_mp_safe[id])
>>         {
>>           vl_msg_api_barrier_trace_context (am->msg_names[id]);
>>           vl_msg_api_barrier_sync ();
>>         }
>> 
>>       (*handler) (the_msg, vm, node);
>> 
>>       if (!am->is_mp_safe[id])
>>         vl_msg_api_barrier_release ();
>> 
>>  
>> 
>> Typical example of marking a message mp-safe:
>> 
>>  
>> 
>>   api_main_t *am = &api_main;
>>   ...
>>   am->is_mp_safe[VL_API_MEMCLNT_KEEPALIVE_REPLY] = 1;
>> 
>>  
>> 
>> The debug CLI uses the same scheme. Unless otherwise marked mp-safe, debug 
>> CLI commands are executed with worker threads paused in a barrier sync.
>> 
>>  
>> 
>> HTH... Dave
>> 
>>  
>> 
>> -----Original Message-----
>> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion
>> Sent: Wednesday, June 27, 2018 6:59 AM
>> To: Vamsi Krishna <vamsi...@gmail.com>
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Is VPP IPSec implementation thread safe?
>> 
>>  
>> 
>> ipsec data structures are updated during barrier sync, so there are no 
>> packets in flight...
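>> 
>> (Said another way, and only as a sketch reusing the helper names from Dave's 
>> snippet above rather than quoting the actual ipsec code: any control-plane 
>> change to the SA/SPD pools happens while the workers are parked.)
>> 
>>   vl_msg_api_barrier_sync ();     /* park all worker threads              */
>>   /* ... add/delete SAs, grow pools, rewrite SPD entries, etc. ...        */
>>   vl_msg_api_barrier_release ();  /* workers resume packet processing     */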
>> 
>>  
>> 
>>  
>> 
>> > On 27 Jun 2018, at 07:45, Vamsi Krishna <vamsi...@gmail.com> wrote:
>> 
>> >
>> 
>> > Hi ,
>> 
>> >
>> 
>> > I have looked at the ipsec code in VPP and am trying to understand how it 
>> > works in a multi-threaded environment. I noticed that the data structures for 
>> > the spd, sad and tunnel interfaces are pools, and there are no locks to prevent 
>> > race conditions.
>> 
>> >
>> 
>> > For instance, the ipsec-input node passes an SA index to the esp-encrypt node, 
>> > and the esp-encrypt node looks up the SA in the sad pool. But while the packet 
>> > is passed from one node to another, the entry at that SA index may be changed 
>> > or deleted. The same seems to be true for dpdk-esp-encrypt and 
>> > dpdk-esp-decrypt. How are these cases handled? Can the implementation be 
>> > used in a multi-threaded environment?
>> 
>> >
>> 
>> > Please help me understand the IPSec implementation.
>> 
>> >
>> 
>> > Thanks
>> 
>> > Krishna
>> 