[vpp-dev] Is it possible to span a tunnel interface?

2021-07-18 Thread
Packets sent/received on a tunnel interface do not go through the device-input/interface-output nodes. Is there a way to SPAN a tunnel interface?
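
For reference, mirroring is normally configured with the SPAN feature roughly as below (a sketch from memory of the span CLI; the tap destination is illustrative, and whether a tunnel such as gre1 is accepted as the SPAN source is exactly the open question here):

    create tap
    set interface span gre1 destination tap0 both
    show interface span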






Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread

    set interface state eth0 up
    set interface ip addr eth0 1.1.1.1/24
    create gre tunnel src 1.1.1.1 instance 1 multipoint
    set interface state gre1 up
    set interface ip addr gre1 2.1.1.2/32
    create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
    ip route add 3.3.3.3/32 via  gre1
    create teib  gre1 peer 4.4.4.4 nh 1.1.1.3
    ip route add 4.4.4.4/32 via  gre1

this config works.
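
One way to verify the result (a sketch using the show commands quoted elsewhere in this thread; the ping assumes the underlay peers are actually reachable):

    show teib
    show ip fib 3.3.3.3/32
    show adj
    ping 3.3.3.3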

On 2021/3/17 5:28 PM, Vijay Kumar Nagaraj wrote:


Hi Yedg,

Gentle reminder!!

Hope you are doing fine.

I am trying mGRE for a project at Microsoft, but I don't have much idea about the exact config. I followed the mGRE example on the fd.io wiki page, but VPP crashes when I configure the multipoint tunnel, set up the route, and try to ping the destination host from VPP.


Can you please share your mGRE config if it is working?

*From:* Vijay Kumar N
*Sent:* 15 March 2021 11:09
*To:* 'y...@wangsu.com' 
*Cc:* vjkumar2...@gmail.com
*Subject:* RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi Yedg,

Hope you are doing fine. I saw your recent query on the vpp mailing list.

Were you able to successfully test the mGRE feature?

Has the below config worked for you after Neale’s reply?

I am trying mGRE for a project at Microsoft, but I don't have much idea about the exact config. I followed the mGRE example on the fd.io wiki page, but VPP crashes when I configure the multipoint tunnel, set up the route, and try to ping the destination host from VPP.


Can you please share your mGRE config if it is working?

Regards.

-- Forwarded message -
From: *Neale Ranns* <ne...@graphiant.com>
Date: Mon, Feb 22, 2021 at 8:47 PM
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
To: y...@wangsu.com, vpp-dev@lists.fd.io


*From:* vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io

*Date: *Monday, 22 February 2021 at 13:53
*To:* vpp-dev@lists.fd.io

*Subject: *[vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi:

 I tried to configure an mGRE interface following these steps, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


 Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1

3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it’s not a valid neighbour, hence the UNRESOLVED.


/neale
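
In other words, the peer address must either fall inside the tunnel interface's subnet or be reached via an explicit route over the tunnel. A sketch of both options, based on the working config posted elsewhere in this thread (addresses are illustrative):

Option A, make the peer on-link by widening the tunnel address:

    set interface ip addr gre1 2.1.1.2/24
    create teib gre1 peer 2.1.1.3 nh 1.1.1.1

Option B, keep the /32 and add an explicit route via the tunnel:

    set interface ip addr gre1 2.1.1.2/32
    create teib gre1 peer 3.3.3.3 nh 1.1.1.1
    ip route add 3.3.3.3/32 via gre1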




DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32 fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
 path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
   path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
 3.3.3.3 gre1
   [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
 Extensions:
  path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1

02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1

02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
   stacked-on entry:11:
 [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800


DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32








Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread

    set interface state eth0 up
    set interface ip addr eth0 1.1.1.1/24
    create gre tunnel src 1.1.1.1 instance 1 multipoint
    set interface state gre1 up
    set interface ip addr gre1 2.1.1.2/32
    create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
    ip route add 3.3.3.3/32 via  gre1
    create teib  gre1 peer 4.4.4.4 nh 1.1.1.3
    ip route add 4.4.4.4/32 via  gre1

This works.


On 2021/3/15 1:44 PM, Vijay Kumar Nagaraj wrote:


Hi Yedg,

Hope you are doing fine. I saw your recent query on the vpp mailing list.

Were you able to successfully test the mGRE feature?

Has the below config worked for you after Neale’s reply?

I am trying mGRE for a project at Microsoft, but I don't have much idea about the exact config. I followed the mGRE example on the fd.io wiki page, but VPP crashes when I configure the multipoint tunnel, set up the route, and try to ping the destination host from VPP.


Can you please share your mGRE config if it is working?

Regards.

-- Forwarded message -
From: *Neale Ranns* <ne...@graphiant.com>
Date: Mon, Feb 22, 2021 at 8:47 PM
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
To: y...@wangsu.com, vpp-dev@lists.fd.io


*From:* vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io

*Date: *Monday, 22 February 2021 at 13:53
*To:* vpp-dev@lists.fd.io

*Subject: *[vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi:

 I tried to configure an mGRE interface following these steps, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


 Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1

3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it’s not a valid neighbour, hence the UNRESOLVED.


/neale




DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32 fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
 path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
   path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
 3.3.3.3 gre1
   [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
 Extensions:
  path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1

02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1

02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
   stacked-on entry:11:
 [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800


DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32








[vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-02-22 Thread

Hi:

    I tried to configure an mGRE interface following these steps, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


    Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1


DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel 
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1, 
default-route:1, ]

3.3.3.3/32 fib:0 index:16 locks:4
  adjacency refs:1 entry-flags:attached, 
src-flags:added,contributing,active, cover:0

    path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
  path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop: 
oper-flags:resolved,

    3.3.3.3 gre1
  [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

 stacked-on entry:11:
   [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

    Extensions:
 path:25
  recursive-resolution refs:1 src-flags:added, cover:-1

 forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1 
02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1 
02fe049eea920806

[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 02fe21058f7502fe049eea920800
[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800



DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32





Re: [vpp-dev] why tunnel interfaces do not support device-input feature?

2020-11-19 Thread
Thanks for your reply, I got it, but it is a little inconvenient if one wants to add features based on an interface.


Instead, one has to add the feature on the ip4-output/ip6-output arc or on the ip4-unicast/ip6-unicast arc (twice, once per address family), and even then there are packets that pass through the interface but not the ip4/ip6 nodes.
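
For reference, registering a node on one of those arcs for a given interface is normally done through vnet_feature_enable_disable(); a minimal sketch, assuming a hypothetical feature node called "my-feature" that has already been registered on the ip4-unicast arc:

#include <vnet/feature/feature.h>

/* Enable (or, with enable = 0, disable) the hypothetical "my-feature"
 * node on the ip4-unicast arc for one interface. */
static clib_error_t *
my_feature_enable (u32 sw_if_index, int enable)
{
  int rv = vnet_feature_enable_disable ("ip4-unicast", "my-feature",
                                        sw_if_index, enable, 0, 0);
  if (rv)
    return clib_error_return (0, "vnet_feature_enable_disable returned %d", rv);
  return 0;
}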



On 2020/11/18 6:15 PM, Neale Ranns (nranns) wrote:

Hi Ye,

Some comments inline...

On 17/11/2020 02:34, "vpp-dev@lists.fd.io on behalf of 叶东岗"  wrote:

 Hi all:

 Why do tunnel interfaces not support the device-input feature arc?

No one has asked for/contributed such support.  If you're volunteering, here's 
some advice.

Taking the feature arc always costs performance, but we accept that. What is 
harder to accept is a performance degradation when there are no features 
configured.

Devices are 'physical' interfaces; they represent an interface from VPP to the 
external world. This means they are read by nodes in poll mode, one device at a 
time. They therefore have the luxury of knowing that all packets in the 
vector/frame come from the same device. Virtual interfaces don't have that 
luxury, so the check for 'are there features on the arc' would be per buffer, 
not per frame; this would be a noticeable performance cost.
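
A minimal sketch of that difference, using hypothetical stand-ins (has_features(), run_feature_arc(), fast_path()) rather than the real VPP API:

/* Hypothetical illustration of per-frame vs per-buffer feature checks. */
typedef struct { unsigned int sw_if_index; } buf_t;

static int  has_features (unsigned int sw_if_index) { (void) sw_if_index; return 0; }
static void run_feature_arc (buf_t *b) { (void) b; }
static void fast_path (buf_t *b) { (void) b; }

/* A physical device is polled one device at a time, so every buffer in the
 * frame shares one sw_if_index and the arc check is done once per frame. */
static void
device_input_frame (unsigned int sw_if_index, buf_t **bufs, int n)
{
  int i, feat = has_features (sw_if_index);
  for (i = 0; i < n; i++)
    {
      if (feat)
        run_feature_arc (bufs[i]);
      else
        fast_path (bufs[i]);
    }
}

/* Buffers arriving on virtual (tunnel) interfaces may belong to different
 * tunnels within one frame, so the same check would repeat for every buffer. */
static void
tunnel_input_frame (buf_t **bufs, int n)
{
  int i;
  for (i = 0; i < n; i++)
    {
      if (has_features (bufs[i]->sw_if_index))
        run_feature_arc (bufs[i]);
      else
        fast_path (bufs[i]);
    }
}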

 Why do ESP packets not go through the ipsec interface's "interface-output" node?

The actions for TX on a virtual interface are different. The equivalent node is 
'adj-midchain-tx'. Running the 'interface-output' arc here would be possible, 
with a negligible performance cost because the adj can cache the feature arc's 
state.
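
A rough sketch of that caching idea; the structure and helpers below are hypothetical, not the actual adj-midchain code:

/* Hypothetical: cache the 'interface-output arc has features' answer on the
 * midchain adjacency when it is (re)stacked, so adj-midchain-tx only has to
 * test a cached flag per packet. */
typedef struct
{
  unsigned int  sw_if_index;
  unsigned char output_arc_has_features; /* refreshed when the arc config changes */
} midchain_adj_sketch_t;

/* Stand-in for the real per-interface arc lookup. */
static int
interface_output_arc_has_features (unsigned int sw_if_index)
{
  (void) sw_if_index;
  return 0;
}

/* Called when the adjacency is stacked/updated: cache the arc state. */
static void
midchain_adj_refresh (midchain_adj_sketch_t *adj)
{
  adj->output_arc_has_features =
    interface_output_arc_has_features (adj->sw_if_index) != 0;
}

/* Per packet in adj-midchain-tx: a single cached flag decides whether to
 * divert into the interface-output feature arc or take the fast path. */
static int
midchain_tx_take_feature_arc (const midchain_adj_sketch_t *adj)
{
  return adj->output_arc_has_features;
}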

/neale

 I think it would be no bad idea to keep feature behaviour consistent across
 all interface types, in spite of a little performance degradation.


 Best regards,
 Ye Donggang








[vpp-dev] why tunnel interfaces do not support device-input feature?

2020-11-16 Thread

Hi all:

Why do tunnel interfaces not support the device-input feature arc?

Why do ESP packets not go through the ipsec interface's "interface-output" node?


I think it would be no bad idea to keep feature behaviour consistent across all interface types, in spite of a little performance degradation.



Best regards,
Ye Donggang





[vpp-dev] dpdk interface doesn't support adaptive rx-mode

2020-11-10 Thread
Hi all:

  Why doesn't the dpdk interface support adaptive rx-mode? How could it be supported? Thanks.
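
For context, the rx-mode is switched per interface (and optionally per queue) roughly as below; the rx-mode command itself appears in the mGRE config above, while the adaptive keyword and show interface rx-placement are from memory. Whether the dpdk plugin accepts adaptive/interrupt depends on the PMD's interrupt support, which is the open question here:

    set interface rx-mode memif1/0 adaptive
    set interface rx-mode memif1/0 queue 0 interrupt
    show interface rx-placement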



Re: [vpp-dev] vpp19.08.2 crypto_ia32 do not support aes-gcm icv_size 8/12 crypto

2020-08-10 Thread
I get it: in the tag check, the (r == T) passed to _mm_movemask_epi8 should be replaced with _mm_cmpeq_epi8 (r, T).


  /* check tag */
  u16 tag_mask = tag_len ? (1 << tag_len) - 1 : 0xffff;
  r = _mm_loadu_si128 ((__m128i *) tag);
  if (_mm_movemask_epi8 (r == T) != tag_mask) { // what is this?  it will return 0 when tag_len equals 12
      return 0;
  }
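
A sketch of the corrected check with the byte-wise compare suggested above; the extra mask keeps bytes beyond the ICV length out of the comparison (this is an illustration, not necessarily what later VPP releases ended up doing):

  /* check tag: byte-wise compare, ignoring bytes beyond the ICV length */
  u16 tag_mask = tag_len ? (1 << tag_len) - 1 : 0xffff;
  r = _mm_loadu_si128 ((__m128i *) tag);
  if ((_mm_movemask_epi8 (_mm_cmpeq_epi8 (r, T)) & tag_mask) != tag_mask)
    return 0;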


----- Original Message -----
From: "叶东岗"
Sent: 2020-08-07 10:37:24 (Friday)
To: damar...@cisco.com
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp19.08.2 crypto_ia32 do not support aes-gcm icv_size 8/12 crypto






VPP 19.08.2 crypto_ia32 does not support aes-gcm with icv_size 8/12. Any ideas?







static_always_inline int

aes_gcm (const u8 * in, u8 * out, const u8 * addt, const u8 * iv, u8 * tag,
u32 data_bytes, u32 aad_bytes, u8 tag_len, aes_gcm_key_data_t * kd,
int aes_rounds, int is_encrypt)
{
  int i;
  __m128i r, Y0, T = { };
  ghash_data_t _gd, *gd = &_gd;

  _mm_prefetch (iv, _MM_HINT_T0);
  _mm_prefetch (in, _MM_HINT_T0);
  _mm_prefetch (in + CLIB_CACHE_LINE_BYTES, _MM_HINT_T0);

  /* calculate ghash for AAD - optimized for ipsec common cases */
  if (aad_bytes == 8)
T = aesni_gcm_ghash (T, kd, (__m128i *) addt, 8);
  else if (aad_bytes == 12)
T = aesni_gcm_ghash (T, kd, (__m128i *) addt, 12);
  else
T = aesni_gcm_ghash (T, kd, (__m128i *) addt, aad_bytes);

  /* initalize counter */
  Y0 = _mm_loadu_si128 ((__m128i *) iv);
  Y0 = _mm_insert_epi32 (Y0, clib_host_to_net_u32 (1), 3);

  /* ghash and encrypt/edcrypt  */
  if (is_encrypt)
T = aesni_gcm_enc (T, kd, Y0, in, out, data_bytes, aes_rounds);
  else
T = aesni_gcm_dec (T, kd, Y0, in, out, data_bytes, aes_rounds);

  _mm_prefetch (tag, _MM_HINT_T0);

  /* Finalize ghash */
  r[0] = data_bytes;
  r[1] = aad_bytes;

  /* bytes to bits */
  r <<= 3;

  /* interleaved computation of final ghash and E(Y0, k) */
  ghash_mul_first (gd, r ^ T, kd->Hi[0]);
  r = kd->Ke[0] ^ Y0;
  for (i = 1; i < 5; i += 1)
r = _mm_aesenc_si128 (r, kd->Ke[i]);
  ghash_reduce (gd);
  ghash_reduce2 (gd);
  for (; i < 9; i += 1)
r = _mm_aesenc_si128 (r, kd->Ke[i]);
  T = ghash_final (gd);
  for (; i < aes_rounds; i += 1)
r = _mm_aesenc_si128 (r, kd->Ke[i]);
  r = _mm_aesenclast_si128 (r, kd->Ke[aes_rounds]);
  T = aesni_gcm_bswap (T) ^ r;

  /* tag_len 16 -> 0 */
  tag_len &= 0xf;

  if (is_encrypt)
{
  /* store tag */
  if (tag_len)
    aesni_gcm_store_partial ((__m128i *) tag, T, (1 << tag_len) - 1); // must be tag_en
  else
_mm_storeu_si128 ((__m128i *) tag, T);
}
  else
{
  /* check tag */
  u16 tag_mask = tag_len ? (1 << tag_len) - 1 : 0xffff;
  r = _mm_loadu_si128 ((__m128i *) tag);
  if (_mm_movemask_epi8 (r == T) != tag_mask) { // what is this?  it will return 0 when tag_len equals 12
      return 0;
  }
}
  return 1;
}



[vpp-dev] vpp19.08.2 crypto_ia32 do not support aes-gcm icv_size 8/12 crypto

2020-08-06 Thread



VPP 19.08.2 crypto_ia32 does not support aes-gcm with icv_size 8/12. Any ideas?




static_always_inline int

aes_gcm (const u8 * in, u8 * out, const u8 * addt, const u8 * iv, u8 * tag,
u32 data_bytes, u32 aad_bytes, u8 tag_len, aes_gcm_key_data_t * kd,
int aes_rounds, int is_encrypt)
{
  int i;
  __m128i r, Y0, T = { };
  ghash_data_t _gd, *gd = &_gd;

  _mm_prefetch (iv, _MM_HINT_T0);
  _mm_prefetch (in, _MM_HINT_T0);
  _mm_prefetch (in + CLIB_CACHE_LINE_BYTES, _MM_HINT_T0);

  /* calculate ghash for AAD - optimized for ipsec common cases */
  if (aad_bytes == 8)
T = aesni_gcm_ghash (T, kd, (__m128i *) addt, 8);
  else if (aad_bytes == 12)
T = aesni_gcm_ghash (T, kd, (__m128i *) addt, 12);
  else
T = aesni_gcm_ghash (T, kd, (__m128i *) addt, aad_bytes);

  /* initalize counter */
  Y0 = _mm_loadu_si128 ((__m128i *) iv);
  Y0 = _mm_insert_epi32 (Y0, clib_host_to_net_u32 (1), 3);

  /* ghash and encrypt/edcrypt  */
  if (is_encrypt)
T = aesni_gcm_enc (T, kd, Y0, in, out, data_bytes, aes_rounds);
  else
T = aesni_gcm_dec (T, kd, Y0, in, out, data_bytes, aes_rounds);

  _mm_prefetch (tag, _MM_HINT_T0);

  /* Finalize ghash */
  r[0] = data_bytes;
  r[1] = aad_bytes;

  /* bytes to bits */
  r <<= 3;

  /* interleaved computation of final ghash and E(Y0, k) */
  ghash_mul_first (gd, r ^ T, kd->Hi[0]);
  r = kd->Ke[0] ^ Y0;
  for (i = 1; i < 5; i += 1)
r = _mm_aesenc_si128 (r, kd->Ke[i]);
  ghash_reduce (gd);
  ghash_reduce2 (gd);
  for (; i < 9; i += 1)
r = _mm_aesenc_si128 (r, kd->Ke[i]);
  T = ghash_final (gd);
  for (; i < aes_rounds; i += 1)
r = _mm_aesenc_si128 (r, kd->Ke[i]);
  r = _mm_aesenclast_si128 (r, kd->Ke[aes_rounds]);
  T = aesni_gcm_bswap (T) ^ r;

  /* tag_len 16 -> 0 */
  tag_len &= 0xf;

  if (is_encrypt)
{
  /* store tag */
  if (tag_len)
    aesni_gcm_store_partial ((__m128i *) tag, T, (1 << tag_len) - 1); // must be tag_en
  else
_mm_storeu_si128 ((__m128i *) tag, T);
}
  else
{
  /* check tag */
  u16 tag_mask = tag_len ? (1 << tag_len) - 1 : 0xffff;
  r = _mm_loadu_si128 ((__m128i *) tag);
  if (_mm_movemask_epi8 (r == T) != tag_mask) { // what is this?  it will return 0 when tag_len equals 12
      return 0;
  }
}
  return 1;
}



[vpp-dev] assert when set ip addr at an interface and delete it at another interface

2020-05-13 Thread
root@ac15b50ac370:/# /usr/local/vpp20/bin/vppctl 
_____   _  ___ 
 __/ __/ _ \  (_)__| | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /(_)_/\___/   |___/_/  /_/


DBGvpp# create tap 
tap0
DBGvpp# create tap 
tap1
DBGvpp# set interface ip addr tap0 1.1.1.1/24 
DBGvpp# set interface ip addr del tap1 1.1.1.1/24 




/usr/local/vpp20/bin/vpp[17762]: /work/vpp/src/vnet/ip/ip4_forward.c:656 
(ip4_sw_interface_enable_disable) assertion 
`im->ip_enabled_by_sw_if_index[sw_if_index] > 0' fails
/usr/local/vpp20/bin/vpp[17762]: received signal SIGWINCH, PC 0x7f9c3421899d
/usr/local/vpp20/bin/vpp[17762]: #0  0x7f9c34ad7ca8 unix_signal_handler + 
0x26f
/usr/local/vpp20/bin/vpp[17762]: #1  0x7f9c347fd890 0x7f9c347fd890
/usr/local/vpp20/bin/vpp[17762]: #2  0x7f9c34135e97 gsignal + 0xc7
/usr/local/vpp20/bin/vpp[17762]: #3  0x7f9c34137801 abort + 0x141
/usr/local/vpp20/bin/vpp[17762]: #4  0x556ba0b8641a 0x556ba0b8641a
/usr/local/vpp20/bin/vpp[17762]: #5  0x7f9c3451ae02 debugger + 0x9
/usr/local/vpp20/bin/vpp[17762]: #6  0x7f9c3451b1e5 _clib_error + 0x2d4
/usr/local/vpp20/bin/vpp[17762]: #7  0x7f9c3526bf3b 
ip4_sw_interface_enable_disable + 0x1ad
/usr/local/vpp20/bin/vpp[17762]: #8  0x7f9c3526cda7 
ip4_add_del_interface_address_internal + 0xd6c
/usr/local/vpp20/bin/vpp[17762]: #9  0x7f9c3526d077 
ip4_add_del_interface_address + 0x36
/usr/local/vpp20/bin/vpp[17762]: #10 0x7f9c352555ba add_del_ip_address + 
0x157
/usr/local/vpp20/bin/vpp[17762]: #11 0x7f9c34a2ad2e 
vlib_cli_dispatch_sub_commands + 0xc41
/usr/local/vpp20/bin/vpp[17762]: #12 0x7f9c34a2abac 
vlib_cli_dispatch_sub_commands + 0xabf
/usr/local/vpp20/bin/vpp[17762]: #13 0x7f9c34a2abac 
vlib_cli_dispatch_sub_commands + 0xabf
/usr/local/vpp20/bin/vpp[17762]: #14 0x7f9c34a2abac 
vlib_cli_dispatch_sub_commands + 0xabf


git diff 1d61c2
diff --git a/src/vnet/ip/ip4_forward.c b/src/vnet/ip/ip4_forward.c
index ea78d5507..5d8be3621 100644
--- a/src/vnet/ip/ip4_forward.c
+++ b/src/vnet/ip/ip4_forward.c
@@ -779,7 +779,10 @@ ip4_add_del_interface_address_internal (vlib_main_t * vm,
  goto done;
}
 
-  ip_interface_address_del (lm, if_address_index, addr_fib);
+  error = ip_interface_address_del (lm, if_address_index, addr_fib,
+address_length, sw_if_index);
+  if (error)
+  goto done;
 }
   else
 {
diff --git a/src/vnet/ip/ip6_forward.c b/src/vnet/ip/ip6_forward.c
index 1d6c1b7f1..6b596dc69 100644
--- a/src/vnet/ip/ip6_forward.c
+++ b/src/vnet/ip/ip6_forward.c
@@ -428,7 +428,10 @@ ip6_add_del_interface_address (vlib_main_t * vm,
  goto done;
}
 
-  ip_interface_address_del (lm, if_address_index, addr_fib);
+  error = ip_interface_address_del (lm, if_address_index, addr_fib,
+address_length, sw_if_index);
+  if (error)
+goto done;
 }
   else
 {
diff --git a/src/vnet/ip/ip_interface.c b/src/vnet/ip/ip_interface.c
index 23c3df816..c6181ec68 100644
--- a/src/vnet/ip/ip_interface.c
+++ b/src/vnet/ip/ip_interface.c
@@ -90,14 +90,22 @@ ip_interface_address_add (ip_lookup_main_t * lm,
   return (NULL);
 }
 
-void
+clib_error_t *
 ip_interface_address_del (ip_lookup_main_t * lm,
- u32 address_index, void *addr_fib)
+ u32 address_index, void *addr_fib, u32 address_length,
+ u32 sw_if_index)
 {
   ip_interface_address_t *a, *prev, *next;
 
   a = pool_elt_at_index (lm->if_address_pool, address_index);
 
+  if (a->sw_if_index != sw_if_index) {
+  return clib_error_create ("%U not found for interface %U",
+ lm->format_address_and_length,
+ addr_fib, address_length,
+ format_vnet_sw_if_index_name,
+ vnet_get_main (), sw_if_index);
+  }
   if (a->prev_this_sw_interface != ~0)
 {
   prev = pool_elt_at_index (lm->if_address_pool,
@@ -121,6 +129,7 @@ ip_interface_address_del (ip_lookup_main_t * lm,
   mhash_unset (&lm->address_to_if_address_index, addr_fib,
   /* old_value */ 0);
   pool_put (lm->if_address_pool, a);
+  return NULL;
 }
 
 u8
diff --git a/src/vnet/ip/ip_interface.h b/src/vnet/ip/ip_interface.h
index f95b8deb0..95393381c 100644
--- a/src/vnet/ip/ip_interface.h
+++ b/src/vnet/ip/ip_interface.h
@@ -28,8 +28,9 @@ clib_error_t *ip_interface_address_add (ip_lookup_main_t * lm,
void *address,
u32 address_length,
u32 * result_index);
-void ip_interface_address_del (ip_lookup_main_t * lm,
-  u32 addr_index, void *address);
+clib_error_t *ip_interface_address_del (ip_lookup_main_t * lm,
+   u32 addr_index, void *address, u32 address_len,
+