Re: [vpp-dev] control-plane restarts, etc.

2018-08-28 Thread Jerome Tollet via Lists.Fd.Io
Hi Mike,
In such a situation, the CP has to run a state reconciliation process. You can find
examples in networking-vpp (Python), Ligato (Go) or Honeycomb (Java).
Jerome
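
A toy sketch of the dump-and-diff resync loop those agents implement (all names
below are hypothetical; a real agent would dump and re-program state over the
VPP binary API rather than compare int arrays):

#include <stdio.h>

static int
contains (const int *set, int n, int v)
{
  for (int i = 0; i < n; i++)
    if (set[i] == v)
      return 1;
  return 0;
}

int
main (void)
{
  /* state the control plane wants vs. state dumped from a restarted VPP */
  int want[] = { 10, 20, 30 };
  int have[] = { 20, 40 };
  int n_want = 3, n_have = 2;

  for (int i = 0; i < n_want; i++)   /* program what is missing */
    if (!contains (have, n_have, want[i]))
      printf ("add    object %d\n", want[i]);

  for (int i = 0; i < n_have; i++)   /* remove what is stale */
    if (!contains (want, n_want, have[i]))
      printf ("remove object %d\n", have[i]);

  return 0;   /* objects present on both sides are left untouched */
}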

From:  on behalf of "Bly, Mike" 
Date: Tuesday, 28 August 2018 at 21:51
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] control-plane restarts, etc.

Hello,

Can someone tell me the current VPP stance on warm restarts, configuration 
replays, etc? If the control plane is restarted for whatever reason, what is 
the expectation regarding replaying of configuration down to VPP and/or 
auditing of the active VPP configuration on a live system?

Regards,
Mike


[vpp-dev] control-plane restarts, etc.

2018-08-28 Thread Bly, Mike
Hello,

Can someone tell me the current VPP stance on warm restarts, configuration 
replays, etc? If the control plane is restarted for whatever reason, what is 
the expectation regarding replaying of configuration down to VPP and/or 
auditing of the active VPP configuration on a live system?

Regards,
Mike


Re: [vpp-dev] num-mbufs

2018-08-28 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> I cannot update wiki, it tells me that I'm trying to do suspicious login :)

My workaround is to attempt the login
from a less deeply nested wiki page, e.g. [1].

Vratko.

[1] https://wiki.fd.io/view/Main_Page

From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
Lists.Fd.Io
Sent: Tuesday, 2018-August-21 10:49
To: Bly, Mike 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] num-mbufs




On 20 Aug 2018, at 23:44, Bly, Mike mailto:m...@ciena.com>> 
wrote:

Hello,

I am looking to understand the math and reasoning behind the 16K DPDK MBUFs per 
socket allocation.

It is more an educated guess than reasoning and math.



From: vpp/src/plugins/dpdk/device/dpdk.h

#define NB_MBUF   (16<<10)

The following online documentation has a comment stating it is 32K per socket.
The above == 16K, which does not seem to align with the 32K statement.
https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model

The wiki is wrong, it should say 16K. I cannot update the wiki, it tells me that I'm
trying to do a suspicious login :)



Is there some basic math or VPP process understanding that can be shared?

The total number of buffers needs to be bigger than all buffers waiting in the rx
rings of all NICs plus the number of buffers in flight during one graph dispatch
node iteration.
I.e. in a simple case with 2 DPDK NICs, that means 1024 buffers on each ring
+ 2*256 buffers in flight. Also, at some places in the code we pre-allocate
buffers for performance reasons, so the number might be slightly higher.
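
As a back-of-the-envelope check with the numbers above (2 NICs, 1024
descriptors per RX ring, 256 buffers in flight per NIC; an illustration only,
not VPP code):

#include <stdio.h>

int
main (void)
{
  int n_nics = 2, rx_ring = 1024, in_flight = 256;
  int minimum = n_nics * rx_ring + n_nics * in_flight;   /* 2560 */
  printf ("rough minimum %d vs. default NB_MBUF %d\n", minimum, 16 << 10);
  return 0;
}

So the 16K default leaves plenty of headroom for deeper rings, more NICs and
the pre-allocations mentioned above.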


Is there an assumption as to the number of NICs or rx queues per NIC which 
drives this #?

yes, as explained above.


Are there assumptions being made relative to NIC port speed for this default 
value?

No, we have a default of 1024 buffers on the RX ring for the majority of DPDK NICs;
for virtio the number is 256.


Are there some online resources you can point me at to read and digest?

Source code :)



Regards,
Mike


--
Damjan



Re: [vpp-dev] TLS half open lock

2018-08-28 Thread Florin Coras via Lists.Fd.Io
Hi Ping,

I’ve already posted a comment ☺

Florin

From: "Yu, Ping" 
Date: Tuesday, August 28, 2018 at 8:20 AM
To: Florin Coras 
Cc: "Florin Coras (fcoras)" , "vpp-dev@lists.fd.io" 
, "Yu, Ping" 
Subject: RE: [vpp-dev] TLS half open lock

Yes, this is what I did to fix the code.

Please help review below patch, and besides adding the lock, and I also 
optimize a bit to reduce the lock scope.

https://gerrit.fd.io/r/14534

Ping

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Tuesday, August 28, 2018 10:58 PM
To: Yu, Ping 
Cc: Florin Coras (fcoras) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Yes, this probably happens because free is called by a different thread. As 
mentioned in my previous reply, could you try protecting with 
clib_rwlock_reader_lock (>half_open_rwlock); the branch that does not 
expand the pool?

Thanks,
Florin



On Aug 28, 2018, at 7:51 AM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hello, Florin,

From below info, we can see a real race condition happens.

thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149


[Time 0]

thread 2 free: 149
Thread 2 free 149 and now 149 is at the pool vec_len (_pool_var 
(p)->free_indices);

[Time 1 2 ]

Now thread 0 wants to get, and thread 3 wants to put at the same time.
Before thread 3 put, thread 0 get current vec_len: 149.
Thread 3 makes:  vec_add1 (_pool_var (p)->free_indices, _pool_var (l));
Thread 0 makes:  _vec_len (_pool_var (p)->free_indices) = _pool_var (l) - 1;


[Time 3]
Due to race condition, current vec_len is the same as time 0, and the old 
element 149 returns again.

Even though performance is important, and we still need to use lock to resolve 
this race condition.


From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Yu, Ping
Sent: Tuesday, August 28, 2018 5:29 PM
To: Florin Coras mailto:fcoras.li...@gmail.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io; Yu, Ping 
mailto:ping...@intel.com>>
Subject: Re: [vpp-dev] TLS half open lock

Hi, Florin,

Yes, you are right, and all alloc operation is performed by thread 0. An 
interesting thing is that if running “test echo clients nclients 300 uri 
tls://10.10.1.1/” in clients with 4 threads, I can easily catch the case 
one same index be alloc twice by thread 0.

thread 0: alloc: 145
thread 0: alloc: 69
thread 4 free: 151
thread 0: alloc: 151
thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149
thread 0: alloc: 58
thread 0: alloc: 9
thread 0: alloc: 29
thread 3 free: 146
thread 0: alloc: 146
thread 2 free: 153
thread 0: alloc: 144
thread 0: alloc: 153
thread 0: alloc: 124
thread 3 free: 25
thread 0: alloc: 25

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Tuesday, August 28, 2018 10:24 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The expectation is that all connects/listens come on the main thread (with the 
worker barrier held). In other words, we only need to support a one writer, 
multiple readers scenario.

Florin

On Aug 27, 2018, at 6:29 PM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hi, Florin,

To check if it is about to expand is also lockless. Is there any issue if two 
threads check the pool simultaneously, and just one slot is available? One code 
will do normal get, and the other thread is expanding the pool?

Thanks
Ping

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Florin Coras
Sent: Tuesday, August 28, 2018 12:51 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The current implementation only locks the half-open pool if the pool is about 
to expand. This is done to increase speed by avoiding unnecessary locking, 
i.e., if pool is not about to expand, it should be safe to get a new element 
from it without affecting readers. Now the thing to figure out is why this is 
happening. Does the slowdown due to the “big lock” avoid some race or is there 
more to it?

First of all, how many workers do you have configured and how many sessions are 
you allocating/connecting? Do you see failed connects?

Your tls_ctx_half_open_get line numbers don’t match my code. Did you by chance 
modify something else?

Thanks,
Florin




On Aug 27, 2018, at 9:22 AM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hello, all

Recently I found that the TLS half open lock is not well implemented, and if 
enabling multiple thread, there are chances to get the following core dump info 
in debug mode.

(gdb) where
#0  0x7f7a0848e428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7f7a0849002a in __GI_abort () at 

Re: [vpp-dev] [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP artifacts

2018-08-28 Thread pmi...@cisco.com via RT
Hello,

Yes, I think this will work long term. (I will still let others comment, but
for now it looks good to me.)

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited

-Original Message-
From: Vanessa Valderrama via RT  
Sent: Monday, August 27, 2018 6:27 PM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP artifacts

Ed developed a script that will allow us to keep/remove artifacts based on the 
number of days. So we will remove artifacts older than 30 days.  Please let me 
know if this meets your requirements.

Thank you,
Vanessa


On Fri Aug 10 09:14:10 2018, jgel...@cisco.com wrote:
> 
> Hello,
> 
> Could you, please, let us know if the solution proposed by Peter below 
> is possible to implement or we need to start thinking about another 
> (non-optimal) solution, e.g.
> 
> - compile vpp artefacts for every csit-vpp-functional job (time and 
> resource wasting)
> 
> - copy vpp artefacts needed for csit-vpp-functional jobs to one of 
> VIRL servers and use nfs/rsync/apache2 (just temporary solution as 
> VIRL servers are planned to be re-used when CSIT VIRL tests migration 
> to VPP_path and VPP_device tests is finished)
> 
> - copy vpp artefacts needed for csit-vpp-functional jobs to another 
> location on Nexus that is used to store csit reports (here we would 
> need solution for deleting obsolete artefacts)
> 
> Thanks,
> Jan
> 
> Jan Gelety
> Engineer - Software
> Cisco Systems, Inc.
> 
> -Original Message-
> From: csit-...@lists.fd.io  On Behalf Of Peter 
> Mikus via Lists.Fd.Io
> Sent: Wednesday, August 8, 2018 5:29 PM
> To: fdio-helpd...@rt.linuxfoundation.org
> Cc: csit-...@lists.fd.io
> Subject: Re: [csit-dev] [FD.io Helpdesk #56625] Nexus
> fd.io.master.centos7 VPP artifacts
> 
> Hello,
> 
> We are creating branches on weekly basis and CSIT being verified in 
> weekly job. So the question is if there is option to set "date" or 
> "number" when limiting repo.
> 
> Counting the cadence of up to 10 merges per day (artifacts posted on
> Nexus) then means that safe value is around 100-120 artifacts.
> This I am talking about master branch of VPP. Stable branches we 
> should be ok with just 10-15 artifacts.
> 
> Peter Mikus
> Engineer – Software
> Cisco Systems Limited
> 
> -Original Message-
> From: Vanessa Valderrama via RT [mailto:fdio- 
> helpd...@rt.linuxfoundation.org]
> Sent: Wednesday, August 08, 2018 5:22 PM
> To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp- 
> d...@lists.fd.io
> Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
> artifacts
> 
> Peter,
> 
> How many artifacts do you need us to retain for your testing?
> 
> Thank you,
> Vanessa
> 
> 
> On Mon Aug 06 04:53:29 2018, pmi...@cisco.com wrote:
> > Hello Vanessa,
> >
> > For CSIT it is not about release or not. We would need to increase 
> > cadence on our weekly jobs to daily. Currently CSIT jobs are all 
> > failing as VPP has more than 10-15 artifacts in week.
> > Our defined stable versions of VPP (updated once a week) are not in 
> > repo anymore or are obsoleting faster than we are updating. This is 
> > impacting everything.
> >
> > Right now we are *blocked* and we need to work new solution do adopt.
> >
> > One of the option is that we will have to start building VPP from 
> > scratch for every job as we cannot use artifacts anymore. This will 
> > cause huge overhead on infrastructure and execution times will 
> > extend as nexus acted as optimization for us.
> > Right now Nexus is not an option for us anymore. This also means 
> > that Nexus artifacts will not be tested by CSIT.
> >
> > Peter Mikus
> > Engineer – Software
> > Cisco Systems Limited
> >
> > -Original Message-
> >   From: Vanessa Valderrama via RT [mailto:fdio- 
> > helpd...@rt.linuxfoundation.org]
> > Sent: Tuesday, July 31, 2018 8:16 PM
> >  To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> > 
> >  Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp- 
> > d...@lists.fd.io
> >  Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
> > artifacts
> >
> > Peter,
> >
> > We need to make a decision on the number of artifacts to keep. I'd 
> > like to propose the following
> >
> > previous release repos - 10 packages per subproject master - 10 to 
> > 15 packages per subproject
> >
> > Thank you,
> > Vanessa
> >
> > On Tue Jun 05 00:51:02 2018, pmi...@cisco.com wrote:
> > > Hello Vanessa,
> > >
> > > Thank you for an explanation. Indeed this will impact certain 
> > > things that are planned like "automatic bisecting" (downloading 
> > > and testing range of artifacts). Let me analyze current situation 
> > > with CSIT team and get back to you.
> > >
> > > Peter Mikus
> > > Engineer – Software
> > > Cisco Systems Limited
> > >
> > >
> > > 

Re: [vpp-dev] [csit-dev] [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP artifacts

2018-08-28 Thread Jan Gelety -X via RT
+1

30 day artefact availability (except official release artefacts, of course) is 
enough for us.

Thanks,
Jan

-Original Message-
From: csit-...@lists.fd.io  On Behalf Of Peter Mikus via 
Lists.Fd.Io
Sent: Tuesday, August 28, 2018 8:57 AM
To: fdio-helpd...@rt.linuxfoundation.org
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
artifacts

Hello,

Yes I think this will work long term. (still will let others to comment but, 
for now looks good to me)

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited

-Original Message-
From: Vanessa Valderrama via RT 
Sent: Monday, August 27, 2018 6:27 PM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP artifacts

Ed developed a script that will allow us to keep/remove artifacts based on the 
number of days. So we will remove artifacts older than 30 days.  Please let me 
know if this meets your requirements.

Thank you,
Vanessa


On Fri Aug 10 09:14:10 2018, jgel...@cisco.com wrote:
> 
> Hello,
> 
> Could you, please, let us know if the solution proposed by Peter below 
> is possible to implement or we need to start thinking about another
> (non-optimal) solution, e.g.
> 
> - compile vpp artefacts for every csit-vpp-functional job (time and 
> resource wasting)
> 
> - copy vpp artefacts needed for csit-vpp-functional jobs to one of 
> VIRL servers and use nfs/rsync/apache2 (just temporary solution as 
> VIRL servers are planned to be re-used when CSIT VIRL tests migration 
> to VPP_path and VPP_device tests is finished)
> 
> - copy vpp artefacts needed for csit-vpp-functional jobs to another 
> location on Nexus that is used to store csit reports (here we would 
> need solution for deleting obsolete artefacts)
> 
> Thanks,
> Jan
> 
> Jan Gelety
> Engineer - Software
> Cisco Systems, Inc.
> 
> -Original Message-
> From: csit-...@lists.fd.io  On Behalf Of Peter 
> Mikus via Lists.Fd.Io
> Sent: Wednesday, August 8, 2018 5:29 PM
> To: fdio-helpd...@rt.linuxfoundation.org
> Cc: csit-...@lists.fd.io
> Subject: Re: [csit-dev] [FD.io Helpdesk #56625] Nexus
> fd.io.master.centos7 VPP artifacts
> 
> Hello,
> 
> We are creating branches on weekly basis and CSIT being verified in 
> weekly job. So the question is if there is option to set "date" or 
> "number" when limiting repo.
> 
> Counting the cadence of up to 10 merges per day (artifacts posted on
> Nexus) then means that safe value is around 100-120 artifacts.
> This I am talking about master branch of VPP. Stable branches we 
> should be ok with just 10-15 artifacts.
> 
> Peter Mikus
> Engineer – Software
> Cisco Systems Limited
> 
> -Original Message-
> From: Vanessa Valderrama via RT [mailto:fdio- 
> helpd...@rt.linuxfoundation.org]
> Sent: Wednesday, August 08, 2018 5:22 PM
> To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp- 
> d...@lists.fd.io
> Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
> artifacts
> 
> Peter,
> 
> How many artifacts do you need us to retain for your testing?
> 
> Thank you,
> Vanessa
> 
> 
> On Mon Aug 06 04:53:29 2018, pmi...@cisco.com wrote:
> > Hello Vanessa,
> >
> > For CSIT it is not about release or not. We would need to increase 
> > cadence on our weekly jobs to daily. Currently CSIT jobs are all 
> > failing as VPP has more than 10-15 artifacts in week.
> > Our defined stable versions of VPP (updated once a week) are not in 
> > repo anymore or are obsoleting faster than we are updating. This is 
> > impacting everything.
> >
> > Right now we are *blocked* and we need to work new solution do adopt.
> >
> > One of the option is that we will have to start building VPP from 
> > scratch for every job as we cannot use artifacts anymore. This will 
> > cause huge overhead on infrastructure and execution times will 
> > extend as nexus acted as optimization for us.
> > Right now Nexus is not an option for us anymore. This also means 
> > that Nexus artifacts will not be tested by CSIT.
> >
> > Peter Mikus
> > Engineer – Software
> > Cisco Systems Limited
> >
> > -Original Message-
> >   From: Vanessa Valderrama via RT [mailto:fdio- 
> > helpd...@rt.linuxfoundation.org]
> > Sent: Tuesday, July 31, 2018 8:16 PM
> >  To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> > 
> >  Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp- 
> > d...@lists.fd.io
> >  Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
> > artifacts
> >
> > Peter,
> >
> > We need to make a decision on the number of artifacts to keep. I'd 
> > like to propose the following
> >
> > previous release repos - 10 packages per subproject master - 10 to
> > 15 packages per subproject
> >
> > Thank you,
> > Vanessa
> >
> > On Tue Jun 05 00:51:02 2018, 

Re: [vpp-dev] TLS half open lock

2018-08-28 Thread Yu, Ping
Yes, this is what I did to fix the code.

Please help review the patch below; besides adding the lock, I also
optimized a bit to reduce the lock scope.

https://gerrit.fd.io/r/14534

Ping

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Tuesday, August 28, 2018 10:58 PM
To: Yu, Ping 
Cc: Florin Coras (fcoras) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Yes, this probably happens because free is called by a different thread. As 
mentioned in my previous reply, could you try protecting with 
clib_rwlock_reader_lock (>half_open_rwlock); the branch that does not 
expand the pool?

Thanks,
Florin


On Aug 28, 2018, at 7:51 AM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hello, Florin,

From below info, we can see a real race condition happens.

thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149


[Time 0]

thread 2 free: 149
Thread 2 free 149 and now 149 is at the pool vec_len (_pool_var 
(p)->free_indices);

[Time 1 2 ]

Now thread 0 wants to get, and thread 3 wants to put at the same time.
Before thread 3 put, thread 0 get current vec_len: 149.
Thread 3 makes:  vec_add1 (_pool_var (p)->free_indices, _pool_var (l));
Thread 0 makes:  _vec_len (_pool_var (p)->free_indices) = _pool_var (l) - 1;


[Time 3]
Due to race condition, current vec_len is the same as time 0, and the old 
element 149 returns again.

Even though performance is important, and we still need to use lock to resolve 
this race condition.


From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Yu, Ping
Sent: Tuesday, August 28, 2018 5:29 PM
To: Florin Coras mailto:fcoras.li...@gmail.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io; Yu, Ping 
mailto:ping...@intel.com>>
Subject: Re: [vpp-dev] TLS half open lock

Hi, Florin,

Yes, you are right, and all alloc operation is performed by thread 0. An 
interesting thing is that if running “test echo clients nclients 300 uri 
tls://10.10.1.1/” in clients with 4 threads, I can easily catch the case 
one same index be alloc twice by thread 0.

thread 0: alloc: 145
thread 0: alloc: 69
thread 4 free: 151
thread 0: alloc: 151
thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149
thread 0: alloc: 58
thread 0: alloc: 9
thread 0: alloc: 29
thread 3 free: 146
thread 0: alloc: 146
thread 2 free: 153
thread 0: alloc: 144
thread 0: alloc: 153
thread 0: alloc: 124
thread 3 free: 25
thread 0: alloc: 25

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Tuesday, August 28, 2018 10:24 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The expectation is that all connects/listens come on the main thread (with the 
worker barrier held). In other words, we only need to support a one writer, 
multiple readers scenario.

Florin

On Aug 27, 2018, at 6:29 PM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hi, Florin,

To check if it is about to expand is also lockless. Is there any issue if two 
threads check the pool simultaneously, and just one slot is available? One code 
will do normal get, and the other thread is expanding the pool?

Thanks
Ping

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Florin Coras
Sent: Tuesday, August 28, 2018 12:51 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The current implementation only locks the half-open pool if the pool is about 
to expand. This is done to increase speed by avoiding unnecessary locking, 
i.e., if pool is not about to expand, it should be safe to get a new element 
from it without affecting readers. Now the thing to figure out is why this is 
happening. Does the slowdown due to the “big lock” avoid some race or is there 
more to it?

First of all, how many workers do you have configured and how many sessions are 
you allocating/connecting? Do you see failed connects?

Your tls_ctx_half_open_get line numbers don’t match my code. Did you by chance 
modify something else?

Thanks,
Florin



On Aug 27, 2018, at 9:22 AM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hello, all

Recently I found that the TLS half open lock is not well implemented, and if 
enabling multiple thread, there are chances to get the following core dump info 
in debug mode.

(gdb) where
#0  0x7f7a0848e428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7f7a0849002a in __GI_abort () at abort.c:89
#2  0x00407f0b in os_panic () at 
/home/pyu4/git-home/vpp_clean/vpp/src/vpp/vnet/main.c:331
#3  0x7f7a08867bd0 in debugger () at 
/home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:84
#4  0x7f7a08868008 in 

Re: [vpp-dev] TLS half open lock

2018-08-28 Thread Florin Coras
Yes, this probably happens because free is called by a different thread. As
mentioned in my previous reply, could you try protecting the branch that does
not expand the pool with clib_rwlock_reader_lock (>half_open_rwlock)?

Thanks, 
Florin

> On Aug 28, 2018, at 7:51 AM, Yu, Ping  wrote:
> 
> Hello, Florin,
>  
> From below info, we can see a real race condition happens.
>  
> thread 2 free: 149
> thread 3 free: 155
> thread 0: alloc: 149
> thread 0: alloc: 149
>  
>  
> [Time 0]
>  
> thread 2 free: 149
> Thread 2 free 149 and now 149 is at the pool vec_len (_pool_var 
> (p)->free_indices); 
>  
> [Time 1 2 ]
>  
> Now thread 0 wants to get, and thread 3 wants to put at the same time.
> Before thread 3 put, thread 0 get current vec_len: 149.
> Thread 3 makes:  vec_add1 (_pool_var (p)->free_indices, _pool_var (l));
> Thread 0 makes:  _vec_len (_pool_var (p)->free_indices) = _pool_var (l) - 1;
>  
>  
> [Time 3]
> Due to race condition, current vec_len is the same as time 0, and the old 
> element 149 returns again.
>  
> Even though performance is important, and we still need to use lock to 
> resolve this race condition.  
>  
>   <>
>  <>From: vpp-dev@lists.fd.io  
> [mailto:vpp-dev@lists.fd.io ] On Behalf Of Yu, 
> Ping
> Sent: Tuesday, August 28, 2018 5:29 PM
> To: Florin Coras mailto:fcoras.li...@gmail.com>>
> Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
> vpp-dev@lists.fd.io ; Yu, Ping  >
> Subject: Re: [vpp-dev] TLS half open lock
>  
> Hi, Florin,
>  
> Yes, you are right, and all alloc operation is performed by thread 0. An 
> interesting thing is that if running “test echo clients nclients 300 uri 
> tls://10.10.1.1/ ” in clients with 4 threads, I can 
> easily catch the case one same index be alloc twice by thread 0.
>  
> thread 0: alloc: 145
> thread 0: alloc: 69
> thread 4 free: 151
> thread 0: alloc: 151
> thread 2 free: 149
> thread 3 free: 155
> thread 0: alloc: 149
> thread 0: alloc: 149
> thread 0: alloc: 58
> thread 0: alloc: 9
> thread 0: alloc: 29
> thread 3 free: 146
> thread 0: alloc: 146
> thread 2 free: 153
> thread 0: alloc: 144
> thread 0: alloc: 153
> thread 0: alloc: 124
> thread 3 free: 25
> thread 0: alloc: 25
>  
> From: Florin Coras [mailto:fcoras.li...@gmail.com 
> ] 
> Sent: Tuesday, August 28, 2018 10:24 AM
> To: Yu, Ping mailto:ping...@intel.com>>
> Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
> vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] TLS half open lock
>  
> Hi Ping, 
>  
> The expectation is that all connects/listens come on the main thread (with 
> the worker barrier held). In other words, we only need to support a one 
> writer, multiple readers scenario. 
>  
> Florin 
>  
> 
> On Aug 27, 2018, at 6:29 PM, Yu, Ping  > wrote:
>  
> Hi, Florin,
>  
> To check if it is about to expand is also lockless. Is there any issue if two 
> threads check the pool simultaneously, and just one slot is available? One 
> code will do normal get, and the other thread is expanding the pool?
>  
> Thanks
> Ping
>  
> From: vpp-dev@lists.fd.io  
> [mailto:vpp-dev@lists.fd.io ] On Behalf Of Florin 
> Coras
> Sent: Tuesday, August 28, 2018 12:51 AM
> To: Yu, Ping mailto:ping...@intel.com>>
> Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
> vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] TLS half open lock
>  
> Hi Ping, 
>  
> The current implementation only locks the half-open pool if the pool is about 
> to expand. This is done to increase speed by avoiding unnecessary locking, 
> i.e., if pool is not about to expand, it should be safe to get a new element 
> from it without affecting readers. Now the thing to figure out is why this is 
> happening. Does the slowdown due to the “big lock” avoid some race or is 
> there more to it?
>  
> First of all, how many workers do you have configured and how many sessions 
> are you allocating/connecting? Do you see failed connects? 
>  
> Your tls_ctx_half_open_get line numbers don’t match my code. Did you by 
> chance modify something else?
>  
> Thanks,
> Florin
> 
> 
> 
> On Aug 27, 2018, at 9:22 AM, Yu, Ping  > wrote:
>  
> Hello, all
>  
> Recently I found that the TLS half open lock is not well implemented, and if 
> enabling multiple thread, there are chances to get the following core dump 
> info in debug mode.
>  
> (gdb) where
> #0  0x7f7a0848e428 in __GI_raise (sig=sig@entry=6) at 
> ../sysdeps/unix/sysv/linux/raise.c:54
> #1  0x7f7a0849002a in __GI_abort () at abort.c:89
> #2  0x00407f0b in os_panic () at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vpp/vnet/main.c:331
> #3  0x7f7a08867bd0 in debugger () at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:84
> #4  

Re: [vpp-dev] TLS half open lock

2018-08-28 Thread Florin Coras
Hi Ping, 

Looks like a free happens just before on thread 2. Could you try protecting the 
allocation of a half-open ctx that does not expand the pool with a reader lock, 
to see if this keeps on happening?

Florin
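
For illustration, a rough sketch of that experiment in the half-open alloc
path (field and helper names are assumed from src/vnet/tls/tls.c of this
timeframe and may not match your tree exactly; Ping's patch at
https://gerrit.fd.io/r/14534 is the actual proposed change):

static u32
tls_ctx_half_open_alloc (void)
{
  tls_main_t *tm = &tls_main;
  u8 will_expand = 0;
  tls_ctx_t *ctx;
  u32 ctx_index;

  pool_get_aligned_will_expand (tm->half_open_ctx_pool, will_expand, 0);
  if (PREDICT_FALSE (will_expand && vlib_num_workers ()))
    {
      /* existing path: writer lock while the pool may be reallocated */
      clib_rwlock_writer_lock (&tm->half_open_rwlock);
      pool_get (tm->half_open_ctx_pool, ctx);
      ctx_index = ctx - tm->half_open_ctx_pool;
      clib_rwlock_writer_unlock (&tm->half_open_rwlock);
    }
  else
    {
      /* experiment: also take the lock when the pool does not expand, so a
         concurrent pool_put from a worker cannot interleave with the
         free_indices length update described later in this thread */
      clib_rwlock_reader_lock (&tm->half_open_rwlock);
      pool_get (tm->half_open_ctx_pool, ctx);
      ctx_index = ctx - tm->half_open_ctx_pool;
      clib_rwlock_reader_unlock (&tm->half_open_rwlock);
    }
  return ctx_index;
}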

> On Aug 28, 2018, at 2:28 AM, Yu, Ping  wrote:
> 
> Hi, Florin,
>  
> Yes, you are right, and all alloc operation is performed by thread 0. An 
> interesting thing is that if running “test echo clients nclients 300 uri 
> tls://10.10.1.1/ ” in clients with 4 threads, I can 
> easily catch the case one same index be alloc twice by thread 0.
>  
> thread 0: alloc: 145
> thread 0: alloc: 69
> thread 4 free: 151
> thread 0: alloc: 151
> thread 2 free: 149
> thread 3 free: 155
> thread 0: alloc: 149
> thread 0: alloc: 149
> thread 0: alloc: 58
> thread 0: alloc: 9
> thread 0: alloc: 29
> thread 3 free: 146
> thread 0: alloc: 146
> thread 2 free: 153
> thread 0: alloc: 144
> thread 0: alloc: 153
> thread 0: alloc: 124
> thread 3 free: 25
> thread 0: alloc: 25
>  
> From: Florin Coras [mailto:fcoras.li...@gmail.com] 
> Sent: Tuesday, August 28, 2018 10:24 AM
> To: Yu, Ping 
> Cc: Florin Coras (fcoras) ; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] TLS half open lock
>  
> Hi Ping, 
>  
> The expectation is that all connects/listens come on the main thread (with 
> the worker barrier held). In other words, we only need to support a one 
> writer, multiple readers scenario. 
>  
> Florin 
> 
> 
> On Aug 27, 2018, at 6:29 PM, Yu, Ping  > wrote:
>  
> Hi, Florin,
>  
> To check if it is about to expand is also lockless. Is there any issue if two 
> threads check the pool simultaneously, and just one slot is available? One 
> code will do normal get, and the other thread is expanding the pool?
>  
> Thanks
> Ping
>   <>
>  <>From: vpp-dev@lists.fd.io  
> [mailto:vpp-dev@lists.fd.io ] On Behalf Of Florin 
> Coras
> Sent: Tuesday, August 28, 2018 12:51 AM
> To: Yu, Ping mailto:ping...@intel.com>>
> Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
> vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] TLS half open lock
>  
> Hi Ping, 
>  
> The current implementation only locks the half-open pool if the pool is about 
> to expand. This is done to increase speed by avoiding unnecessary locking, 
> i.e., if pool is not about to expand, it should be safe to get a new element 
> from it without affecting readers. Now the thing to figure out is why this is 
> happening. Does the slowdown due to the “big lock” avoid some race or is 
> there more to it?
>  
> First of all, how many workers do you have configured and how many sessions 
> are you allocating/connecting? Do you see failed connects? 
>  
> Your tls_ctx_half_open_get line numbers don’t match my code. Did you by 
> chance modify something else?
>  
> Thanks,
> Florin
> 
> 
> 
> On Aug 27, 2018, at 9:22 AM, Yu, Ping  > wrote:
>  
> Hello, all
>  
> Recently I found that the TLS half open lock is not well implemented, and if 
> enabling multiple thread, there are chances to get the following core dump 
> info in debug mode.
>  
> (gdb) where
> #0  0x7f7a0848e428 in __GI_raise (sig=sig@entry=6) at 
> ../sysdeps/unix/sysv/linux/raise.c:54
> #1  0x7f7a0849002a in __GI_abort () at abort.c:89
> #2  0x00407f0b in os_panic () at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vpp/vnet/main.c:331
> #3  0x7f7a08867bd0 in debugger () at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:84
> #4  0x7f7a08868008 in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0,
> fmt=0x7f7a0a0add78 "%s:%d (%s) assertion `%s' fails") at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:143
> #5  0x7f7a09e10be0 in tls_ctx_half_open_get (ctx_index=48) at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vnet/tls/tls.c:126
> #6  0x7f7a09e11889 in tls_session_connected_callback (tls_app_index=0, 
> ho_ctx_index=48, tls_session=0x7f79c9b6d1c0,
> is_fail=0 '\000') at 
> /home/pyu4/git-home/vpp_clean/vpp/src/vnet/tls/tls.c:404
> #7  0x7f7a09d5ea6e in session_stream_connect_notify (tc=0x7f79c9b655fc, 
> is_fail=0 '\000')
> at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/session/session.c:648
> #8  0x7f7a099cb969 in tcp46_syn_sent_inline (vm=0x7f79c8a25100, 
> node=0x7f79c9a60500, from_frame=0x7f79c8b2a9c0, is_ip4=1)
> at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/tcp/tcp_input.c:2306
> #9  0x7f7a099cbe00 in tcp4_syn_sent (vm=0x7f79c8a25100, 
> node=0x7f79c9a60500, from_frame=0x7f79c8b2a9c0)
> at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/tcp/tcp_input.c:2387
> #10 0x7f7a08fefa35 in dispatch_node (vm=0x7f79c8a25100, 
> node=0x7f79c9a60500, type=VLIB_NODE_TYPE_INTERNAL,
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f79c8b2a9c0, 
> last_time_stamp=902372436923868)
> at /home/pyu4/git-home/vpp_clean/vpp/src/vlib/main.c:988
> #11 0x7f7a08feffee in 

Re: [vpp-dev] TLS half open lock

2018-08-28 Thread Yu, Ping
Hello, Florin,

From the info below, we can see a real race condition happen.

thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149


[Time 0]

thread 2 free: 149
Thread 2 frees 149, and now 149 is at the tail of the pool's free_indices vector
(vec_len (_pool_var (p)->free_indices));

[Time 1 2 ]

Now thread 0 wants to get, and thread 3 wants to put at the same time.
Before thread 3 puts, thread 0 reads the current vec_len: 149.
Thread 3 makes:  vec_add1 (_pool_var (p)->free_indices, _pool_var (l));
Thread 0 makes:  _vec_len (_pool_var (p)->free_indices) = _pool_var (l) - 1;


[Time 3]
Due to the race condition, the current vec_len is the same as at time 0, and the old
element 149 is returned again.

Even though performance is important, we still need to use a lock to resolve
this race condition.


From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Yu, Ping
Sent: Tuesday, August 28, 2018 5:29 PM
To: Florin Coras 
Cc: Florin Coras (fcoras) ; vpp-dev@lists.fd.io; Yu, Ping 

Subject: Re: [vpp-dev] TLS half open lock

Hi, Florin,

Yes, you are right, and all alloc operation is performed by thread 0. An 
interesting thing is that if running “test echo clients nclients 300 uri 
tls://10.10.1.1/” in clients with 4 threads, I can easily catch the case 
one same index be alloc twice by thread 0.

thread 0: alloc: 145
thread 0: alloc: 69
thread 4 free: 151
thread 0: alloc: 151
thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149
thread 0: alloc: 58
thread 0: alloc: 9
thread 0: alloc: 29
thread 3 free: 146
thread 0: alloc: 146
thread 2 free: 153
thread 0: alloc: 144
thread 0: alloc: 153
thread 0: alloc: 124
thread 3 free: 25
thread 0: alloc: 25

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Tuesday, August 28, 2018 10:24 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The expectation is that all connects/listens come on the main thread (with the 
worker barrier held). In other words, we only need to support a one writer, 
multiple readers scenario.

Florin

On Aug 27, 2018, at 6:29 PM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hi, Florin,

To check if it is about to expand is also lockless. Is there any issue if two 
threads check the pool simultaneously, and just one slot is available? One code 
will do normal get, and the other thread is expanding the pool?

Thanks
Ping

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Florin Coras
Sent: Tuesday, August 28, 2018 12:51 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The current implementation only locks the half-open pool if the pool is about 
to expand. This is done to increase speed by avoiding unnecessary locking, 
i.e., if pool is not about to expand, it should be safe to get a new element 
from it without affecting readers. Now the thing to figure out is why this is 
happening. Does the slowdown due to the “big lock” avoid some race or is there 
more to it?

First of all, how many workers do you have configured and how many sessions are 
you allocating/connecting? Do you see failed connects?

Your tls_ctx_half_open_get line numbers don’t match my code. Did you by chance 
modify something else?

Thanks,
Florin


On Aug 27, 2018, at 9:22 AM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hello, all

Recently I found that the TLS half open lock is not well implemented, and if 
enabling multiple thread, there are chances to get the following core dump info 
in debug mode.

(gdb) where
#0  0x7f7a0848e428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7f7a0849002a in __GI_abort () at abort.c:89
#2  0x00407f0b in os_panic () at 
/home/pyu4/git-home/vpp_clean/vpp/src/vpp/vnet/main.c:331
#3  0x7f7a08867bd0 in debugger () at 
/home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:84
#4  0x7f7a08868008 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0,
fmt=0x7f7a0a0add78 "%s:%d (%s) assertion `%s' fails") at 
/home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:143
#5  0x7f7a09e10be0 in tls_ctx_half_open_get (ctx_index=48) at 
/home/pyu4/git-home/vpp_clean/vpp/src/vnet/tls/tls.c:126
#6  0x7f7a09e11889 in tls_session_connected_callback (tls_app_index=0, 
ho_ctx_index=48, tls_session=0x7f79c9b6d1c0,
is_fail=0 '\000') at 
/home/pyu4/git-home/vpp_clean/vpp/src/vnet/tls/tls.c:404
#7  0x7f7a09d5ea6e in session_stream_connect_notify (tc=0x7f79c9b655fc, 
is_fail=0 '\000')
at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/session/session.c:648
#8  0x7f7a099cb969 in tcp46_syn_sent_inline (vm=0x7f79c8a25100, 
node=0x7f79c9a60500, from_frame=0x7f79c8b2a9c0, is_ip4=1)
at 

Re: [vpp-dev] IGMP enable issue

2018-08-28 Thread Neale Ranns via Lists.Fd.Io

Hi Aleksander,

It’s not top of my TODO list right now. Your additions would be most welcome.

/neale


From:  on behalf of Aleksander Djuric 

Date: Tuesday, 28 August 2018 at 14:41
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] IGMP enable issue

In addition to my previous message...

Unfortunatelly it's not work for me (
I need IGMPv2 support.. and I have found this comment:

/* TODO: IGMPv2 and IGMPv1 */

Is it in your nearest plans?

Certainly I also will try to do something by myself..
Regards,
Aleksander


Re: [vpp-dev] IGMP enable issue

2018-08-28 Thread Aleksander Djuric
In addition to my previous message...

Unfortunately, it does not work for me :(
I need IGMPv2 support, and I have found this comment:

/* TODO: IGMPv2 and IGMPv1 */

Is it in your near-term plans?

Certainly, I will also try to do something by myself.
Regards,
Aleksander


Re: [vpp-dev] crash in vpp 18.07

2018-08-28 Thread Dave Barach via Lists.Fd.Io
If possible, repeat the exercise with a debug image. The code involved is well 
tested, and does not crash without “help.” I wouldn’t be surprised to run into 
an ASSERT elsewhere which explains the problem.

For the specific packet in question: see if 
vnet_buffer(b)->sw_if_index[VLIB_TX] is set to a nonexistent FIB index. That’s 
one possible reason why the root mtrie ply might not play nicely with the other 
children.

HTH... Dave
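
A possible way to poke at that in the core (a sketch only; exact expressions
depend on the debug info in your build, and vnet_buffer(b) is just a cast of
b->opaque, with VLIB_TX being index 1):

(gdb) frame 6
(gdb) p ((vnet_buffer_opaque_t *) b->opaque)->sw_if_index[1]

If the printed value is not a FIB index you have actually configured, that
would match the scenario above.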

From: vpp-dev@lists.fd.io  On Behalf Of abbas ali chezgi 
via Lists.Fd.Io
Sent: Tuesday, August 28, 2018 5:16 AM
To: Vpp-dev 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] crash in vpp 18.07

more info after full core dump:


#0  0x7f57b1b6e428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f57b1b7002a in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x5565b7348dee in os_exit (code=code@entry=1)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vpp/vnet/main.c:355
#3  0x7f57b3210035 in unix_signal_handler (signum=, 
si=,
uc=) at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/unix/main.c:157
#4  
#5  0x7f57b29b7321 in ip4_fib_mtrie_lookup_step_one 
(dst_address=0x7f576af32a1a, m=)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_mtrie.h:229
#6  ip4_local_check_src (error0=, last_check=,
ip0=0x7f576af32a0e, b=0x7f576af32900)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1280
#7  ip4_local_inline (head_of_feature_arc=1, frame=, 
node=,
vm=0x7f57b342cf80 )
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1514
#8  ip4_local (frame=, node=, vm=0x7f57b342cf80 
)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1534
#9  ip4_local_avx2 (vm=0x7f57b342cf80 , node=0x7f577155c3c0, 
frame=0x7f57721c4400)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1555
#10 0x7f57b31d63a4 in dispatch_node (last_time_stamp=85226775203136, 
frame=0x7f57721c4400,
dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, 
node=0x7f577155c3c0,
vm=0x7f57b342cf80 )
at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:988
#11 dispatch_pending_node (vm=vm@entry=0x7f57b342cf80 ,
pending_frame_index=pending_frame_index@entry=5,
last_time_stamp=last_time_stamp@entry=85226775203136)
at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1138
#12 0x7f57b31d7dee in vlib_main_or_worker_loop (is_main=1, 
vm=0x7f57b342cf80 )
at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1616
#13 vlib_main_loop (vm=0x7f57b342cf80 )
at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1635
#14 vlib_main (vm=vm@entry=0x7f57b342cf80 , 
input=input@entry=0x7f57717e4fa0)
at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1826
#15 0x7f57b320f4d3 in thread0 (arg=140014646382464)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/unix/main.c:607
#16 0x7f57b234e778 in clib_calljmp ()
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vppinfra/longjmp.S:110
#17 0x7ffce83886c0 in ?? ()
#18 0x7f57b3210576 in vlib_unix_main (argc=, argv=)
at 
/w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/unix/main.c:674
#19 0x367063740038 in ?? ()
#20 0x502074657365722d in ?? ()
#21 0x64207374656b6361 in ?? ()
#22 0x6620646570706f72 in ?? ()
#23 0x206b63616c20726f in ?? ()
#24 0x696620787220666f in ?? ()
#25 0x6563617073206f66 in ?? ()
#26 0x6425203a in ?? ()
#27 0x in ?? ()


On Tuesday, August 28, 2018, 1:30:26 PM GMT+4:30, abbas ali chezgi via 
Lists.Fd.Io mailto:chezgi=yahoo@lists.fd.io>> 
wrote:



this crash accoured in vpp 18.07:

vpp[71]: received signal SIGSEGV, PC 0x7f125c775321, faulting address 
0x7f16ae006978
vpp[71]: #0  0x7f125cfcdf7f 0x7f125cfcdf7f
vpp[71]: #1  0x7f125bed6390 0x7f125bed6390
vpp[71]: #2  0x7f125c775321 ip4_local_avx2 + 0x905
vpp[71]: #3  0x7f125cf943a4 0x7f125cf943a4
vpp[71]: #4  0x7f125cf95dee vlib_main + 0x71e
vpp[71]: #5  0x7f125cfcd4d3 0x7f125cfcd4d3
vpp[71]: #6  0x7f125c10c778 0x7f125c10c778



running options:

unix {
log /outDir/vpp.log
cli-listen /run/vpp/cli.sock
}


api-segment {
prefix vpp1
}


plugins {
plugin default { enable }
plugin dpdk_plugin.so { disable }
}






thanks.

Re: [vpp-dev] [csit-dev] [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP artifacts

2018-08-28 Thread Vratko Polak -X via RT
+1.

-Original Message-
From: csit-...@lists.fd.io  On Behalf Of Jan Gelety via 
Lists.Fd.Io
Sent: Tuesday, 2018-August-28 09:38
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
; fdio-helpd...@rt.linuxfoundation.org
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
artifacts

+1

30 day artefact availability (except official release artefacts, of course) is 
enough for us.

Thanks,
Jan

-Original Message-
From: csit-...@lists.fd.io  On Behalf Of Peter Mikus via 
Lists.Fd.Io
Sent: Tuesday, August 28, 2018 8:57 AM
To: fdio-helpd...@rt.linuxfoundation.org
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
artifacts

Hello,

Yes I think this will work long term. (still will let others to comment but, 
for now looks good to me)

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited

-Original Message-
From: Vanessa Valderrama via RT 
Sent: Monday, August 27, 2018 6:27 PM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP artifacts

Ed developed a script that will allow us to keep/remove artifacts based on the 
number of days. So we will remove artifacts older than 30 days.  Please let me 
know if this meets your requirements.

Thank you,
Vanessa


On Fri Aug 10 09:14:10 2018, jgel...@cisco.com wrote:
> 
> Hello,
> 
> Could you, please, let us know if the solution proposed by Peter below 
> is possible to implement or we need to start thinking about another
> (non-optimal) solution, e.g.
> 
> - compile vpp artefacts for every csit-vpp-functional job (time and 
> resource wasting)
> 
> - copy vpp artefacts needed for csit-vpp-functional jobs to one of 
> VIRL servers and use nfs/rsync/apache2 (just temporary solution as 
> VIRL servers are planned to be re-used when CSIT VIRL tests migration 
> to VPP_path and VPP_device tests is finished)
> 
> - copy vpp artefacts needed for csit-vpp-functional jobs to another 
> location on Nexus that is used to store csit reports (here we would 
> need solution for deleting obsolete artefacts)
> 
> Thanks,
> Jan
> 
> Jan Gelety
> Engineer - Software
> Cisco Systems, Inc.
> 
> -Original Message-
> From: csit-...@lists.fd.io  On Behalf Of Peter 
> Mikus via Lists.Fd.Io
> Sent: Wednesday, August 8, 2018 5:29 PM
> To: fdio-helpd...@rt.linuxfoundation.org
> Cc: csit-...@lists.fd.io
> Subject: Re: [csit-dev] [FD.io Helpdesk #56625] Nexus
> fd.io.master.centos7 VPP artifacts
> 
> Hello,
> 
> We are creating branches on weekly basis and CSIT being verified in 
> weekly job. So the question is if there is option to set "date" or 
> "number" when limiting repo.
> 
> Counting the cadence of up to 10 merges per day (artifacts posted on
> Nexus) then means that safe value is around 100-120 artifacts.
> This I am talking about master branch of VPP. Stable branches we 
> should be ok with just 10-15 artifacts.
> 
> Peter Mikus
> Engineer – Software
> Cisco Systems Limited
> 
> -Original Message-
> From: Vanessa Valderrama via RT [mailto:fdio- 
> helpd...@rt.linuxfoundation.org]
> Sent: Wednesday, August 08, 2018 5:22 PM
> To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp- 
> d...@lists.fd.io
> Subject: [FD.io Helpdesk #56625] Nexus fd.io.master.centos7 VPP 
> artifacts
> 
> Peter,
> 
> How many artifacts do you need us to retain for your testing?
> 
> Thank you,
> Vanessa
> 
> 
> On Mon Aug 06 04:53:29 2018, pmi...@cisco.com wrote:
> > Hello Vanessa,
> >
> > For CSIT it is not about release or not. We would need to increase 
> > cadence on our weekly jobs to daily. Currently CSIT jobs are all 
> > failing as VPP has more than 10-15 artifacts in week.
> > Our defined stable versions of VPP (updated once a week) are not in 
> > repo anymore or are obsoleting faster than we are updating. This is 
> > impacting everything.
> >
> > Right now we are *blocked* and we need to work new solution do adopt.
> >
> > One of the option is that we will have to start building VPP from 
> > scratch for every job as we cannot use artifacts anymore. This will 
> > cause huge overhead on infrastructure and execution times will 
> > extend as nexus acted as optimization for us.
> > Right now Nexus is not an option for us anymore. This also means 
> > that Nexus artifacts will not be tested by CSIT.
> >
> > Peter Mikus
> > Engineer – Software
> > Cisco Systems Limited
> >
> > -Original Message-
> >   From: Vanessa Valderrama via RT [mailto:fdio- 
> > helpd...@rt.linuxfoundation.org]
> > Sent: Tuesday, July 31, 2018 8:16 PM
> >  To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> > 
> >  Cc: csit-...@lists.fd.io; infra-steer...@lists.fd.io; vpp- 
> > d...@lists.fd.io
> >  Subject: [FD.io Helpdesk 

Re: [vpp-dev] TLS half open lock

2018-08-28 Thread Yu, Ping
Hi, Florin,

Yes, you are right, and all alloc operations are performed by thread 0. An
interesting thing is that when running “test echo clients nclients 300 uri
tls://10.10.1.1/” in the client with 4 threads, I can easily catch the case
where the same index is allocated twice by thread 0.

thread 0: alloc: 145
thread 0: alloc: 69
thread 4 free: 151
thread 0: alloc: 151
thread 2 free: 149
thread 3 free: 155
thread 0: alloc: 149
thread 0: alloc: 149
thread 0: alloc: 58
thread 0: alloc: 9
thread 0: alloc: 29
thread 3 free: 146
thread 0: alloc: 146
thread 2 free: 153
thread 0: alloc: 144
thread 0: alloc: 153
thread 0: alloc: 124
thread 3 free: 25
thread 0: alloc: 25

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Tuesday, August 28, 2018 10:24 AM
To: Yu, Ping 
Cc: Florin Coras (fcoras) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The expectation is that all connects/listens come on the main thread (with the 
worker barrier held). In other words, we only need to support a one writer, 
multiple readers scenario.

Florin


On Aug 27, 2018, at 6:29 PM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hi, Florin,

To check if it is about to expand is also lockless. Is there any issue if two 
threads check the pool simultaneously, and just one slot is available? One code 
will do normal get, and the other thread is expanding the pool?

Thanks
Ping

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Florin Coras
Sent: Tuesday, August 28, 2018 12:51 AM
To: Yu, Ping mailto:ping...@intel.com>>
Cc: Florin Coras (fcoras) mailto:fco...@cisco.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] TLS half open lock

Hi Ping,

The current implementation only locks the half-open pool if the pool is about 
to expand. This is done to increase speed by avoiding unnecessary locking, 
i.e., if pool is not about to expand, it should be safe to get a new element 
from it without affecting readers. Now the thing to figure out is why this is 
happening. Does the slowdown due to the “big lock” avoid some race or is there 
more to it?

First of all, how many workers do you have configured and how many sessions are 
you allocating/connecting? Do you see failed connects?

Your tls_ctx_half_open_get line numbers don’t match my code. Did you by chance 
modify something else?

Thanks,
Florin



On Aug 27, 2018, at 9:22 AM, Yu, Ping 
mailto:ping...@intel.com>> wrote:

Hello, all

Recently I found that the TLS half open lock is not well implemented, and if 
enabling multiple thread, there are chances to get the following core dump info 
in debug mode.

(gdb) where
#0  0x7f7a0848e428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7f7a0849002a in __GI_abort () at abort.c:89
#2  0x00407f0b in os_panic () at 
/home/pyu4/git-home/vpp_clean/vpp/src/vpp/vnet/main.c:331
#3  0x7f7a08867bd0 in debugger () at 
/home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:84
#4  0x7f7a08868008 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0,
fmt=0x7f7a0a0add78 "%s:%d (%s) assertion `%s' fails") at 
/home/pyu4/git-home/vpp_clean/vpp/src/vppinfra/error.c:143
#5  0x7f7a09e10be0 in tls_ctx_half_open_get (ctx_index=48) at 
/home/pyu4/git-home/vpp_clean/vpp/src/vnet/tls/tls.c:126
#6  0x7f7a09e11889 in tls_session_connected_callback (tls_app_index=0, 
ho_ctx_index=48, tls_session=0x7f79c9b6d1c0,
is_fail=0 '\000') at 
/home/pyu4/git-home/vpp_clean/vpp/src/vnet/tls/tls.c:404
#7  0x7f7a09d5ea6e in session_stream_connect_notify (tc=0x7f79c9b655fc, 
is_fail=0 '\000')
at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/session/session.c:648
#8  0x7f7a099cb969 in tcp46_syn_sent_inline (vm=0x7f79c8a25100, 
node=0x7f79c9a60500, from_frame=0x7f79c8b2a9c0, is_ip4=1)
at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/tcp/tcp_input.c:2306
#9  0x7f7a099cbe00 in tcp4_syn_sent (vm=0x7f79c8a25100, 
node=0x7f79c9a60500, from_frame=0x7f79c8b2a9c0)
at /home/pyu4/git-home/vpp_clean/vpp/src/vnet/tcp/tcp_input.c:2387
#10 0x7f7a08fefa35 in dispatch_node (vm=0x7f79c8a25100, 
node=0x7f79c9a60500, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f79c8b2a9c0, 
last_time_stamp=902372436923868)
at /home/pyu4/git-home/vpp_clean/vpp/src/vlib/main.c:988
#11 0x7f7a08feffee in dispatch_pending_node (vm=0x7f79c8a25100, 
pending_frame_index=7, last_time_stamp=902372436923868)
at /home/pyu4/git-home/vpp_clean/vpp/src/vlib/main.c:1138
#12 0x7f7a08ff1bed in vlib_main_or_worker_loop (vm=0x7f79c8a25100, 
is_main=0)
at /home/pyu4/git-home/vpp_clean/vpp/src/vlib/main.c:1554
#13 0x7f7a08ff240c in vlib_worker_loop (vm=0x7f79c8a25100) at 
/home/pyu4/git-home/vpp_clean/vpp/src/vlib/main.c:1634
#14 0x7f7a09035541 in vlib_worker_thread_fn (arg=0x7f79ca4a41c0) at 
/home/pyu4/git-home/vpp_clean/vpp/src/vlib/threads.c:1760
#15 0x7f7a0888aa38 in 

Re: [vpp-dev] IGMP enable issue

2018-08-28 Thread Aleksander Djuric
Hi, Neale

Many thanks!
I did the same job already :) My patch is attached.
I'm trying to test it now and will write about the results later.

Best regards,
Aleksander

On Mon, Aug 27, 2018 at 05:40 PM, Neale Ranns wrote:

> Hi Aleksander,
>
> The API required to enable router mode did not have a CLI equivalent. I
> have added it in:
>
>   https://gerrit.fd.io/r/#/c/14507/
>
> now do:
>
>   igmp enable router
>
> when done
>
>   igmp disable router
>
--- src/plugins/igmp/igmp_cli.c.orig	2018-08-27 09:58:45.076027254 +0300
+++ src/plugins/igmp/igmp_cli.c	2018-08-27 15:09:17.709417958 +0300
@@ -78,6 +78,72 @@
 /* *INDENT-ON* */
 
 static clib_error_t *
+igmp_enable_disable_command_fn (vlib_main_t * vm, unformat_input_t * input,
+			vlib_cli_command_t * cmd)
+{
+  unformat_input_t _line_input, *line_input = &_line_input;
+  clib_error_t *error = NULL;
+  vnet_main_t *vnm = vnet_get_main ();
+  u32 sw_if_index;
+  u8 enable = 1;
+  igmp_mode_t mode = IGMP_MODE_HOST;
+
+  if (!unformat_user (input, unformat_line_input, line_input))
+{
+  error =
+	clib_error_return (0,
+			   "'help igmp' or 'igmp ?' for help");
+  return error;
+}
+
+  while (unformat_check_input (line_input) != UNFORMAT_END_OF_INPUT)
+{
+  if (unformat (line_input, "enable"))
+	enable = 1;
+  else if (unformat (line_input, "disable"))
+	enable = 0;
+  else
+	if (unformat
+	(line_input, "int %U", unformat_vnet_sw_interface, vnm,
+	 &sw_if_index));
+  else
+	if (unformat (line_input, "mode host"));
+  else
+	if (unformat (line_input, "mode router"))
+	  mode = IGMP_MODE_ROUTER;
+  else
+	{
+	  error =
+	clib_error_return (0, "unknown input '%U'", format_unformat_error,
+			   line_input);
+	  goto done;
+	}
+}
+
+  if ((vnet_sw_interface_get_flags (vnm, sw_if_index)
+   & VNET_SW_INTERFACE_FLAG_ADMIN_UP) == 0)
+{
+  error = clib_error_return (0, "Interface is down");
+  goto done;
+}
+
+  igmp_enable_disable (sw_if_index, enable, mode);
+
+done:
+  unformat_free (line_input);
+  return error;
+}
+
+/* *INDENT-OFF* */
+VLIB_CLI_COMMAND (igmp_enable_disable_command, static) = {
+  .path = "igmp",
+  .short_help = "igmp [enable|disable] "
+		"int <interface> mode [host|router]",
+  .function = igmp_enable_disable_command_fn,
+};
+/* *INDENT-ON* */
+
+static clib_error_t *
 igmp_listen_command_fn (vlib_main_t * vm, unformat_input_t * input,
 			vlib_cli_command_t * cmd)
 {
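
For reference, a minimal sketch of how the command added by this patch would be invoked from the VPP CLI, based on the unformat logic above (the interface name is just a placeholder):

vpp# igmp enable int GigabitEthernet0/8/0 mode router
vpp# igmp disable int GigabitEthernet0/8/0 mode router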
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10313): https://lists.fd.io/g/vpp-dev/message/10313
Mute This Topic: https://lists.fd.io/mt/24971765/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] crash in vpp 18.07

2018-08-28 Thread abbas ali chezgi via Lists.Fd.Io
more info after full core dump:

#0  0x7f57b1b6e428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f57b1b7002a in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x5565b7348dee in os_exit (code=code@entry=1)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vpp/vnet/main.c:355
#3  0x7f57b3210035 in unix_signal_handler (signum=<optimized out>, si=<optimized out>, uc=<optimized out>)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/unix/main.c:157
#4  <signal handler called>
#5  0x7f57b29b7321 in ip4_fib_mtrie_lookup_step_one (dst_address=0x7f576af32a1a, m=<optimized out>)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_mtrie.h:229
#6  ip4_local_check_src (error0=<optimized out>, last_check=<optimized out>, ip0=0x7f576af32a0e, b=0x7f576af32900)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1280
#7  ip4_local_inline (head_of_feature_arc=1, frame=<optimized out>, node=<optimized out>, vm=0x7f57b342cf80)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1514
#8  ip4_local (frame=<optimized out>, node=<optimized out>, vm=0x7f57b342cf80)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1534
#9  ip4_local_avx2 (vm=0x7f57b342cf80, node=0x7f577155c3c0, frame=0x7f57721c4400)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vnet/ip/ip4_forward.c:1555
#10 0x7f57b31d63a4 in dispatch_node (last_time_stamp=85226775203136, frame=0x7f57721c4400,
    dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, node=0x7f577155c3c0, vm=0x7f57b342cf80)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:988
#11 dispatch_pending_node (vm=vm@entry=0x7f57b342cf80, pending_frame_index=pending_frame_index@entry=5,
    last_time_stamp=last_time_stamp@entry=85226775203136)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1138
#12 0x7f57b31d7dee in vlib_main_or_worker_loop (is_main=1, vm=0x7f57b342cf80)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1616
#13 vlib_main_loop (vm=0x7f57b342cf80)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1635
#14 vlib_main (vm=vm@entry=0x7f57b342cf80, input=input@entry=0x7f57717e4fa0)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/main.c:1826
#15 0x7f57b320f4d3 in thread0 (arg=140014646382464)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/unix/main.c:607
#16 0x7f57b234e778 in clib_calljmp ()
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vppinfra/longjmp.S:110
#17 0x7ffce83886c0 in ?? ()
#18 0x7f57b3210576 in vlib_unix_main (argc=<optimized out>, argv=<optimized out>)
    at /w/workspace/vpp-merge-1807-ubuntu1604/build-data/../src/vlib/unix/main.c:674
#19 0x367063740038 in ?? ()
#20 0x502074657365722d in ?? ()
#21 0x64207374656b6361 in ?? ()
#22 0x6620646570706f72 in ?? ()
#23 0x206b63616c20726f in ?? ()
#24 0x696620787220666f in ?? ()
#25 0x6563617073206f66 in ?? ()
#26 0x6425203a in ?? ()
#27 0x in ?? ()
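
For anyone following along: the backtrace above was presumably taken from the core file with gdb. A typical sequence, with example paths that will differ per system and assuming the vpp debug symbols are installed, looks like:

$ ulimit -c unlimited              # before starting vpp, so a core file gets written
$ gdb /usr/bin/vpp /path/to/core   # open the vpp binary together with the core file
(gdb) bt full                      # full backtrace with local variables
(gdb) frame 6                      # select the ip4_local_check_src frame
(gdb) info locals                  # inspect its locals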

On Tuesday, August 28, 2018, 1:30:26 PM GMT+4:30, abbas ali chezgi via Lists.Fd.Io wrote:

This crash occurred in VPP 18.07:
vpp[71]: received signal SIGSEGV, PC 0x7f125c775321, faulting address 0x7f16ae006978
vpp[71]: #0  0x7f125cfcdf7f 0x7f125cfcdf7f
vpp[71]: #1  0x7f125bed6390 0x7f125bed6390
vpp[71]: #2  0x7f125c775321 ip4_local_avx2 + 0x905
vpp[71]: #3  0x7f125cf943a4 0x7f125cf943a4
vpp[71]: #4  0x7f125cf95dee vlib_main + 0x71e
vpp[71]: #5  0x7f125cfcd4d3 0x7f125cfcd4d3
vpp[71]: #6  0x7f125c10c778 0x7f125c10c778


running options:
unix {
  log /outDir/vpp.log
  cli-listen /run/vpp/cli.sock
}
api-segment { prefix vpp1 }
plugins { plugin default { enable } plugin dpdk_plugin.so { disable } }



thanks.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10311): https://lists.fd.io/g/vpp-dev/message/10311
Mute This Topic: https://lists.fd.io/mt/25038079/1239119
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [che...@yahoo.com]
-=-=-=-=-=-=-=-=-=-=-=-  
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10312): https://lists.fd.io/g/vpp-dev/message/10312
Mute This Topic: https://lists.fd.io/mt/25038079/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] crash in vpp 18.07

2018-08-28 Thread abbas ali chezgi via Lists.Fd.Io

This crash occurred in VPP 18.07:
vpp[71]: received signal SIGSEGV, PC 0x7f125c775321, faulting address 0x7f16ae006978
vpp[71]: #0  0x7f125cfcdf7f 0x7f125cfcdf7f
vpp[71]: #1  0x7f125bed6390 0x7f125bed6390
vpp[71]: #2  0x7f125c775321 ip4_local_avx2 + 0x905
vpp[71]: #3  0x7f125cf943a4 0x7f125cf943a4
vpp[71]: #4  0x7f125cf95dee vlib_main + 0x71e
vpp[71]: #5  0x7f125cfcd4d3 0x7f125cfcd4d3
vpp[71]: #6  0x7f125c10c778 0x7f125c10c778


running options:
unix {
  log /outDir/vpp.log
  cli-listen /run/vpp/cli.sock
}
api-segment { prefix vpp1 }
plugins { plugin default { enable } plugin dpdk_plugin.so { disable } }



thanks.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10311): https://lists.fd.io/g/vpp-dev/message/10311
Mute This Topic: https://lists.fd.io/mt/25038079/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] IPv6 Routing Test #vpp

2018-08-28 Thread arda
Hey guys
I have a problem with IPv6 routing using TRex. It seems I have to add the
neighbors' addresses manually; unfortunately, the error I receive is "ip6
source lookup miss". From my point of view, this is because of uRPF.
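
A minimal sketch of adding a neighbor and a route by hand is shown below (interface name, addresses and MAC are placeholders, and the exact CLI may vary between VPP releases):

vpp# set ip6 neighbor TenGigabitEthernet3/0/0 2001:db8::2 3c:fd:fe:aa:bb:cc static
vpp# ip route add 2001:db8:1::/64 via 2001:db8::2 TenGigabitEthernet3/0/0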
It would be great if you could share your ideas; if you have successfully
tested this setup, I would be pleased to see your VPP and TRex
configurations.
Thanks
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10309): https://lists.fd.io/g/vpp-dev/message/10309
Mute This Topic: https://lists.fd.io/mt/25037634/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-