[vpp-dev] vpp deadlock - syslog in signal handler

2017-10-17 Thread Gabriel Ganne
Hi,



I have encountered a deadlock in vpp when a memory-alloc exception is raised.

The signal is caught by unix_signal_handler(), which determines this is a fatal 
error and then syslogs the error message.

The problem is that syslog() then tries to allocate scratchpad memory, and 
deadlocks, since allocation is the reason why I'm here in the first place.



clib_warning() functions should be safe because all the memory needed is 
alloc'ed at init, but I don't see how this syslog() call can succeed.

Should I just remove it?

Or is there a way I don't know about to still make this work?
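
For what it's worth, the only reliably safe pattern I can think of is to 
restrict the handler to async-signal-safe calls. A minimal sketch (my own 
illustration, not existing vpp code; write() is on the POSIX 
async-signal-safe list, syslog() is not):

/* Hypothetical fallback: report a fatal signal without allocating. */
#include <unistd.h>
#include <string.h>

static void
fatal_signal_report (const char *msg)
{
  /* No malloc, no stdio, no locks: just a raw write to stderr. */
  ssize_t rv = write (STDERR_FILENO, msg, strlen (msg));
  (void) rv; /* nothing sensible to do on failure here */
}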



Below is a backtrace of the problem:
#0  0xa42e2c0c in __lll_lock_wait_private 
(futex=futex@entry=0xa43869a0 ) at ./lowlevellock.c:33
#1  0xa426b6e8 in __GI___libc_malloc (bytes=bytes@entry=584) at 
malloc.c:2888
#2  0xa425ace8 in __GI___open_memstream (bufloc=0x655b4670, 
bufloc@entry=0x655b46d0, sizeloc=0x655b4678, 
sizeloc@entry=0x655b46d8) at memstream.c:76
#3  0xa42cef18 in __GI___vsyslog_chk (ap=..., fmt=0xa4be2990 "%s", 
flag=-1, pri=27) at ../misc/syslog.c:167
#4  __syslog (pri=pri@entry=27, fmt=fmt@entry=0xa4be2990 "%s") at 
../misc/syslog.c:117
#5  0xa4bd7ab4 in unix_signal_handler (signum=<optimized out>, 
si=<optimized out>, uc=<optimized out>) at 
/home/gannega/vpp/build-data/../src/vlib/unix/main.c:119
#6  <signal handler called>
#7  0xa42654e0 in malloc_consolidate (av=av@entry=0xa43869a0 
) at malloc.c:4182
#8  0xa4269354 in malloc_consolidate (av=0xa43869a0 ) 
at malloc.c:4151
#9  _int_malloc (av=av@entry=0xa43869a0 , 
bytes=bytes@entry=32816) at malloc.c:3450
#10 0xa426b5b4 in __GI___libc_malloc (bytes=bytes@entry=32816) at 
malloc.c:2890
#11 0xa4299000 in __alloc_dir (statp=0x655b5d48, flags=0, 
close_fd=true, fd=5) at ../sysdeps/posix/opendir.c:247
#12 opendir_tail (fd=<optimized out>) at ../sysdeps/posix/opendir.c:145
#13 __opendir (name=name@entry=0xa4bdf258 "/sys/bus/pci/devices") at 
../sysdeps/posix/opendir.c:200
#14 0xa4bde088 in foreach_directory_file 
(dir_name=dir_name@entry=0xa4bdf258 "/sys/bus/pci/devices", 
f=f@entry=0xa4baf4a8 , arg=arg@entry=0xa4c0af30 
,
scan_dirs=scan_dirs@entry=0) at 
/home/gannega/vpp/build-data/../src/vlib/unix/util.c:59
#15 0xa4baed64 in linux_pci_init (vm=0xa4c0af30 ) 
at /home/gannega/vpp/build-data/../src/vlib/linux/pci.c:648
#16 0xa4bae504 in vlib_call_init_exit_functions (vm=0xa4c0af30 
, head=<optimized out>, call_once=call_once@entry=1) at 
/home/gannega/vpp/build-data/../src/vlib/init.c:57
#17 0xa4bae548 in vlib_call_all_init_functions (vm=<optimized out>) at 
/home/gannega/vpp/build-data/../src/vlib/init.c:75
#18 0xa4bb3838 in vlib_main (vm=<optimized out>, 
vm@entry=0xa4c0af30 , input=input@entry=0x655b5fc8) 
at /home/gannega/vpp/build-data/../src/vlib/main.c:1748
#19 0xa4bd7c0c in thread0 (arg=281473445834544) at 
/home/gannega/vpp/build-data/../src/vlib/unix/main.c:567
#20 0xa44f3e38 in clib_calljmp () at 
/home/gannega/vpp/build-data/../src/vppinfra/longjmp.S:676


Best regards,

--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Query on configuration on multithread mode

2017-10-17 Thread Ni, Hongjun
Hi vpp-dev,

We are doing performance tests in multithread mode, and have a question:

(1). With one main thread and one worker thread, configuring 16K PPPoE 
sessions took about 100 seconds.
(2). With one main thread and eight worker threads, configuring 16K PPPoE 
sessions took about 260 seconds.
Why is there such a difference in the time consumed?

The second question is:
In multi-thread mode, are the tables created in VLIB_INIT_FUNCTION (pppoe_init) 
shared by all worker threads, or does each worker thread have its own copy of 
each table?

Thanks a lot,
Hongjun
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] syslog in snat

2017-10-17 Thread Ole Troan
Matt,

> 
> I'll look through and see if I can help conjure something. I plan on using it 
> by siphoning to our central logging system. I didn't look at other ways, what 
> are your suggestions?

If you have an existing system using syslog that you need to integrate with, 
then sure, syslog is the answer. ;-)

First suggestion is, as I mentioned earlier: if you are doing this with a CGN 
you should probably look at deterministic NAT to avoid logging altogether.
Otherwise there is some newer stuff that might be interesting, like Kafka.
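
(The reason deterministic NAT avoids logging: the inside-to-outside mapping 
is computed, not allocated, so it can be reversed offline from the 
configuration alone. A toy illustration of the idea - this is NOT the 
plugin's actual algorithm:)

#include <stdint.h>

typedef struct
{
  uint32_t out_addr;            /* shared outside address */
  uint32_t port_lo, port_hi;    /* fixed port block for this host */
} det_map_t;

/* Each inside host gets a fixed, computable block of outside ports,
   so the mapping needs no per-session state and no per-session log. */
det_map_t
det_map (uint32_t in_host_index, uint32_t out_addr, uint32_t ports_per_host)
{
  det_map_t m;
  m.out_addr = out_addr;
  m.port_lo = 1024 + in_host_index * ports_per_host;
  m.port_hi = m.port_lo + ports_per_host - 1;
  return m;
}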

Or we could consider getting data out of VPP in some VPP-specific way across 
shared memory / memifs and then have an external agent converting that format 
to whatever external representation you want.

Long story short, two thoughts in head at the same time: nothing against 
syslog, let's give that a try! :-)

Best regards,
Ole



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SNAT hash tables

2017-10-17 Thread Ole Troan
Yuliang,

> I get it. It uses a spinlock (in vppinfra/bihash_template.c):
> 
> while (__sync_lock_test_and_set (h->writer_lock, 1))
> 
> On Mon, Oct 16, 2017 at 11:39 AM, Yuliang Li  wrote:
> Hi all,
> 
> I am curious about the SNAT implementation. I see that SNAT's hash tables are 
> shared by all worker threads. Does SNAT use a lock to avoid multiple threads 
> updating the hash table concurrently? If not, how does it avoid race 
> conditions?

Are you on latest?
The "global" NAT state space is essentially split across all workers. So there 
is no locking.

typedef struct {
  /* Main lookup tables */
  clib_bihash_8_8_t out2in;
  clib_bihash_8_8_t in2out;

  /* Find-a-user => src address lookup */
  clib_bihash_8_8_t user_hash;

  /* User pool */
  snat_user_t * users;

  /* Session pool */
  snat_session_t * sessions;

  /* Pool of doubly-linked list elements */
  dlist_elt_t * list_pool;

  u32 snat_thread_index;
} snat_main_per_thread_data_t;
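
In other words, each worker only ever touches its own element of the 
per-thread-data array. Roughly like this (a sketch; only the struct above is 
from the actual code, the accessor is made up):

/* Sketch: per-worker NAT state selection, no lock needed. */
static inline snat_main_per_thread_data_t *
my_thread_data (snat_main_per_thread_data_t * per_thread_data,
                u32 thread_index)
{
  /* One element per worker; indexing by thread_index avoids sharing. */
  return &per_thread_data[thread_index];
}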


Best regards,
Ole



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-10-17 Thread Prabhjot Singh Sethi
Hi Steven,

We are also looking at using a dpdk-based VirtualEthernet interface, so
looking back at this thread I was wondering what the problem is with the
dpdk-based solution.

1. You mentioned a dpdk-based interface can be created via vdev in the dpdk
clause of the startup file; can you share an example? The only example I
have seen so far associates a bond interface with physical slave interfaces.
Is it possible to create one for a VM?
2. Can you please share a link to the patch submitted for dpdk virtio-user?

Regards,
Prabhjot

From: Steven Luong (sluong) 
Sent: Monday, September 11, 2017 7:46 PM
To: Guo, Ruijing; Saxena, Nitin; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK
  

If you create the VirtualEthernet interface via the CLI or binary
API, it always uses VPP native. If you create the virtual interface
via vdev in the dpdk clause in the startup file, it uses dpdk’s
vhost-user.

  

The problem is in DPDK virtio-user: they don’t comply with the
virtio 1.0 spec. I submitted a patch for them. I don’t think they’ve
taken it yet.

  

 Steven

  

From:  on behalf of "Guo, Ruijing"
Date: Sunday, September 10, 2017 at 5:36 PM
To: "Saxena, Nitin" , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

   

  

Just for your reference:

I am using vpp 17.07. The default one is vpp native. But it cannot
work with virtio-user, so I changed to vhost-user in dpdk.

  

  

From: vpp-dev-boun...@lists.fd.io
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Saxena, Nitin
Sent: Monday, September 11, 2017 1:22 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

     

 Hi All

  

 I went through following video regarding vhost-user.

  

https://www.youtube.com/watch?v=z-ZRof2hDP0

  

The question is: in this video it is said that by default the VPP
implementation of vhost-user is used, not the dpdk one. Since this
video is 1 yr old and I am using vpp version 1704, can anyone please
comment which vhost-user code is enabled by default in vpp 1704 - is
it the dpdk one or VPP native? I can see in vpp-master/RELEASE.md that
DPDK vhost was deprecated in vpp 1609. Is it fixed now?

  

 Thanks,

 Nitin

 


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp deadlock - syslog in signal handler

2017-10-17 Thread Dave Barach (dbarach)
In almost all cases, the glibc malloc heap will not be pickled since it's not 
used on a regular basis.

For some effort, one could replace the syslog library code, I guess.
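
Something along these lines, perhaps (a sketch only, all names made up; the 
socket is opened at init time, and sendto() is async-signal-safe, unlike 
syslog(), which may allocate):

#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>

static int log_fd = -1;
static struct sockaddr_un log_addr;

/* Called once at init time, outside any signal handler. */
void
safe_syslog_init (void)
{
  log_fd = socket (AF_UNIX, SOCK_DGRAM, 0);
  memset (&log_addr, 0, sizeof (log_addr));
  log_addr.sun_family = AF_UNIX;
  strncpy (log_addr.sun_path, "/dev/log", sizeof (log_addr.sun_path) - 1);
}

/* Callable from a signal handler: no allocation, no locks. A real
   replacement would also prepend the "<pri>" syslog header. */
void
safe_syslog (const char *msg)
{
  if (log_fd >= 0)
    sendto (log_fd, msg, strlen (msg), 0,
            (struct sockaddr *) &log_addr, sizeof (log_addr));
}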

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Tuesday, October 17, 2017 4:18 AM
To: vpp-dev 
Subject: [vpp-dev] vpp deadlock - syslog in signal handler


Hi,



I have encountered a deadlock in vpp when a memory-alloc exception is raised.

The signal is caught by unix_signal_handler(), which determines this is a fatal 
error and then syslogs the error message.

The problem is that syslog() then tries to allocate scratchpad memory, and 
deadlocks, since allocation is the reason why I'm here in the first place.



clib_warning() functions should be safe because all the memory needed is 
alloc'ed at init, but I don't see how this syslog() call can succeed.

Should I just remove it?

Or is there a way I don't know about to still make this work?





Best regards,

--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vhash/pfhash

2017-10-17 Thread Damjan Marion


> On 13 Oct 2017, at 18:10, Brian Brooks  wrote:
> 
> Hi,
>  
> Will vhash and pfhash continue to be unused?
> Which hash is used the most often?
>  
> Thanks,
> Brian

Likely not. I don’t see it being used anywhere in the code….

We mainly use bihash and standard hash_* / mhash_*.
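
Typical bihash usage looks roughly like this (a from-memory sketch; check 
vppinfra/bihash_8_8.h for the exact signatures):

#include <vppinfra/bihash_8_8.h>

static void
bihash_example (void)
{
  clib_bihash_8_8_t h;
  clib_bihash_kv_8_8_t kv, result;

  /* table name, number of buckets, memory size */
  clib_bihash_init_8_8 (&h, "example", 1024, 1 << 20);

  kv.key = 42;
  kv.value = 100;
  clib_bihash_add_del_8_8 (&h, &kv, 1 /* is_add */);

  /* returns 0 on a hit */
  if (clib_bihash_search_8_8 (&h, &kv, &result) == 0)
    {
      /* hit: result.value now holds 100 */
    }
}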

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Fwd: FW: [FD.io Helpdesk #47101] AutoReply: No joy: ping6 gerrit.fd.io

2017-10-17 Thread Trishan de Lanerolle via RT
Hi Vanessa/Andy,
Can you look at this ticket? I can log into gerrit.
Trishan


-- Forwarded message --
From: Dave Barach (dbarach) 
Date: Tue, Oct 17, 2017 at 9:59 AM
Subject: FW: [FD.io Helpdesk #47101] AutoReply: No joy: ping6 gerrit.fd.io
To: "Emran Chaudhry (emran)" , Trishan de Lanerolle <
tdelanero...@linuxfoundation.org>, "Ed Warnicke (eaw)" , "
tsc-priv...@lists.fd.io" 


Folks,

It's not OK for a ticket like this one to sit for > 12hrs waiting for
someone to take a look at it.

The inevitable result: routine use of the "emergency" alias.

Thanks… Dave

-Original Message-
From: FD.io Helpdesk via RT [mailto:fdio-helpd...@rt.linuxfoundation.org]
Sent: Monday, October 16, 2017 5:03 PM
To: Dave Barach (dbarach) 
Subject: [FD.io Helpdesk #47101] AutoReply: No joy: ping6 gerrit.fd.io

Greetings,

Your support ticket regarding:
"No joy: ping6 gerrit.fd.io",
has been entered in our ticket tracker.  A summary of your ticket appears
below.

If you have any follow-up related to this issue, please reply to this email.

You may also follow up on your open tickets by visiting
https://rt.linuxfoundation.org/ -- if you have not logged into RT before,
you will need to follow the "Forgot your password" link to set an RT
password.

--
The Linux Foundation Support Team


-
It looks like gerrit.fd.io has dropped off the ipv6 radar screen. Appears
not to be a DNS problem or other problem on my end:

$ ping6 gerrit.fd.io
PING gerrit.fd.io(2604:e100:1:0:f816:3eff:fe7e:8731) 56 data bytes
^C
--- gerrit.fd.io ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3022ms

$ ping6 www.google.com
PING www.google.com(iad30s07-in-x04.1e100.net) 56 data bytes
64 bytes from iad30s07-in-x04.1e100.net: icmp_seq=1 ttl=49 time=33.4 ms
64 bytes from iad30s07-in-x04.1e100.net: icmp_seq=2 ttl=49 time=30.4 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 30.413/31.943/33.473/1.530 ms

Please investigate AYEC.

Thanks... Dave





-- 
Trishan R. de Lanerolle
Program Manager,  Networking
Linux Foundation
voice: +1.203.699.6401
skype: tdelanerolle
email: tdelanero...@linuxfoundation.org

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SNAT hash tables

2017-10-17 Thread Yuliang Li
The RELEASE.md says it is for 17.04. But in vppctl, 'show ver' shows I am
using v17.10-rc0~19-g58eb866

What is the latest version?

Thanks,
Yuliang



-- 
Yuliang Li
PhD student
Department of Computer Science
Yale University
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Query on configuration on multithread mode

2017-10-17 Thread Damjan Marion

My wild guess here is that you are doing a barrier sync on each session. With 
more workers, the sync takes longer….
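
That is, if the configuration path does something like the sketch below for 
every session, the cost grows with the number of workers (the barrier calls 
are the real API; the handler around them is made up):

#include <vlib/vlib.h>

/* Hypothetical per-session add path. */
static void
add_one_pppoe_session (vlib_main_t * vm)
{
  /* Stop all workers and wait for them to reach the barrier... */
  vlib_worker_thread_barrier_sync (vm);
  /* ...update the shared tables while the workers are held... */
  vlib_worker_thread_barrier_release (vm);
}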

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 128-bit integer scalar type

2017-10-17 Thread Damjan Marion


> On 13 Oct 2017, at 19:42, Brian Brooks  wrote:
> 
> Hi,
>  
> Are there cases where unsigned __int128 can be used instead of u32x4 or 
> similar-width vector.h type abstraction for code that does plain C-style 
> bitwise/arith ops or assignments?
>  
> Thanks,
> Brian

Probably nobody investigated, as u32x4 simply works. Is there any problem with 
u32x4?
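
For plain bitwise/arithmetic ops the two forms should behave the same, e.g. 
(a sketch; the typedef mirrors what vppinfra's vector headers define):

/* Equivalent to the u32x4 definition in vppinfra/vector.h. */
typedef unsigned int u32x4 __attribute__ ((vector_size (16)));

unsigned __int128
xor128_int (unsigned __int128 a, unsigned __int128 b)
{
  return a ^ b;   /* whole-register xor via the integer type */
}

u32x4
xor128_vec (u32x4 a, u32x4 b)
{
  return a ^ b;   /* gcc vector-extension element-wise xor */
}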

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

2017-10-17 Thread Steven Luong (sluong)
Prabhjot,

To use the dpdk vhost-user, you have to specify each virtual interface in the 
startup.conf file, something like this.

dpdk {
  no-pci
  file-prefix virtio_user_
  vdev virtio_user0,path=/tmp/sock0,mac=52:54:00:00:04:01
}

You would see VirtioUser instead of VirtualEthernet in the show interface 
output. VirtualEthernet is displayed if you are using vpp native vhost-user.
DBGvpp# sh int
sh int
  Name   Idx   State  Counter  Count
VirtioUser0/0/0   1 up
local00down
DBGvpp#

The problem with using dpdk vhost-user is static binding. Every virtual 
interface has to be specified in startup.conf prior to bringing up VPP. If you 
add another interface in the startup.conf file, you have to restart VPP.
If you do want to try it out for fun: I remember I needed to allocate 1G 
hugepages instead of 2M hugepages in order to get the dpdk vhost-user or dpdk 
virtio driver up. I don’t remember whether it is the former or the latter that 
really requires 1G hugepages. It took me a while to figure that part out.
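
By contrast, the VPP native vhost-user interface can be created and deleted at 
runtime from the CLI, along these lines (socket path made up; check "help 
create vhost-user" on your build for the exact options):

DBGvpp# create vhost-user socket /tmp/sock1.sock server
VirtualEthernet0/0/0
DBGvpp# delete vhost-user VirtualEthernet0/0/0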

Steven

From: Prabhjot Singh Sethi 
Date: Tuesday, October 17, 2017 at 3:50 AM
To: Saxena Nitin , "Steven Luong (sluong)" 
, Guo Ruijing , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] vhost-user stable implementation?: VPP native or DPDK

Hi Steven,
We are also looking at using a dpdk-based VirtualEthernet interface, so looking 
back at this thread I was wondering what the problem is with the dpdk-based 
solution.

1. You mentioned a dpdk-based interface can be created via vdev in the dpdk 
clause of the startup file; can you share an example? The only example I have 
seen so far associates a bond interface with physical slave interfaces. Is it 
possible to create one for a VM?
2. Can you please share a link to the patch submitted for dpdk virtio-user?

Regards,
Prabhjot
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Anton Baranov via RT
We're working with our cloud provider to fix the issue.


On Tue Oct 17 10:39:05 2017, abaranov wrote:
> Thishan:
> 
> I'm checking this right now
> 
> Regards,


-- 
Anton Baranov
Systems and Network Administrator
The Linux Foundation
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Trishan de Lanerolle via RT
Thank you Anton. Can you tell us the root cause after it's resolved?
Trishan





-- 
Trishan R. de Lanerolle
Program Manager,  Networking
Linux Foundation
voice: +1.203.699.6401
skype: tdelanerolle
email: tdelanero...@linuxfoundation.org

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Dave Barach (dbarach)
Ack... Thanks… Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] bug/issue notification

2017-10-17 Thread Justin Iurman
Dave,

Thanks for that, I didn't know install.sh was deprecated. What about build.sh? 
They both seem to be included for execution in the Vagrantfile.

Anyway, if I follow the link you gave me, where should I start? If I start 
with the "Build a VPP Package" section, isn't that a duplicate, since, I think, 
the build process (build.sh) is executed during "vagrant up"? Indeed, all deb 
packages are already built in /vpp/build-root/ when I log into the VM with 
"vagrant ssh". So my guess was to start directly with the "Install a VPP 
Package" section, which is actually what is executed by the now-deprecated 
install.sh.

What should I do?

Justin

- Mail original -
De: "Dave Wallace" 
À: "Justin Iurman" , "vpp-dev" 
Envoyé: Lundi 16 Octobre 2017 16:06:11
Objet: Re: [vpp-dev] bug/issue notification

Justin,

install.sh has not been used for many releases and should probably just 
be removed as it has not been maintained with the updating of the packaging.

In order to install VPP in the VM, just follow the recipe in
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images
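
Since the debs are already built under /vpp/build-root in your case, the 
install step from that page boils down to roughly this (the exact package set 
varies by release; apt-get -f pulls in missing dependencies such as 
python-cffi):

  cd /vpp/build-root
  sudo dpkg -i vpp-lib_*.deb vpp_*.deb vpp-plugins_*.deb
  sudo apt-get -f install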

Thanks,
-daw-

On 10/16/2017 07:56 AM, Justin Iurman wrote:
> Hey guys,
>
> Here are two issues I faced while installing/running VPP. It would be great 
> to fix them.
>
> 1) Install not executed (at least, it seems that it's the case...) - tested 
> with current version 18.01 (cloned today) but it's the same with previous 
> versions too.
>
> git clone https://gerrit.fd.io/r/vpp
> cd vpp/build-root/vagrant
> vagrant up
>
> **waiting during the installation of the VM**
> Note: install.sh does not seem to be executed during this process (can't see 
> related outputs).
>
> In the VM when logged in with vagrant ssh:
>
> vvp <-- not found
> cd /vpp/build-root/vagrant
> ./install.sh <- Permission denied
>
> First problem: I need to manually chmod +x install.sh (maybe that's the 
> reason why?).
>
> Then, execute it again. It finishes but tells me there is a dependency error 
> (second problem): package python-cffi is not installed. To manually fix it, I 
> do the following:
>
> sudo apt-get install python-cffi
> sudo apt-get -f install
>
> After that, install goes fine. Also, note that I've a plugin where a 
> reference to "vlibsocket/api.h" needs to be removed from plugin.c and 
> plugin_test.c in order to compile (removed/moved in this new version of 
> VPP?).
>
> 2) This bug seems to be gone since last week with vpp 18.01 (cloned today: 
> Monday 16 October 2017). But I'll mention it, just in case you didn't hear 
> about it. When running: "sudo vppctl -s /run/vpp/cli-vpp1.sock create 
> host-interface name vpp1out", an error was returned (sorry, I don't remember 
> exactly what was said). And this error was not in earlier versions (around 
> the 25 September 2017 I'd say).
>
> Thanks !
>
> Justin
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Trishan de Lanerolle via RT
Thanks Anton. Do we have any automated tests that could notify us of
similar ipv6 outages?
Trishan





-- 
Trishan R. de Lanerolle
Program Manager,  Networking
Linux Foundation
voice: +1.203.699.6401
skype: tdelanerolle
email: tdelanero...@linuxfoundation.org

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] SNAT hash tables

2017-10-17 Thread Ole Troan
> 
> The RELEASE.md says it is for 17.04. But in vppctl, 'show ver' shows I am 
> using v17.10-rc0~19-g58eb866
> 
> What is the latest version?

We have done quite a lot of performance work on the NAT plugin over the last 
few weeks.
By latest, I meant that you would be best off following the git repo directly.

Best regards,
Ole


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Anton Baranov via RT
Trishan:

Monitoring has been set up. We'll be notified if that happens again.

Thank you,



-- 
Anton Baranov
Systems and Network Administrator
The Linux Foundation
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Anton Baranov via RT
Trishan:

The issue should be fixed now. Could you please confirm?

Our cloud provider had an issue with ipv6 route advertising: they had to 
replace a core router on Monday and the new router was not sending ipv6 
router advertisements. 

Regards,



-- 
Anton Baranov
Systems and Network Administrator
The Linux Foundation
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] how to fix scan_device Segmentation fault issue?

2017-10-17 Thread wang.hui56
Hi all,

I am running the 1710 and master branches of vpp, and I get a segmentation 
fault in scan_device. Is this a known issue, and how can I avoid it?







(gdb) run -c /etc/vpp/startup.conf

Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib64/libthread_db.so.1".

vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins

load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)

load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))

load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)

load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)

load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)

load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)

load_one_plugin:114: Plugin disabled (default): ixge_plugin.so

load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)

load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))

load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))

load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)

load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/memif_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/nat_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so

/usr/bin/vpp[22917]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so




Program received signal SIGSEGV, Segmentation fault.

scan_device (arg=0x77bb1260 , dev_dir_name=0x7fffb597de98 
"/sys/bus/pci/devices/0000:00:02.0", ignored=<optimized out>)

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/linux/pci.c:603

603 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/linux/pci.c:
 No such file or directory.

Missing separate debuginfos, use: debuginfo-install 
vpp-17.10-rc2~8_g50328c9.x86_64

(gdb) bt

#0  scan_device (arg=0x77bb1260 , 
dev_dir_name=0x7fffb597de98 "/sys/bus/pci/devices/0000:00:02.0", 
ignored=<optimized out>)

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/linux/pci.c:603

#1  0x779a18a0 in foreach_directory_file 
(dir_name=dir_name@entry=0x779a2a4f "/sys/bus/pci/devices", 

f=f@entry=0x7795bff0 , arg=arg@entry=0x77bb1260 
, scan_dirs=scan_dirs@entry=0)

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/unix/util.c:87

#2  0x7795b99f in linux_pci_init (vm=0x77bb1260 )

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/linux/pci.c:648

#3  0x7795967d in vlib_call_init_exit_functions (vm=0x77bb1260 
, head=<optimized out>, call_once=call_once@entry=1)

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/init.c:57

#4  0x779596c3 in vlib_call_all_init_functions (vm=<optimized out>)

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/init.c:75

#5  0x779623b5 in vlib_main (vm=vm@entry=0x77bb1260 
, input=input@entry=0x7fffb604dfa0)

at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/main.c:1748

#6  0x77999413 in thread0 (arg=140737349620320) at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vlib/unix/main.c:567

#7  0x76777278 in clib_calljmp () at 
/home/wanghui/vpp_1710/vpp/extras/rpm/vpp-17.10.0/build-data/../src/vppinfra/longjmp.S:110

#8  0x7fffd2d0 in ?? ()

#9  0x7799a155 in vlib_unix_main