[vpp-dev] VPP 2101, we have to reduce the frame size to 128 for our use case

2021-08-26 Thread chetan bhasin
Hi,

We have a use case where we have to reduce the frame size to 128. Will
changing VLIB_FRAME_SIZE to 128 do the trick, or do we need to change
anything else? Please advise.
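
For reference, a minimal sketch of the change being discussed: the frame
size is a compile-time constant in src/vlib/node.h (256 by default; the
exact location may differ in 21.01):

/* src/vlib/node.h -- sketch only; default value and location may vary */
#define VLIB_FRAME_SIZE 128  /* reduced from the default 256 */

Being a compile-time constant, it presumably requires rebuilding the whole
tree (and any out-of-tree plugins) with the same value.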

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20035): https://lists.fd.io/g/vpp-dev/message/20035
Mute This Topic: https://lists.fd.io/mt/85179707/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Job Failures due to 'Java Connection Closed Exception'

2021-08-26 Thread Dave Wallace

Folks,

Starting around 1:15pm EDT / 5:15pm UTC, there has been a 20-30% job 
failure rate due to 'Java Connection Closed Exception', causing a 100% 
verification failure rate for VPP Gerrit changes.  I have been working 
with Vanessa in LF-IT and Mohammed at Vexxhost to resolve the issue.


Based on past causes of connection resets, all network paths between the 
Nomad cluster and Jenkins instance were tested for latency and packet 
loss without any issues being uncovered.  Jenkins was restarted which 
unfortunately did not resolve the issue.  Then the primary Nomad server 
which Jenkins is configured to connect to for spinning up executors was 
rebooted.  This too failed to resolve the issue.


Further investigation tonight with Mohammed's assistance (a huge THANK 
YOU to Mohammed for staying up late debugging this) seems to indicate 
that the Docker containers are dying prematurely. However, the Nomad 
logs are also being removed at the same time, so there is presently no 
means of verifying whether the containers are being terminated due to 
internal events.  The next step is to temporarily disable or reduce the 
frequency of Nomad garbage collection, archive the Nomad logs, and then 
collate them with the system logs to determine the sequence of events 
that causes the Docker containers to be terminated.
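
For those following along, the knobs involved are the gc settings in the
Nomad server stanza. A hypothetical sketch (option names from Nomad's
standard server configuration; the values are illustrative only, not what
will actually be deployed):

# Nomad server configuration (HCL) -- illustrative values only
server {
  enabled          = true
  job_gc_interval  = "24h"  # how often the job GC runs (default 5m)
  job_gc_threshold = "72h"  # minimum age of a terminal job before GC (default 4h)
}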


Thank you for your patience as the root cause of this outage is being 
investigated & fixed.

-daw-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20034): https://lists.fd.io/g/vpp-dev/message/20034
Mute This Topic: https://lists.fd.io/mt/85179512/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VPP 21.10: 4 weeks before RC1

2021-08-26 Thread Andrew Yourtchenko
Hi all,

It’s that time of the year again!

A kind reminder that we are just under 4 weeks from the 21.10 RC1 milestone and 
a stable/2110 branch pull - it will happen on 22 September at noon UTC,
according to the release plan at 
https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_21.10

Please plan accordingly.

--a /* your friendly 21.10 release manager */
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20033): https://lists.fd.io/g/vpp-dev/message/20033
Mute This Topic: https://lists.fd.io/mt/85168917/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP 2101 (pcap trace)

2021-08-26 Thread chetan bhasin
Hi Benoit,

I have tried those options, but it is not working; it is only capturing on
the second classify filter rule, the one based on dst IP address.

1) Tried with classify filter [only packets with dst IP 10.20.36.168 are
showing up in the pcap]

classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.36.168

classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.36.168

pcap trace rx tx max 100 filter file capture.pcap


vpp# show classify filter

Filter Used By Table(s)

-- 

packet tracer: first table none

pcap rx/tx/drop:   first table 1


vpp# show classify tables index 1 verbose

  TableIdx  Sessions   NextTbl  NextNode

         1         1        -1        -1

  Heap: base 0x7f5db406c000, size 128k, locked, unmap-on-destroy, name
'classify'

  page stats: page-size 4K, total 32, mapped 2, not-mapped 0,
unknown 30

numa 0: 2 pages, 8k bytes

  total: 127.95K, used: 1.31K, free: 126.64K, trimmable: 126.56K

  nbuckets 8, skip 1 match 2 flag 0 offset 0

  mask 

  linear-search buckets 0


[7]: heap offset 1216, elts 2, normal

0: [1216]: next_index 0 advance 0 opaque 0 action 0 metadata 0

k: 0a1424a8

hits 45, last_heard 0.00


1 active elements

1 free lists

0 linear-search buckets


[root@bfs-dl360g10-47-vm14 ~]# tcpdump -n -r /tmp/capture.pcap

reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)

01:00:33.671990 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9478, length 64

01:00:33.672834 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9478, length 64

01:00:33.672839 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9478, length 64

01:00:34.674316 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9479, length 64

01:00:34.675239 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9479, length 64

01:00:34.675239 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9479, length 64

01:00:35.676526 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9480, length 64

01:00:35.676565 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9480, length 64

01:00:35.676566 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 9480, length 64



2) Default behaviour without any classify filter:

vpp# pcap trace rx tx max 100 file capture.pcap

vpp# pcap trace off

Write 100 packets to /tmp/capture.pcap, and stop capture...


tcpdump -n -r /tmp/capture.pcap | grep ICMP

reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)

01:02:36.635239 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
26102, seq 11266, length 64

01:02:36.636018 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
26102, seq 11266, length 64

01:02:36.636018 IP 10.20.36.168 > 10.20.35.126: ICMP echo request, id
26102, seq 11266, length 64

01:02:36.636108 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 11266, length 64

01:02:36.636975 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 11266, length 64

01:02:36.636975 IP 10.20.35.126 > 10.20.36.168: ICMP echo reply, id 26102,
seq 11266, length 64


Regards,

Chetan



On Thu, Aug 26, 2021 at 6:00 PM chetan bhasin 
wrote:

> Hi Ben,
>
> Thanks for your response. Let me try this and get back to you.
>
> Thanks,
> Chetan
>
> On Thu, Aug 26, 2021 at 5:52 PM Benoit Ganne (bganne) 
> wrote:
>
>> > We want to capture all packets with src IP or dest IP 10.20.30.40. I
>> > have tried with a classify filter but with no success; it looks like I
>> > am missing something.
>> > Can anybody please suggest the commands?
>>
>> Something like this should do it:
>>
>> ~# vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src
>> 10.20.30.40
>> ~# vppctl classify filter pcap mask l3 ip4 dst match l3 ip4 dst
>> 10.20.30.40
>> ~# vppctl pcap trace rx tx max 1000 filter
>> 
>> ~# vppctl pcap trace rx tx off
>>
>> Best
>> ben
>>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20032): https://lists.fd.io/g/vpp-dev/message/20032
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Linux CP: crash in lcp_node.c

2021-08-26 Thread Pim van Pelt
Hoi Matt,

Thanks a lot for those tips - they helped me understand the problem better.
I'll answer your questions, but I think the issue is moot: I was using the
standard 16K buffers, and they run out almost immediately with 8 threads.
Raising to 128K brings total consumption to the low-30K mark, where it
stays. No more issues seen after that.
- Issue occurs with IPv4-only, IPv6-only, and mixed traffic.
- Yes, other packets are being forwarded. The problem is present with no
traffic as well.
- No other control plane activity (no FRR or Bird running), just connected
routes (the 10 I mentioned above: 5 on a plain phy and 5 on a BondEthernet)
- The problem greatly accelerated when sending traffic to BondEthernet0's
LIP; I was doing 20Gbit of traffic on a 2x10G LAG and the buffer use shot
up immediately to 16K and flatlined
Pool Name       Index  NUMA  Size  Data Size  Total  Avail  Cached   Used
default-numa-0      0     0  2496       2048  16800      0     163  16637
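
For anyone reproducing this: raising the pool from the default 16K to 128K
is done in startup.conf. A minimal sketch, assuming the standard buffers
stanza (the exact count is a sizing choice):

buffers {
  buffers-per-numa 128000  # default is 16384
}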

Roger on the suggestion for the patch - I've run an 8-threaded VPP with
128K buffers through about 4 seconds of fping / ARP cache flushes while
iperf'ing against it, and it's still fine:
Pool Name       Index  NUMA  Size  Data Size   Total  Avail  Cached   Used
default-numa-0      0     0  2496       2048  128520  97603    1422  29495

Patch sent in https://gerrit.fd.io/r/c/vpp/+/33606

groet,
Pim

On Wed, Aug 25, 2021 at 5:44 PM Matthew Smith  wrote:

> Hi Pim,
>
> Responses are inline...
>
> On Tue, Aug 24, 2021 at 4:47 AM Pim van Pelt  wrote:
>
>> Hoi,
>>
>> I've noticed that when a linuxcp enabled VPP 21.06 with multiple threads
>> receives many ARP requests, eventually it crashes in lcp_arp_phy_node in
>> lcp_node.c:675 and :775 because we do a vlib_buffer_copy() which returns
>> NULL, after which we try to dereference the result. How to repro:
>> 1) create a few interfaces/subints and give them IP addresses in Linux
>> and VPP. I made 5 phy subints and 5 subints on a bondethernet.
>> 2) rapidly fping the Linux CP and at the same time continuously flush the
>> neighbor cache on the Linux namespace:
>> On the vpp machine in 'dataplane' namespace:
>>   while :; do ip nei flush all; done
>> On a Linux machine connected to VPP:
>>   while :; do fping -c 1 -B 1 -p 10 10.1.1.2 10.1.2.2 10.1.3.2
>> 10.1.4.2 10.1.5.2 10.0.1.2 10.0.2.2 10.0.3.2 10.0.4.2 10.0.5.2
>> 2001:db8:1:1::2 2001:db8:1:2::2 2001:db8:1:3::2 2001:db8:1:4::2
>> 2001:db8:1:5::2 2001:db8:0:1::2 2001:db8:0:2::2 2001:db8:0:3::2
>> 2001:db8:0:4::2 2001:db8:0:5::2; done
>>
>> VPP will now be seeing lots of ARP traffic to and from the host. After a
>> while, c0 or c1 from lcp_node.c:675 and lcp_node.c:775 will be NULL and
>> cause a crash.
>> I temporarily worked around this by simply adding:
>>
>> @@ -675,6 +675,10 @@ VLIB_NODE_FN (lcp_arp_phy_node)
>>    c0 = vlib_buffer_copy (vm, b0);
>>    vlib_buffer_advance (b0, len0);
>>
>> +  // pim(2021-08-24) -- address SIGSEGV when copy returns NULL
>> +  if (!c0)
>> +    continue;
>> +
>>    /* Send to the host */
>>    vnet_buffer (c0)->sw_if_index[VLIB_TX] =
>>      lip0->lip_host_sw_if_index;
>>
>> but I'm not very comfortable in this part of VPP, and I'm sure there's a
>> better way to catch the buffer copy failing?
>>
>
> No, checking whether the return value is null is the correct way to detect
> failure.
>
>
>
>> I haven't quite understood this code yet, but shouldn't we free c0 and c1
>> in these functions?
>>
>
> No, c0 and c1 are enqueued to another node (interface-output). The buffers
> are freed after being transmitted or dropped by subsequent nodes. Freeing
> them in this node while also enqueuing them would result in problems.
>
>
>
>> It seems that when I'm doing my rapid ping/arp/flush exercise above, VPP
>> is slowly consuming more memory (as seen by show memory main-heap; all 4
>> threads are growing monotonically by a few hundred kB per minute of runtime).
>>
>
> I made a quick attempt to reproduce the issue and was unsuccessful. Though
> I did not use a bond interface or subinterfaces, just physical interfaces.
>
> How many buffers are being allocated (vppctl show buffers)? Does the issue
> occur if you only send IPv4 packets instead of both IPv4 and IPv6? Are
> other packets besides the ICMP and ARP being forwarded while you're running
> this test? Is there any other control plane activity occurring during the
> test (E.g. BGP adding routes)?
>
>
>
>> If somebody could help me take a look, I'd appreciate it.
>>
>
>  It would be better to make your patch like this:
>
>   if (c0)
>     {
>       /* Send to the host */
>       vnet_buffer (c0)->sw_if_index[VLIB_TX] =
>         lip0->lip_host_sw_if_index;
>       reply_copies[n_copies++] = vlib_get_buffer_index (vm, c0);
>     }

Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread Damjan Marion via lists.fd.io
But it is good news for pretty much everybody else involved :)
Simply too old, too many missing dependencies, etc.

— 
Damjan

> On 26.08.2021., at 14:29, jiangxiaom...@outlook.com wrote:
> 
> it's not good news for me 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20030): https://lists.fd.io/g/vpp-dev/message/20030
Mute This Topic: https://lists.fd.io/mt/85151908/21656
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP 2101 (pcap trace)

2021-08-26 Thread chetan bhasin
Hi Ben,

Thanks for your response. Let me try this and get back to you.

Thanks,
Chetan

On Thu, Aug 26, 2021 at 5:52 PM Benoit Ganne (bganne) 
wrote:

> > We want to capture all packets with src IP or dest IP 10.20.30.40. I
> > have tried with a classify filter but with no success; it looks like I
> > am missing something.
> > Can anybody please suggest the commands?
>
> Something like this should do it:
>
> ~# vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.30.40
> ~# vppctl classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.30.40
> ~# vppctl pcap trace rx tx max 1000 filter
> 
> ~# vppctl pcap trace rx tx off
>
> Best
> ben
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20029): https://lists.fd.io/g/vpp-dev/message/20029
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread jiangxiaoming
it's not good news for me

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20028): https://lists.fd.io/g/vpp-dev/message/20028
Mute This Topic: https://lists.fd.io/mt/85151908/21656
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP 2101 (pcap trace)

2021-08-26 Thread Benoit Ganne (bganne) via lists.fd.io
> We want to capture all packets with src IP or dest IP 10.20.30.40. I
> have tried with a classify filter but with no success; it looks like I am
> missing something.
> Can anybody please suggest the commands?

Something like this should do it:

~# vppctl classify filter pcap mask l3 ip4 src match l3 ip4 src 10.20.30.40
~# vppctl classify filter pcap mask l3 ip4 dst match l3 ip4 dst 10.20.30.40
~# vppctl pcap trace rx tx max 1000 filter

~# vppctl pcap trace rx tx off
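
To confirm both filter sessions are installed before starting the capture,
the classifier state can be inspected with the same CLI (the commands shown
elsewhere in this thread):

~# vppctl show classify filter
~# vppctl show classify tables verbose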

Best
ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20027): https://lists.fd.io/g/vpp-dev/message/20027
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread Damjan Marion via lists.fd.io
We don’t support CentOS 7 anymore…

— 
Damjan

> On 26.08.2021., at 14:01, jiangxiaom...@outlook.com wrote:
> 
> I build VPP on CentOS 7 with devtoolset-9. I see that the use of devtoolset-9 
> was removed in the commit below; what was the reason for it? 
> commit a5167edc66c639e139ffb5de4336c54bb3d8a871
> Author: Damjan Marion 
> Date:   Fri Jul 2 16:04:26 2021 +0200
>  
> build: remove unused files and sections
> 
> Type: make
> Change-Id: Ia1d8c53c5fb02f7e5c86efab6e6ccd0fdb16bc96
> Signed-off-by: Damjan Marion 
>  
> diff --git a/build-data/packages/libmemif.mk b/build-data/packages/libmemif.mk
> index acc0d6425..a4676af45 100644
> --- a/build-data/packages/libmemif.mk
> +++ b/build-data/packages/libmemif.mk
> @@ -26,11 +26,6 @@ libmemif_cmake_args += 
> -DCMAKE_C_FLAGS="$($(TAG)_TAG_CFLAGS)"
>  libmemif_cmake_args += -DCMAKE_SHARED_LINKER_FLAGS="$($(TAG)_TAG_LDFLAGS)"
>  libmemif_cmake_args += 
> -DCMAKE_PREFIX_PATH:PATH="$(PACKAGE_INSTALL_DIR)/../vpp"
>  
> -# Use devtoolset on centos 7
> -ifneq ($(wildcard /opt/rh/devtoolset-9/enable),)
> -libmemif_cmake_args += 
> -DCMAKE_PROGRAM_PATH:PATH="/opt/rh/devtoolset-9/root/bin"
> -endif
> -
>  libmemif_configure = \
>cd $(PACKAGE_BUILD_DIR) && \
>$(CMAKE) -G Ninja $(libmemif_cmake_args) $(call 
> find_source_fn,$(PACKAGE_SOURCE))$(PACKAGE_SUBDIR)
> diff --git a/build-data/packages/sample-plugin.mk 
> b/build-data/packages/sample-plugin.mk
> index 34188f9e7..546164c0d 100644
> --- a/build-data/packages/sample-plugin.mk
> +++ b/build-data/packages/sample-plugin.mk
> @@ -30,11 +30,6 @@ sample-plugin_cmake_args += 
> -DCMAKE_C_FLAGS="$($(TAG)_TAG_CFLAGS)"
>  sample-plugin_cmake_args += 
> -DCMAKE_SHARED_LINKER_FLAGS="$($(TAG)_TAG_LDFLAGS)"
>  sample-plugin_cmake_args += 
> -DCMAKE_PREFIX_PATH:PATH="$(PACKAGE_INSTALL_DIR)/../vpp"
>  
> -# Use devtoolset on centos 7
> -ifneq ($(wildcard /opt/rh/devtoolset-9/enable),)
> -sample-plugin_cmake_args += 
> -DCMAKE_PROGRAM_PATH:PATH="/opt/rh/devtoolset-9/root/bin"
> -endif
> -
>  sample-plugin_configure = \
>cd $(PACKAGE_BUILD_DIR) && \
>$(CMAKE) -G Ninja $(sample-plugin_cmake_args) \
> diff --git a/build-data/packages/vpp.mk b/build-data/packages/vpp.mk
> index 7db450e05..ad1d1fc9a 100644
> --- a/build-data/packages/vpp.mk
> +++ b/build-data/packages/vpp.mk
> @@ -30,16 +30,6 @@ vpp_cmake_args += 
> -DCMAKE_PREFIX_PATH:PATH="$(vpp_cmake_prefix_path)"
>  ifeq ("$(V)","1")


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20026): https://lists.fd.io/g/vpp-dev/message/20026
Mute This Topic: https://lists.fd.io/mt/85151908/21656
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread jiangxiaoming
I build VPP on CentOS 7 with devtoolset-9. I see that the use of devtoolset-9 was 
removed in the commit below; what was the reason for it?

> 
> commit a5167edc66c639e139ffb5de4336c54bb3d8a871
> Author: Damjan Marion 
> Date:   Fri Jul 2 16:04:26 2021 +0200
> 
> build: remove unused files and sections
> 
> Type: make
> Change-Id: Ia1d8c53c5fb02f7e5c86efab6e6ccd0fdb16bc96
> Signed-off-by: Damjan Marion 
> 
> diff --git a/build-data/packages/libmemif.mk
> b/build-data/packages/libmemif.mk
> index acc0d6425..a4676af45 100644
> --- a/build-data/packages/libmemif.mk
> +++ b/build-data/packages/libmemif.mk
> @@ -26,11 +26,6 @@ libmemif_cmake_args +=
> -DCMAKE_C_FLAGS="$($(TAG)_TAG_CFLAGS)"
> libmemif_cmake_args += -DCMAKE_SHARED_LINKER_FLAGS="$($(TAG)_TAG_LDFLAGS)"
> 
> libmemif_cmake_args +=
> -DCMAKE_PREFIX_PATH:PATH="$(PACKAGE_INSTALL_DIR)/../vpp"
> 
> -# Use devtoolset on centos 7
> -ifneq ($(wildcard /opt/rh/devtoolset-9/enable),)
> -libmemif_cmake_args +=
> -DCMAKE_PROGRAM_PATH:PATH="/opt/rh/devtoolset-9/root/bin"
> -endif
> -
> libmemif_configure = \
> cd $(PACKAGE_BUILD_DIR) && \
> $(CMAKE) -G Ninja $(libmemif_cmake_args) $(call
> find_source_fn,$(PACKAGE_SOURCE))$(PACKAGE_SUBDIR)
> diff --git a/build-data/packages/sample-plugin.mk
> b/build-data/packages/sample-plugin.mk
> index 34188f9e7..546164c0d 100644
> --- a/build-data/packages/sample-plugin.mk
> +++ b/build-data/packages/sample-plugin.mk
> @@ -30,11 +30,6 @@ sample-plugin_cmake_args +=
> -DCMAKE_C_FLAGS="$($(TAG)_TAG_CFLAGS)"
> sample-plugin_cmake_args +=
> -DCMAKE_SHARED_LINKER_FLAGS="$($(TAG)_TAG_LDFLAGS)"
> sample-plugin_cmake_args +=
> -DCMAKE_PREFIX_PATH:PATH="$(PACKAGE_INSTALL_DIR)/../vpp"
> 
> -# Use devtoolset on centos 7
> -ifneq ($(wildcard /opt/rh/devtoolset-9/enable),)
> -sample-plugin_cmake_args +=
> -DCMAKE_PROGRAM_PATH:PATH="/opt/rh/devtoolset-9/root/bin"
> -endif
> -
> sample-plugin_configure = \
> cd $(PACKAGE_BUILD_DIR) && \
> $(CMAKE) -G Ninja $(sample-plugin_cmake_args) \
> diff --git a/build-data/packages/vpp.mk b/build-data/packages/vpp.mk
> index 7db450e05..ad1d1fc9a 100644
> --- a/build-data/packages/vpp.mk
> +++ b/build-data/packages/vpp.mk
> @@ -30,16 +30,6 @@ vpp_cmake_args +=
> -DCMAKE_PREFIX_PATH:PATH="$(vpp_cmake_prefix_path)"
> ifeq ("$(V)","1")

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20025): https://lists.fd.io/g/vpp-dev/message/20025
Mute This Topic: https://lists.fd.io/mt/85151908/21656
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VPP 2101 (pcap trace)

2021-08-26 Thread chetan bhasin
Hi,

We want to capture all packets with src IP or dest IP 10.20.30.40. I
have tried with a classify filter but with no success; it looks like I am
missing something.

Can anybody please suggest the commands?

Thanks,
Chetan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20024): https://lists.fd.io/g/vpp-dev/message/20024
Mute This Topic: https://lists.fd.io/mt/85158833/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread Damjan Marion via lists.fd.io
Your C compiler is too old… You should use gcc 8+ or clang 7+
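
For CentOS 7 users, a possible workaround sketch (assumes the devtoolset-9
SCL is installed; now that the devtoolset hooks are gone from the build
files, this may or may not be sufficient):

cc --version                   # stock CentOS 7 reports gcc 4.8.5
scl enable devtoolset-9 bash   # open a shell with gcc 9 on PATH
gcc --version                  # should now report 9.x
make rebuild                   # re-run the VPP build from this shell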

— 
Damjan

> On 26.08.2021., at 03:31, jiangxiaom...@outlook.com wrote:
> 
> Hi all,
>  VPP master branch build failed; does anyone have the same issue?
> 
> @@@ Configuring vpp in 
> /home/dev/code/vpp/build-root/build-vpp_debug-native/vpp 
> -- The C compiler identification is GNU 4.8.5
> -- Check for working C compiler: /usr/lib64/ccache/cc
> -- Check for working C compiler: /usr/lib64/ccache/cc - works
> -- Detecting C compiler ABI info
> -- Detecting C compiler ABI info - done
> -- Detecting C compile features
> -- Detecting C compile features - done
> -- Performing Test compiler_flag_march_haswell
> -- Performing Test compiler_flag_march_haswell - Failed
> -- Performing Test compiler_flag_mtune_haswell
> -- Performing Test compiler_flag_mtune_haswell - Failed
> -- Performing Test compiler_flag_march_tremont
> -- Performing Test compiler_flag_march_tremont - Failed
> -- Performing Test compiler_flag_mtune_tremont
> -- Performing Test compiler_flag_mtune_tremont - Failed
> -- Performing Test compiler_flag_march_skylake_avx512
> -- Performing Test compiler_flag_march_skylake_avx512 - Failed
> -- Performing Test compiler_flag_mtune_skylake_avx512
> -- Performing Test compiler_flag_mtune_skylake_avx512 - Failed
> -- Performing Test compiler_flag_mprefer_vector_width_256
> -- Performing Test compiler_flag_mprefer_vector_width_256 - Failed
> -- Performing Test compiler_flag_march_icelake_client
> -- Performing Test compiler_flag_march_icelake_client - Failed
> -- Performing Test compiler_flag_mtune_icelake_client
> -- Performing Test compiler_flag_mtune_icelake_client - Failed
> -- Performing Test compiler_flag_mprefer_vector_width_512
> -- Performing Test compiler_flag_mprefer_vector_width_512 - Failed
> -- Looking for ccache
> -- Looking for ccache - found
> -- Performing Test compiler_flag_no_address_of_packed_member
> -- Performing Test compiler_flag_no_address_of_packed_member - Success
> -- Looking for pthread.h
> -- Looking for pthread.h - found
> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
> -- Check if compiler accepts -pthread
> -- Check if compiler accepts -pthread - yes
> -- Found Threads: TRUE  
> -- Performing Test HAVE_FCNTL64
> -- Performing Test HAVE_FCNTL64 - Failed
> -- Found OpenSSL: /usr/lib64/libcrypto.so (found version "1.1.1i")  
> -- The ASM compiler identification is GNU
> -- Found assembler: /usr/lib64/ccache/cc
> -- Looking for libuuid
> -- Found uuid in /usr/include
> -- libbpf headers not found - af_xdp plugin disabled
> -- Intel IPSecMB found: 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- dpdk plugin needs libdpdk.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libdpdk.a
> -- Found DPDK 21.5.0 in 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- dpdk plugin needs numa library - found at /usr/lib64/libnuma.so
> -- linux-cp plugin needs libnl-3.so library - found at /usr/lib64/libnl-3.so
> -- linux-cp plugin needs libnl-route-3.so.200 library - found at 
> /usr/lib64/libnl-route-3.so.200
> -- Found quicly 0.1.3-vpp in 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- rdma plugin needs libibverbs.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libibverbs.a
> -- rdma plugin needs librdma_util.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/librdma_util.a
> -- rdma plugin needs libmlx5.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libmlx5.a
> -- Performing Test IBVERBS_COMPILES_CHECK
> -- Performing Test IBVERBS_COMPILES_CHECK - Success
> -- -- libdaq headers not found - snort3 DAQ disabled
> -- -- libsrtp2.a library not found - srtp plugin disabled
> -- tlsmbedtls plugin needs mbedtls library - found at /usr/lib64/libmbedtls.so
> -- tlsmbedtls plugin needs mbedx509 library - found at 
> /usr/lib64/libmbedx509.so
> -- tlsmbedtls plugin needs mbedcrypto library - found at 
> /usr/lib64/libmbedcrypto.so
> -- Looking for SSL_set_async_callback
> -- Looking for SSL_set_async_callback - not found
> -- Found picotls in 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include and 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libpicotls-core.a
> -- subunit library not found - vapi tests disabled
> -- Found Python3: /usr/bin/python3.6 (found version "3.6.8") found 
> components: Interpreter 
> -- Configuration:
> VPP version : 21.10-rc0~274-gee04de5
> VPP library version : 21.10
> GIT toplevel dir: /home/dev/code/vpp
> Build type  : debug
> C flags : 
> Linker flags (apps) : 
> Linker flags (libs) : 
> Host processor  : x86_64
> Target processor: x86_64
> Prefix path : /opt/vpp/external/x86_64 
> /home/dev/code/vpp/b