+csit-dev

-Maciek

> On 15 May 2017, at 18:09, Neale Ranns (nranns) <nra...@cisco.com> wrote:
> 
> 
> I hope so. We changed the version from 17.05-vpp1 to 17.05-vpp2 today. I 
> suspect your build started while we were in flux. If the next one fails, 
> we’ll look deeper. 
> 
> Most Jobs are passing again. Some intermittent issues with VXLAN tests, which 
> may be infra related.
> 
> We continue to monitor the situation.
> 
> /neale
> 
> -----Original Message-----
> From: <vpp-dev-boun...@lists.fd.io> on behalf of "Kinsella, Ray" 
> <ray.kinse...@intel.com>
> Date: Monday, 15 May 2017 at 17:10
> To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] CSIT borked on master
> 
> 
>    Does it explain why I am getting a build failure on CentOS?
> 
>    
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/5402/console.log.gz
> 
>    + /usr/lib/rpm/brp-python-bytecompile /usr/bin/python 1
>    + /usr/lib/rpm/redhat/brp-python-hardlink
>    + /usr/lib/rpm/redhat/brp-java-repack-jars
>    Processing files: vpp-dpdk-devel-17.05-vpp1.x86_64
>    Provides: vpp-dpdk-devel = 17.05-vpp1 vpp-dpdk-devel(x86-64) = 17.05-vpp1
>    Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 
>    rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 
>    rpmlib(PayloadFilesHavePrefix) <= 4.0-1
>    Requires: /bin/bash /bin/sh /usr/bin/env ld-linux-x86-64.so.2()(64bit) 
>    ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) 
>    libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) 
>    libc.so.6(GLIBC_2.3)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) 
>    libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.4)(64bit) 
>    libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_2.8)(64bit) 
>    libcrypto.so.10()(64bit) libcrypto.so.10(libcrypto.so.10)(64bit) 
>    libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libm.so.6()(64bit) 
>    libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) 
>    libpthread.so.0(GLIBC_2.12)(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) 
>    libpthread.so.0(GLIBC_2.3.2)(64bit) libpthread.so.0(GLIBC_2.3.4)(64bit) 
>    librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) rtld(GNU_HASH)
>    Checking for unpackaged file(s): /usr/lib/rpm/check-files 
>    
> /w/workspace/vpp-verify-master-centos7/dpdk/rpm/BUILDROOT/vpp-dpdk-17.05-vpp1.x86_64
>    Wrote: 
>    
> /w/workspace/vpp-verify-master-centos7/dpdk/rpm/RPMS/x86_64/vpp-dpdk-devel-17.05-vpp1.x86_64.rpm
>    Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.bH2a6n
>    + umask 022
>    + cd /w/workspace/vpp-verify-master-centos7/dpdk/rpm/BUILD
>    + /usr/bin/rm -rf 
>    
> /w/workspace/vpp-verify-master-centos7/dpdk/rpm/BUILDROOT/vpp-dpdk-17.05-vpp1.x86_64
>    + exit 0
>    mv rpm/RPMS/x86_64/*.rpm .
>    git clean -fdx rpm
>    Removing rpm/BUILD/
>    Removing rpm/BUILDROOT/
>    Removing rpm/RPMS/
>    Removing rpm/SOURCES/
>    Removing rpm/SPECS/
>    Removing rpm/SRPMS/
>    Removing rpm/tmp/
>    make[2]: Leaving directory `/w/workspace/vpp-verify-master-centos7/dpdk'
>    sudo rpm -Uih vpp-dpdk-devel-17.05-vpp1.x86_64.rpm
>    ########################################
>       package vpp-dpdk-devel-17.05-vpp2.x86_64 (which is newer than 
>    vpp-dpdk-devel-17.05-vpp1.x86_64) is already installed
>    make[1]: *** [install-rpm] Error 2
>    make[1]: Leaving directory `/w/workspace/vpp-verify-master-centos7/dpdk'
>    make: *** [dpdk-install-dev] Error 2
>    Build step 'Execute shell' marked build as failure
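The refusal above is ordinary RPM version ordering: the release string changed from 17.05-vpp1 to 17.05-vpp2 mid-run, and rpm will not upgrade to a package whose release sorts lower than the one already installed. A rough sketch of rpmvercmp-style segment comparison (a hypothetical simplification; real librpm also handles epochs, tildes, and several other cases):

```python
import re

def rpm_vercmp(a, b):
    # Simplified rpmvercmp: split each string into runs of digits or
    # letters, then compare the runs pairwise (numeric runs as integers).
    segs = lambda s: re.findall(r"\d+|[a-zA-Z]+", s)
    sa, sb = segs(a), segs(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)            # numeric segments compare as ints
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # a numeric run outranks an alpha run
        if x != y:
            return 1 if x > y else -1
    # All shared segments equal: the string with more segments is newer.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

print(rpm_vercmp("vpp2", "vpp1"))  # 1: vpp2 is newer, so rpm -U of vpp1 fails
```

Since 17.05-vpp2 is already installed and sorts higher, installing the freshly built 17.05-vpp1 rpm is rejected unless a downgrade is explicitly forced.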
> 
> 
>    Ray K
> 
>    On 15/05/2017 11:54, Damjan Marion (damarion) wrote:
>> 
>> This issue is caused by bug in DPDK 17.05 caused by following commit:
>> 
>> http://dpdk.org/browse/dpdk/commit/?id=ee1843b
>> 
>> It happens only with old QEMU emulation (I repro it with “pc-1.0”) which
>> VIRL uses.
>> 
>> Fix (revert) is in gerrit:
>> 
>> https://gerrit.fd.io/r/#/c/6690/
>> 
>> Regards,
>> 
>> Damjan
>> 
>> 
>>> On 13 May 2017, at 20:34, Neale Ranns (nranns) <nra...@cisco.com> wrote:
>>> 
>>> 
>>> Hi Chris,
>>> 
>>> Yes, every CSIT job on master is borked.
>>> I think I’ve narrowed this down to all VAT sw_interface_dump calls
>>> returning bogus/garbage MAC addresses. No idea why; can’t repro yet. I’ve a
>>> speculative DPDK 17.05 bump backout job in the queue, for purposes of
>>> elimination.
>>> 
>>> Regards,
>>> /neale
>>> 
>>> 
>>> 
>>>> *From: *"Luke, Chris" <chris_l...@comcast.com>
>>>> *Date: *Saturday, 13 May 2017 at 19:04
>>>> *To: *"Neale Ranns (nranns)" <nra...@cisco.com>, "yug...@telincn.com"
>>>> <yug...@telincn.com>, vpp-dev <vpp-dev@lists.fd.io>
>>>> *Subject: *RE: [vpp-dev] Segmentation fault in recursively looking up
>>>> fib entry.
>>>> 
>>>> CSIT seems to be barfing on every job at the moment :(
>>>> 
>>>> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io]
>>>> *On Behalf Of *Neale Ranns (nranns)
>>>> *Sent:* Saturday, May 13, 2017 11:20
>>>> *To:* yug...@telincn.com; vpp-dev <vpp-dev@lists.fd.io>
>>>> *Subject:* Re: [vpp-dev] Segmentation fault in recursively looking up
>>>> fib entry.
>>>> 
>>>> 
>>>> https://gerrit.fd.io/r/#/c/6674/
>>>> 
>>>> /neale
>>>> 
>>>>> *From: *"yug...@telincn.com" <yug...@telincn.com>
>>>>> *Date: *Saturday, 13 May 2017 at 14:24
>>>>> *To: *"Neale Ranns (nranns)" <nra...@cisco.com>, vpp-dev
>>>>> <vpp-dev@lists.fd.io>
>>>>> *Subject: *Re: Re: [vpp-dev] Segmentation fault in recursively
>>>>> looking up fib entry.
>>>>> 
>>>>> Hi neale,
>>>>> Could you leave me a msg then?
>>>>> 
>>>>> Thanks,
>>>>> Ewan
>>>>> 
>>>>> ------------------------------------------------------------------------
>>>>> yug...@telincn.com
>>>>>> 
>>>>>> *From:* Neale Ranns (nranns) <nra...@cisco.com>
>>>>>> *Date:* 2017-05-13 20:33
>>>>>> *To:* yug...@telincn.com; vpp-dev <vpp-dev@lists.fd.io>
>>>>>> *Subject:* Re: [vpp-dev] Segmentation fault in recursively looking
>>>>>> up fib entry.
>>>>>> Hi Ewan,
>>>>>> 
>>>>>> That’s a bug. I’ll fix it ASAP.
>>>>>> 
>>>>>> Thanks,
>>>>>> neale
>>>>>> 
>>>>>>> *From: *<vpp-dev-boun...@lists.fd.io> on behalf of
>>>>>>> "yug...@telincn.com" <yug...@telincn.com>
>>>>>>> *Date: *Saturday, 13 May 2017 at 03:24
>>>>>>> *To: *vpp-dev <vpp-dev@lists.fd.io>
>>>>>>> *Subject: *[vpp-dev] Segmentation fault in recursively looking up
>>>>>>> fib entry.
>>>>>>> 
>>>>>>> Hi all,
>>>>>>> Below are my main configs; everything else is default.
>>>>>>> When I enter the command "vppctl ip route 0.0.0.0/0 via 10.10.40.1"
>>>>>>> to add a default route, VPP crashes: fib_entry_get_resolving_interface
>>>>>>> calls itself recursively until the process dies with SIGSEGV.
>>>>>>> Is there something wrong?
>>>>>>> 
>>>>>>> config  info
>>>>>>> 
>>>>>>> root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show 
>>>>>>> int addr
>>>>>>> GigabitEthernet2/6/0 (up):
>>>>>>>  192.168.60.1/24
>>>>>>> GigabitEthernet2/7/0 (up):
>>>>>>>  10.10.55.51/24
>>>>>>> host-vGE2_6_0 (up):
>>>>>>> host-vGE2_7_0 (up):
>>>>>>> local0 (dn):
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show 
>>>>>>> ip fib
>>>>>>> ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
>>>>>>> 0.0.0.0/0
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:0 buckets:1 uRPF:0 to:[142:12002]]
>>>>>>>    [0] [@0]: dpo-drop ip4
>>>>>>> 0.0.0.0/32
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:1 buckets:1 uRPF:1 to:[0:0]]
>>>>>>>    [0] [@0]: dpo-drop ip4
>>>>>>> 10.10.55.0/24
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:10 buckets:1 uRPF:9 to:[0:0]]
>>>>>>>    [0] [@4]: ipv4-glean: GigabitEthernet2/7/0
>>>>>>> 10.10.55.51/32
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:11 buckets:1 uRPF:10 to:[0:0]]
>>>>>>>    [0] [@2]: dpo-receive: 10.10.55.51 on GigabitEthernet2/7/0
>>>>>>> 192.168.60.0/24
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:7 to:[0:0]]
>>>>>>>    [0] [@4]: ipv4-glean: GigabitEthernet2/6/0
>>>>>>> 192.168.60.1/32
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:8 to:[60:3600]]
>>>>>>>    [0] [@2]: dpo-receive: 192.168.60.1 on GigabitEthernet2/6/0
>>>>>>> 192.168.60.30/32
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:12 buckets:1 uRPF:11 to:[60:3600]]
>>>>>>>    [0] [@5]: ipv4 via 192.168.60.30 GigabitEthernet2/6/0: 
>>>>>>> f44d3016eac1000c2904f74e0800
>>>>>>> 224.0.0.0/4
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:3 buckets:1 uRPF:3 to:[0:0]]
>>>>>>>    [0] [@0]: dpo-drop ip4
>>>>>>> 240.0.0.0/4
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:2 buckets:1 uRPF:2 to:[0:0]]
>>>>>>>    [0] [@0]: dpo-drop ip4
>>>>>>> 255.255.255.255/32
>>>>>>>  unicast-ip4-chain
>>>>>>>  [@0]: dpo-load-balance: [index:4 buckets:1 uRPF:4 to:[0:0]]
>>>>>>>    [0] [@0]: dpo-drop ip4
>>>>>>> 
>>>>>>> root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root#
>>>>>>> 
>>>>>>> 
>>>>>>> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
>>>>>>> fib_entry_get_resolving_interface (entry_index=12) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1325
>>>>>>> 1325     fib_entry = fib_entry_get(entry_index);
>>>>>>> (gdb)
>>>>>>> 
>>>>>>> #97831 0x00007fe5435a36a8 in fib_path_get_resolving_interface 
>>>>>>> (path_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path.c:1637
>>>>>>> #97832 0x00007fe5435a06f3 in fib_path_list_get_resolving_interface 
>>>>>>> (path_list_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path_list.c:617
>>>>>>> #97833 0x00007fe54359a7b5 in fib_entry_get_resolving_interface 
>>>>>>> (entry_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1327
>>>>>>> #97834 0x00007fe5435a36a8 in fib_path_get_resolving_interface 
>>>>>>> (path_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path.c:1637
>>>>>>> #97835 0x00007fe5435a06f3 in fib_path_list_get_resolving_interface 
>>>>>>> (path_list_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path_list.c:617
>>>>>>> #97836 0x00007fe54359a7b5 in fib_entry_get_resolving_interface 
>>>>>>> (entry_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1327
>>>>>>> #97837 0x00007fe5435a36a8 in fib_path_get_resolving_interface 
>>>>>>> (path_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path.c:1637
>>>>>>> #97838 0x00007fe5435a06f3 in fib_path_list_get_resolving_interface 
>>>>>>> (path_list_index=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path_list.c:617
>>>>>>> #97839 0x00007fe54359a7b5 in fib_entry_get_resolving_interface 
>>>>>>> (entry_index=entry_index@entry=0) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1327
>>>>>>> #97840 0x00007fe5432687e1 in arp_input (vm=0x7fe5448882a0 
>>>>>>> <vlib_global_main>, node=0x7fe5021e8580, frame=0x7fe504cc4c00) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/ethernet/arp.c:1381
>>>>>>> 
>>>>>>> #97841 0x00007fe5446350d9 in dispatch_node (vm=0x7fe5448882a0 
>>>>>>> <vlib_global_main>, node=0x7fe5021e8580, type=<optimized out>, 
>>>>>>> dispatch_state=VLIB_NODE_STATE_POLLING, frame=<optimized out>,
>>>>>>>    last_time_stamp=498535532086144) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:998
>>>>>>> #97842 0x00007fe5446353cd in dispatch_pending_node 
>>>>>>> (vm=vm@entry=0x7fe5448882a0 <vlib_global_main>, p=0x7fe504ce589c, 
>>>>>>> last_time_stamp=<optimized out>)
>>>>>>>    at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1144
>>>>>>> #97843 0x00007fe544635e3d in vlib_main_or_worker_loop (is_main=1, 
>>>>>>> vm=0x7fe5448882a0 <vlib_global_main>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1588
>>>>>>> #97844 vlib_main_loop (vm=0x7fe5448882a0 <vlib_global_main>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1608
>>>>>>> #97845 vlib_main (vm=vm@entry=0x7fe5448882a0 <vlib_global_main>, 
>>>>>>> input=input@entry=0x7fe501bd9fa0) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1736
>>>>>>> #97846 0x00007fe54466eee3 in thread0 (arg=140622674035360) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/unix/main.c:507
>>>>>>> #97847 0x00007fe542b41c60 in clib_calljmp () at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vppinfra/longjmp.S:110
>>>>>>> #97848 0x00007ffd39782140 in ?? ()
>>>>>>> #97849 0x00007fe54466f8dd in vlib_unix_main (argc=<optimized out>, 
>>>>>>> argv=<optimized out>) at 
>>>>>>> /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/unix/main.c:604
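The backtrace above repeats the same three frames: fib_entry_get_resolving_interface → fib_path_get_resolving_interface → fib_path_list_get_resolving_interface → back to fib_entry_get_resolving_interface. That pattern is unbounded recursive resolution: the default route resolves via next-hop 10.10.40.1, whose best-covering prefix is the default route itself, so resolution chases the cycle until the stack overflows. A minimal sketch of the failure mode, plus a loop-guarded variant (hypothetical Python, not VPP source):

```python
import sys

class FibEntry:
    def __init__(self, prefix, via_entry=None, interface=None):
        self.prefix = prefix
        self.via_entry = via_entry    # recursive path: resolves via another entry
        self.interface = interface    # attached path: resolves to an interface

def resolving_interface(entry):
    # Naive resolution, as in the backtrace: no cycle detection, so a
    # route that (transitively) resolves via itself recurses forever.
    if entry.interface is not None:
        return entry.interface
    return resolving_interface(entry.via_entry)

def resolving_interface_safe(entry, seen=None):
    # Loop-guarded resolution: revisiting an entry means the route is part
    # of a resolution cycle, so it is simply unresolved.
    seen = set() if seen is None else seen
    if id(entry) in seen:
        return None
    seen.add(id(entry))
    if entry.interface is not None:
        return entry.interface
    return resolving_interface_safe(entry.via_entry, seen)

# "ip route 0.0.0.0/0 via 10.10.40.1" where 10.10.40.1 has no covering
# prefix other than the default route itself: a two-entry cycle.
default = FibEntry("0.0.0.0/0")
nexthop = FibEntry("10.10.40.1/32", via_entry=default)
default.via_entry = nexthop

print(resolving_interface_safe(default))  # None: cycle detected, no crash
```

Calling the naive version on the same two entries blows the stack, which is the shape of the SIGSEGV reported above; the gerrit change referenced earlier in the thread is the actual fix.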
>>>>>>> 
>>>>>>> ------------------------------------------------------------------------
>>>>>>> yug...@telincn.com
>>> _______________________________________________
>>> vpp-dev mailing list
>>> vpp-dev@lists.fd.io
>>> https://lists.fd.io/mailman/listinfo/vpp-dev
>> 
>> 
>> 
>> 
> 
> 
