Re: [vpp-dev] Test failing

2017-10-11 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Marco,

Regarding the failure of the vpp-csit-verify-virl-master job - the issue is not the TC 
you mention below. "TC02: DUT reports packet flow for traffic with local 
destination address" is also marked as non-critical, as it is currently expected to 
fail. You can see it in the console log too:

08:00:53 TC02: DUT reports packet flow for traffic with local destination 
address :: [Top] TG-DUT1-DUT2-TG. [Cfg] On DUT1 configure I... | FAIL |
08:00:53 Traffic script execution failed
08:00:53 

08:01:22 TC03: DUT reports packet flow for traffic with remote destination 
address :: [Top] TG-DUT1-DUT2-TG. [Cfg] On DUT1 configure ... | PASS |
08:01:22 

08:01:22 Tests.Vpp.Func.Telemetry.Eth2P-Ethip6-Ip6Base-Ip6Ipfixbase-Func :: 
*IPFIX ipv6 test cases*  | PASS |
08:01:22 0 critical tests, 0 passed, 0 failed
08:01:22 3 tests total, 2 passed, 1 failed

=> there is no critical test failure (only critical tests can block the 
verification)
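
For illustration, this is roughly how the non-critical handling works. A minimal
sketch, assuming the CSIT convention of tagging known failures and excluding that
tag from Robot Framework's critical set (the tag name EXPECTED_FAILING below is an
assumption, not necessarily the exact one CSIT uses):

# Tests carrying the EXPECTED_FAILING tag are excluded from the critical set,
# so their failure is reported in the log but cannot fail the whole run.
pybot --noncritical EXPECTED_FAILING --outputdir /tmp/results tests/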


The failed test is "Eth2P-Ethip6-Ip6Base-Ipolicemarkbase-Func.TC01: VPP 
policer 2R3C Color-aware marks packet":

08:07:42 TC01: VPP policer 2R3C Color-aware marks packet :: [Top] TG=DUT1.
[ WARN ] Tests.Vpp.Func.Ip6.Eth2P-Ethip6-Ip6Base-Ipolicemarkbase-Func - TC01: 
VPP policer 2R3C Color-aware marks packet 
08:07:42 The VPP PIDs are not equal!
08:07:42 Test Setup VPP PIDs: {'10.30.52.214': 9591, '10.30.52.215': 6792}
08:07:42 Test Teardown VPP PIDs: {'10.30.52.214': 9863, '10.30.52.215': 6792}
08:07:42 Tests.Vpp.Func.Ip6.Eth2P-Ethip6-Ip6Base-Ipolicemarkbase-Func - TC01: 
VPP policer 2R3C Color-aware marks packet 
08:07:42 The VPP PIDs are not equal!
08:07:42 Test Setup VPP PIDs: {'10.30.52.214': 9591, '10.30.52.215': 6792}
08:07:42 Test Teardown VPP PIDs: {'10.30.52.214': 9863, '10.30.52.215': 6792}
08:07:42 | FAIL |
08:07:42 Teardown failed:
08:07:42 SSHTimeout: Timeout exception.
08:07:42 Current contents of stdout buffer: 
08:07:42 Current contents of stderr buffer:

It seems that there was a VPP restart during the execution of this test.
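
The check that fired above compares the VPP PID recorded in the test setup with
the PID seen in the teardown; a different PID means VPP was restarted (or crashed
and was restarted) while the test ran. A minimal shell sketch of the same idea
(the real check presumably runs over SSH from the CSIT framework):

# Record the VPP PID in the test setup...
setup_pid=$(pidof vpp)
# ... test body runs here ...
# ... and read it again in the teardown; a different PID means VPP restarted.
teardown_pid=$(pidof vpp)
if [ "${setup_pid}" != "${teardown_pid}" ]; then
    echo "The VPP PIDs are not equal!" >&2
    exit 1
fi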

Regards,
Jan

-Original Message-
From: Marco Varlese [mailto:marco.varl...@suse.com] 
Sent: Wednesday, October 11, 2017 10:49
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com>; Ed Kern (ejk) <e...@cisco.com>
Cc: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: Re: [vpp-dev] Test failing

Today I can't get a build +1 Verified.

Different sorts of errors, and completely random...

https://jenkins.fd.io/job/vpp-verify-master-centos7/7520/console
07:11:39 make[2]: Leaving directory `/w/workspace/vpp-verify-master- 
centos7/dpdk'
07:11:39 sudo rpm -Uih vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
07:11:39 
07:11:39package vpp-dpdk-devel-17.08-vpp2.x86_64 (which is newer than
vpp-dpdk-devel-17.08-vpp1.x86_64) is already installed
07:11:39 make[1]: *** [install-rpm] Error 2
07:11:39 make[1]: Leaving directory `/w/workspace/vpp-verify-master- 
centos7/dpdk'
07:11:39 make: *** [dpdk-install-dev] Error 2
07:11:39 Build step 'Execute shell' marked build as failure
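
(For reference, a hedged sketch of how such a stale vpp-dpdk-devel package on the
executor could be inspected and cleared by hand; this is not part of the job
itself:)

# Show which vpp-dpdk-devel build is already present on the executor.
rpm -q vpp-dpdk-devel
# Either remove the newer leftover package first...
sudo rpm -e vpp-dpdk-devel
# ...or explicitly allow installing the older, freshly built package over it.
sudo rpm -Uvh --oldpackage vpp-dpdk-devel-17.08-vpp1.x86_64.rpm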

https://jenkins.fd.io/job/vpp-csit-verify-virl-master/7533/console
08:29:44 08:00:53 TC02: DUT reports packet flow for traffic with local 
destination address :: [Top] TG-DUT1-DUT2-TG. [Cfg] On DUT1 configure I... | 
FAIL |
08:29:44 08:00:53 Traffic script execution failed
08:29:44

08:29:44 Final result of all test loops: | FAIL |
08:29:44 1 critical test has failed.
08:29:44



- Marco

On Mon, 2017-10-09 at 07:26 +, Jan Gelety -X (jgelety - PANTHEON 
TECHNOLOGIES at Cisco) wrote:
> Hello Marco,
> 
> I had a look at the test case log
> (https://jenkins.fd.io/job/vpp-csit-verify-virl-master/7480/robot/report/log.html)
> - checking it is always the best way to find out which test case really
> caused the failure, as some tests are expected to fail because of unfixed
> issues in VPP - and the failed test is
> 
> TEST TC01: Route IPv4 packet through LISP with Bridge Domain setup.
> 
> No running VPP was detected there (no VPP PID returned) during the
> test case setup phase.
> 
> Regards,
> Jan
> 
> -Original Message-
> From: Marco Varlese [mailto:marco.varl...@suse.com]
> Sent: Monday, October 09, 2017 09:04
> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
> &

Re: [vpp-dev] Test failing

2017-10-09 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Marco,

I had a look at the test case log
(https://jenkins.fd.io/job/vpp-csit-verify-virl-master/7480/robot/report/log.html)
- checking it is always the best way to find out which test case really caused
the failure, as some tests are expected to fail because of unfixed issues in
VPP - and the failed test is

TEST TC01: Route IPv4 packet through LISP with Bridge Domain setup.

No running VPP was detected there (no VPP PID returned) during the test case 
setup phase.

Regards,
Jan

-Original Message-
From: Marco Varlese [mailto:marco.varl...@suse.com] 
Sent: Monday, October 09, 2017 09:04
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com>; Ed Kern (ejk) <e...@cisco.com>
Cc: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: Re: [vpp-dev] Test failing

Please take a look at these logs:

https://jenkins.fd.io/job/vpp-csit-verify-virl-master/7480/console

After what Ed and you mentioned... maybe the cause of the failure is linked to 
this one instead?
14:32:52 14:11:12 Tests.Vpp.Func.Ip4 Tunnels.Lisp.Eth2P-Ethip4Lisp-
L2Bdbasemaclrn-Func :: *ip4-lispgpe-ip4 encapsulation test cases*  |
FAIL |
14:32:52 14:11:12 1 critical test, 0 passed, 1 failed
14:32:52 14:11:12 1 test total, 0 passed, 1 failed


Cheers,
Marco

On Fri, 2017-10-06 at 15:49 +, Jan Gelety -X (jgelety - PANTHEON 
TECHNOLOGIES at Cisco) wrote:
> + csit-dev
> 
> Hello Marco,
> 
> The mentioned test case is not responsible for the -1 in verification, as 
> it is marked as non-critical. Please provide links to the affected jobs so 
> we can find which TC is really failing.
> 
> Thanks,
> Jan
> 
> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> On Behalf Of Ed Kern (ejk)
> Sent: Friday, October 06, 2017 17:07
> To: Marco Varlese <marco.varl...@suse.com>
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Test failing
> 
> could you throw me some example jobs?
> 
> thanks,
> 
> Ed
> 
> 
> > On Oct 6, 2017, at 8:54 AM, Marco Varlese <marco.varl...@suse.com> wrote:
> > 
> > Hi all,
> > 
> > I have seen this many times these days...
> > I wonder if it's an infra hiccup or whether something is really broken?
> > 
> > The "recheck" is becoming the norm to get a clean +1 Verified... :(
> > 
> > 14:32:52 14:00:32 TC04: VPP doesn't send DHCPv4 REQUEST after OFFER 
> > with wrong
> > XID :: Configure DHCPv4 client on interface to TG. If server   | FAIL |
> > 14:32:52 14:00:32 Expected error 'DHCP REQUEST Rx timeout' but got 
> > 'Traffic script execution failed'.
> > 
> > 
> > Cheers,
> > Marco
> > ___
> > vpp-dev mailing list
> > vpp-dev@lists.fd.io
> > https://lists.fd.io/mailman/listinfo/vpp-dev
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
> 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Test failing

2017-10-06 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
+ csit-dev

Hello Marco,

The mentioned test case is not responsible for the -1 in verification, as it is 
marked as non-critical. Please provide links to the affected jobs so we can find 
which TC is really failing.

Thanks,
Jan

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Ed Kern (ejk)
Sent: Friday, October 06, 2017 17:07
To: Marco Varlese 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Test failing

could you throw me some example jobs?

thanks,

Ed


> On Oct 6, 2017, at 8:54 AM, Marco Varlese  wrote:
> 
> Hi all,
> 
> I have seen this many times these days...
> I wonder if it's an infra hiccup or whether something is really broken?
> 
> The "recheck" is becoming the norm to get a clean +1 Verified... :(
> 
> 14:32:52 14:00:32 TC04: VPP doesn't send DHCPv4 REQUEST after OFFER with wrong
> XID :: Configure DHCPv4 client on interface to TG. If server   | FAIL |
> 14:32:52 14:00:32 Expected error 'DHCP REQUEST Rx timeout' but got 
> 'Traffic script execution failed'.
> 
> 
> Cheers,
> Marco
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] 17.07 CIST Failures : FW: Change in vpp[stable/1704]: DHCP complete event includes the subnet mask

2017-07-06 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Neale,

If I am correct, it is necessary to move/copy the vagrant directory from 
build-root/vagrant/ to extras/vagrant/ in the stable/1704 branch.

Probably it should be done for all stable vpp branches.
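
A minimal sketch of the change, assuming a plain git move (any job script still
calling build-root/vagrant/build.sh would need updating as well):

# In a checkout of the stable/1704 branch:
git checkout stable/1704
git mv build-root/vagrant extras/vagrant
git commit -s -m "Move vagrant scripts to extras/vagrant"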

Regards,
Jan

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Neale Ranns (nranns)
Sent: Thursday, July 06, 2017 12:57
To: csit-...@lists.fd.io
Cc: vpp-dev 
Subject: [vpp-dev] 17.07 CIST Failures : FW: Change in vpp[stable/1704]: DHCP 
complete event includes the subnet mask


Hi all,

The CSIT jobs on 17.07 have a consistent failure as of some time last night – 
error below.
Could I please ask for an investigation as a matter of some urgency.

Thanks,
Neale

  
08:42:24 make[1]: Entering directory 
'/w/workspace/vpp-csit-verify-virl-1704/dpdk'
08:42:24 Makefile:170: warning: overriding recipe for target 
'/w/workspace/vpp-csit-verify-virl-1704/dpdk/'
08:42:24 Makefile:164: warning: ignoring old recipe for target 
'/w/workspace/vpp-csit-verify-virl-1704/dpdk/'
08:42:25 ==
08:42:25  Up-to-date DPDK package already installed
08:42:25 ==
08:42:25 make[1]: Leaving directory 
'/w/workspace/vpp-csit-verify-virl-1704/dpdk'
08:42:25 + '[' x == xTrue ']'
08:42:25 + extras/vagrant/build.sh
08:42:25 /tmp/hudson5219485721951253061.sh: line 89: extras/vagrant/build.sh: 
No such file or directory

-Original Message-
From: "fd.io JJB (Code Review)" 
Reply-To: "jobbuil...@projectrotterdam.info" 
Date: Thursday, 6 July 2017 at 10:53
To: "Neale Ranns (nranns)" 
Subject: Change in vpp[stable/1704]: DHCP complete event includes the subnet 
mask

fd.io JJB has posted comments on this change. ( https://gerrit.fd.io/r/7426 
)

Change subject: DHCP complete event includes the subnet mask
..


Patch Set 3: Verified-1

Build Failed 

https://jenkins.fd.io/job/vpp-csit-verify-virl-1704/167/ : FAILURE

No problems were identified. If you know why this problem occurred, please 
add a suitable Cause for it. ( 
https://jenkins.fd.io/job/vpp-csit-verify-virl-1704/167/ )

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-virl-1704/167

https://jenkins.fd.io/job/vpp-verify-1704-centos7/168/ : FAILURE

No problems were identified. If you know why this problem occurred, please 
add a suitable Cause for it. ( 
https://jenkins.fd.io/job/vpp-verify-1704-centos7/168/ )

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-1704-centos7/168

https://jenkins.fd.io/job/vpp-docs-verify-1704/167/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-docs-verify-1704/167

https://jenkins.fd.io/job/vpp-make-test-docs-verify-1704/167/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-make-test-docs-verify-1704/167

https://jenkins.fd.io/job/vpp-verify-1704-ubuntu1604/167/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-1704-ubuntu1604/167

-- 
To view, visit https://gerrit.fd.io/r/7426
To unsubscribe, visit https://gerrit.fd.io/r/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ia603fec7a769dd947c58b73ec8502e34906cc4b3
Gerrit-PatchSet: 3
Gerrit-Project: vpp
Gerrit-Branch: stable/1704
Gerrit-Owner: Neale Ranns 
Gerrit-Reviewer: fd.io JJB 
Gerrit-HasComments: No


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones between jenkins executor and VIRL servers

2017-06-20 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Anton,

Thanks for the fast response. We will check the local firewall settings as you 
proposed.

Regards,
Jan

-Original Message-
From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Tuesday, June 20, 2017 17:13
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #41921] connection interruptiones between jenkins 
executor and VIRL servers

Jan: 

This is what I got from the fdio jenkins server (I did the tests with the 
10.30.{52,53}.2 hosts):

$ ip ro get 10.30.52.2
10.30.52.2 via 10.30.48.1 dev eth0  src 10.30.48.5
cache

The traffic is going directly through the neutron router, so we don't block any 
traffic on our firewall.

$ ping -q -c4 10.30.52.2
PING 10.30.52.2 (10.30.52.2) 56(84) bytes of data.

--- 10.30.52.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms rtt 
min/avg/max/mdev = 0.496/0.789/1.509/0.419 ms

I was able to reach the host in the 10.30.52.0/24 network from the jenkins server.

$ nc -nv 10.30.52.2 22
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connection refused.

Looks like access is blocked there. Could you check your local firewall settings 
and make sure you allow port 22/tcp?

The above is also true for 10.30.{53,54}.0/24 subnets
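
A hedged example of opening 22/tcp on those hosts, assuming plain iptables
(adjust if firewalld or another front-end is in use):

# Allow inbound SSH and persist the rule.
sudo iptables -I INPUT -p tcp --dport 22 -j ACCEPT
sudo service iptables save
# With firewalld instead:
#   sudo firewall-cmd --permanent --add-service=ssh && sudo firewall-cmd --reload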

Regards,

On Tue Jun 20 10:51:15 2017, jgel...@cisco.com wrote:
> Hello Vanessa,
> 
> Thanks for the info.
> 
> Just few remarks:
> 
> 1. virl1 (10.30.51.28) - nodes of simulations started there are using 
> subnet 10.30.52.0/24 and we are experiencing ssh timeouts in this 
> subnet
> 
> 2. virl2 (10.30.51.29) -  nodes of simulations started there were 
> using subnet 10.30.53.0/24 and we were experiencing ssh timeouts in 
> this subnet;
>  -  at the moment we switched the 
> subnet back to 10.30.51.0/24 and assigned there IP pool 10.30.51.106 -
> 10.30.51.180
>  - new tests started - let you 
> know the result tomorrow
> 
> 3. virl3 (10.30.51.30) - nodes of simulations started there were using 
> subnet 10.30.51.0/24 and IP pool is set to 10.30.51.181 - 10.30.51.254 
> and we didn't experience ssh timeouts in this subnet;
> 
> 
> So would it be possible to check routes for subnets 10.30.52.0/24,
> 10.30.53.0/24 and also for 10.30.54.0/24 (that is planned for vilr3 
> when it will be upgraded)?
> 
> Thank you very much.
> 
> Regards,
> Jan
> 
> -Original Message-
>  From: Vanessa Valderrama via RT [mailto:fdio- 
> helpd...@rt.linuxfoundation.org]
> Sent: Friday, June 16, 2017 22:01
> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
> <jgel...@cisco.com>
> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> Subject: [FD.io Helpdesk #41921] connection interruptiones between 
> jenkins executor and VIRL servers
> 
> We did have the vendor run MTRs.  I've attached the results.
> 
> On Fri Jun 16 15:36:16 2017, valderrv wrote:
> > Jan,
> >
> > I missed this conversation with abranov.  Can this issue be resolved?
> >
> > 
> > abranov: unfortunatley I had no time to check test logs  
> > from last test cases before (because of meeting so I just had a look  
> > to console output) and I found out that ssh failures are not related  
> > to connection between jenkins and virl now (but it was this issue at 
> > the time I wrote the e-mail).
> >   [11:04:06]   They are related to start up pf 
> > nested VM now - so I will ask VIRL support for the help here.
> > 
> >
> > Thank you,
> > Vanessa
> >
> > On Fri Jun 16 12:12:43 2017, valderrv wrote:
> > > Jan,
> > >
> > > We are looking into this issue.
> > >
> > > Thank you,
> > > Vanessa
> > >
> > > On Fri Jun 16 09:12:55 2017, jgel...@cisco.com wrote:
> > > > Hello Anton,
> > > >
> > > > Unfortunately we are still having issues with ssh connection 
> > > > timeouts during tests on virl. Could you, please, have a look on 
> > > > it?
> > > >
> > > > Thank you very much.
> > > > Regards,
> > > > Jan
> > > >
> > > > -Original Message-
> > > >   From: Anton Baranov via RT [mailto:fdio- 
> > > > helpd...@rt.linuxfoundation.org]
> > > > Sent: Wednesday, June 14, 2017 15:45
> > > >  To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
> > > > <jgel...@cisco.com>
> > > > Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> > > >  Subject: [FD.io Helpdesk #41921] connection interruptiones 
> > > &

Re: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones between jenkins executor and VIRL servers

2017-06-20 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Vanessa,

Thanks for the info.

Just a few remarks:

1. virl1 (10.30.51.28) - nodes of simulations started there are using subnet 
10.30.52.0/24 and we are experiencing ssh timeouts in this subnet.

2. virl2 (10.30.51.29) - nodes of simulations started there were using subnet 
10.30.53.0/24 and we were experiencing ssh timeouts in this subnet;
   - at the moment we switched the subnet back to 10.30.51.0/24 and assigned 
     there the IP pool 10.30.51.106 - 10.30.51.180
   - new tests have started - we will let you know the result tomorrow.

3. virl3 (10.30.51.30) - nodes of simulations started there were using subnet 
10.30.51.0/24 and the IP pool is set to 10.30.51.181 - 10.30.51.254, and we 
didn't experience ssh timeouts in this subnet.


So would it be possible to check the routes for subnets 10.30.52.0/24, 
10.30.53.0/24 and also for 10.30.54.0/24 (which is planned for virl3 when it 
is upgraded)?

Thank you very much.

Regards,
Jan

-Original Message-
From: Vanessa Valderrama via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Friday, June 16, 2017 22:01
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #41921] connection interruptiones between jenkins 
executor and VIRL servers

We did have the vendor run MTRs.  I've attached the results.  

On Fri Jun 16 15:36:16 2017, valderrv wrote:
> Jan,
> 
> I missed this conversation with abranov.  Can this issue be resolved?
> 
> 
> abranov: unfortunately I had no time to check the test logs 
> from the last test cases before (because of a meeting, so I just had a look 
> at the console output) and I found out that the ssh failures are not related 
> to the connection between jenkins and virl now (but it was this issue at 
> the time I wrote the e-mail).
> [11:04:06] They are related to the start-up of the nested 
> VM now - so I will ask VIRL support for help here.
> 
> 
> Thank you,
> Vanessa
> 
> On Fri Jun 16 12:12:43 2017, valderrv wrote:
> > Jan,
> >
> > We are looking into this issue.
> >
> > Thank you,
> > Vanessa
> >
> > On Fri Jun 16 09:12:55 2017, jgel...@cisco.com wrote:
> > > Hello Anton,
> > >
> > > Unfortunately we are still having issues with ssh connection 
> > > timeouts during tests on virl. Could you, please, have a look on 
> > > it?
> > >
> > > Thank you very much.
> > > Regards,
> > > Jan
> > >
> > > -Original Message-
> > >  From: Anton Baranov via RT [mailto:fdio- 
> > > helpd...@rt.linuxfoundation.org]
> > > Sent: Wednesday, June 14, 2017 15:45
> > > To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
> > > <jgel...@cisco.com>
> > > Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> > > Subject: [FD.io Helpdesk #41921] connection interruptiones between 
> > > jenkins executor and VIRL servers
> > >
> > > Jan:
> > >
> > > On  my side I currently don't see any connectivity problems 
> > > between jenkins and VIRL servers. Please let me know if you're 
> > > still having that issue. I'll keep an eye on that problem and if 
> > > it reapears I'll report that to our cloud provider to check 
> > > further.
> > >
> > > Thanks,
> > > --
> > > Anton Baranov
> > > Systems and Network Administrator
> > > The Linux Foundation
> > >
> > > On Wed Jun 14 08:12:45 2017, jgel...@cisco.com wrote:
> > > > Dear  held...@fd.io<mailto:held...@fd.io>
> > > >
> > > > We are observing connection issues between Jenkins executors and 
> > > > VIRL servers that leads to failures of verify jobs 
> > > > (https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/
> > > > ,
> > > > https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-
> > > > ubuntu1604-virl/, https://jenkins.fd.io/view/csit/job/csit-vpp-
> > > > functional-master-centos7-virl/) because of ssh connection 
> > > > timeouts.
> > > >
> > > > Could you, please, have a look on it?
> > > >
> > > > Thank you very much.
> > > >
> > > > Regards,
> > > > Jan
> > >
> > >
> > >
> >
> >



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones between jenkins executor and VIRL servers

2017-06-16 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Anton,

Unfortunately we are still having issues with ssh connection timeouts during 
tests on VIRL. Could you please have a look at it?

Thank you very much.
Regards,
Jan

-Original Message-
From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Wednesday, June 14, 2017 15:45
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #41921] connection interruptiones between jenkins 
executor and VIRL servers

Jan: 

On my side I currently don't see any connectivity problems between jenkins and 
VIRL servers. Please let me know if you're still having that issue. I'll keep 
an eye on that problem and if it reappears I'll report it to our cloud 
provider to check further.

Thanks,
--
Anton Baranov
Systems and Network Administrator
The Linux Foundation

On Wed Jun 14 08:12:45 2017, jgel...@cisco.com wrote:
> Dear  held...@fd.io<mailto:held...@fd.io>
> 
> We are observing connection issues between Jenkins executors and VIRL 
> servers that leads to failures of verify jobs 
> (https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/,
> https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-
> ubuntu1604-virl/, https://jenkins.fd.io/view/csit/job/csit-vpp-
> functional-master-centos7-virl/) because of ssh connection timeouts.
> 
> Could you, please, have a look on it?
> 
> Thank you very much.
> 
> Regards,
> Jan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] FW: [FD.io Helpdesk #41921] AutoReply: connection interruptiones between jenkins executor and VIRL servers

2017-06-14 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
FYI

-Original Message-
From: FD.io Helpdesk via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Wednesday, June 14, 2017 14:13
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>
Subject: [FD.io Helpdesk #41921] AutoReply: connection interruptiones between 
jenkins executor and VIRL servers


Greetings,

Your support ticket regarding:
"connection interruptiones between jenkins executor and VIRL servers", 
has been entered in our ticket tracker.  A summary of your ticket appears below.

If you have any follow-up related to this issue, please reply to this email or 
include:

 [FD.io Helpdesk #41921]

in the subject line of subsequent emails.

Thank you,
Linux Foundation Support Team

-
Dear  held...@fd.io<mailto:held...@fd.io>

We are observing connection issues between Jenkins executors and VIRL servers 
that lead to failures of verify jobs 
(https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/, 
https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-ubuntu1604-virl/,
 https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-centos7-virl/) 
because of ssh connection timeouts.

Could you please have a look at it?

Thank you very much.

Regards,
Jan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] connection interruptiones between jenkins executor and VIRL servers

2017-06-14 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Dear  held...@fd.io

We are observing connection issues between Jenkins executors and VIRL servers 
that lead to failures of verify jobs 
(https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/, 
https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-ubuntu1604-virl/,
 https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-centos7-virl/) 
because of ssh connection timeouts.

Could you please have a look at it?

Thank you very much.

Regards,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP unit test failures on Ubuntu16.04

2017-06-06 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

We observed VPP unit test failures during verification of patch 
https://gerrit.fd.io/r/#/c/7015/, in which the CSIT operational branch was 
updated - a change that has no impact on VPP unit test execution. The failures 
occurred on containerised PoC builds as well as on official fd.io JJB builds:

http://jenkins.ejkern.net:8080/job/vpp-fake-csit-verify-master/133/console
http://jenkins.ejkern.net:8080/job/vpp-verify-master-ubuntu1604/684/console
https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/5768/

The verification has passed after another recheck.

Are you aware of such behaviour on ubuntu16.04 executors, or is it something 
new?

Thanks,
Jan


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #41298] Jobs are not triggered from gerrit

2017-05-31 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Anton,

Do you know what the root cause was and how we can avoid such a situation in 
the future?

Thanks,
Jan

-Original Message-
From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Wednesday, May 31, 2017 16:03
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #41298] Jobs are not triggered from gerrit

We triggered rechecks for two gerrit changes: 
- https://gerrit.fd.io/r/#/c/6806/2
- https://gerrit.fd.io/r/#/c/6912/2

Regards,

On Wed May 31 09:58:32 2017, hagbard wrote:
> What did you do to confirm triggers are now happening?
> 
> Ed
> 
> On Wed, May 31, 2017 at 6:55 AM, Anton Baranov via RT < 
> fdio-helpd...@rt.linuxfoundation.org> wrote:
> 
> > Hello,
> >
> > We restarted jenkins service and that fixed jobs triggering.
> >
> > Best regards,
> > --
> > Anton Baranov
> > Systems and Network Administrator
> > The Linux Foundation
> >
> >
> > On Wed May 31 09:20:26 2017, abaranov wrote:
> > > Hello,
> > >
> > > We're checking that issue right now  and will keep you updated
> > >
> > > Thanks,
> > >
> > >
> >
> >
> >
> > ___
> > vpp-dev mailing list
> > vpp-dev@lists.fd.io
> > https://lists.fd.io/mailman/listinfo/vpp-dev
> >


--
Anton Baranov
Systems and Network Administrator
The Linux Foundation
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] [csit-dev] VIRL build failure

2017-05-22 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Ole,

The problem is that the requested CSIT operational branch (oper-170313) is not 
available anymore (it has already been deleted).

I created a patch to make vpp stable/1704 use the CSIT release branch: 
https://gerrit.fd.io/r/#/c/6818/

Once this patch is merged, please rebase your patch to pick up the correct CSIT 
version.
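
For context, the verify job picks the CSIT branch from the small helper script
build-root/scripts/csit-test-branch and clones it; a rough sketch of the
mechanism (the script contents and the branch name rls1704 are assumptions for
illustration):

#!/bin/bash
# build-root/scripts/csit-test-branch (sketch): print the CSIT branch to test against.
echo "rls1704"

# The verify job then does roughly:
CSIT_BRANCH=$(build-root/scripts/csit-test-branch)
git clone https://gerrit.fd.io/r/csit --branch "${CSIT_BRANCH}"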

Regards,
Jan

-Original Message-
From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io] On 
Behalf Of otr...@employees.org
Sent: Sunday, May 21, 2017 11:27
To: csit-...@lists.fd.io
Subject: [csit-dev] VIRL build failure

Guys,

Any idea what this is?

Cheers,
Ole

https://jenkins.fd.io/job/vpp-csit-verify-virl-1704/129/console


08:41:06 + git clone https://gerrit.fd.io/r/csit  --branch oper-170313

08:41:06
Cloning into 'csit'...

08:41:07
fatal: Remote branch oper-170313 not found in upstream origin

08:41:07
Build step 'Execute shell' marked build as failure

08:41:07
$ ssh-agent -k

08:41:07
unset SSH_AUTH_SOCK;

08:41:07
unset SSH_AGENT_PID;

08:41:07
echo Agent pid 2420 killed;

08:41:07
[ssh-agent] Stopped.

08:41:07
Archiving artifacts

08:41:07 Robot results publisher started..
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Fwd: Change in vpp[master]: make test: python interpreter customization

2017-04-13 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Klement,

Unfortunately, the VIRL simulation did not start successfully. I did some 
cleanup on the VIRL servers and forced a recheck on your commit. I will keep an 
eye on the newly started VIRL verify job 
https://jenkins.fd.io/job/vpp-csit-verify-virl-master/4910/

Thanks,
Jan

PS: It is better to use the csit-...@lists.fd.io DL to report issues with VIRL tests.

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, April 13, 2017 08:59
To: vpp-dev 
Subject: [vpp-dev] Fwd: Change in vpp[master]: make test: python interpreter 
customization

Hi,

does anybody know how to make virl pass? This is a make test related change 
only and so far I've seen 3 virl failures. Interestingly, a dependent change on 
this one passed on the first try.

Thanks,
Klement

Forwarded message from fd.io JJB (Code Review) (2017-04-12 18:10:53):
> fd.io JJB has posted comments on this change. ( 
> https://gerrit.fd.io/r/6163 )
> 
> Change subject: make test: python interpreter customization 
> ..
> 
> 
> Patch Set 1: Verified-1
> 
> Build Failed
> 
> https://jenkins.fd.io/job/vpp-csit-verify-virl-master/4899/ : FAILURE
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-vi
> rl-master/4899
> 
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1404/4901/ : SUCCESS
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-
> ubuntu1404/4901
> 
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/4900/ : SUCCESS
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-
> ubuntu1604/4900
> 
> https://jenkins.fd.io/job/vpp-verify-master-centos7/4895/ : SUCCESS
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-
> centos7/4895
> 
> https://jenkins.fd.io/job/vpp-docs-verify-master/3495/ : SUCCESS
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-docs-verify-ma
> ster/3495
> 
> https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/1223/ : 
> SUCCESS
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-make-test-docs
> -verify-master/1223
> 
> --
> To view, visit https://gerrit.fd.io/r/6163 To unsubscribe, visit 
> https://gerrit.fd.io/r/settings
> 
> Gerrit-MessageType: comment
> Gerrit-Change-Id: I67a658fc927303468cc67f0ac192317ca2907625
> Gerrit-PatchSet: 1
> Gerrit-Project: vpp
> Gerrit-Branch: master
> Gerrit-Owner: Klement Sekera 
> Gerrit-Reviewer: Klement Sekera 
> Gerrit-Reviewer: fd.io JJB 
> Gerrit-HasComments: No
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] Centos OS reported on ubuntu jenkins executor

2017-04-12 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Vanessa,

We observed strange behaviour during one of the vpp-csit-verify-virl-master jobs - 
the job was started on an ubuntu1604 executor:

Building remotely on ubuntu1604-basebuild-4c-4g-5409 
(ubuntu1604-basebuild-4c-4g) in workspace 
/w/workspace/vpp-csit-verify-virl-master

However, during the execution of the setup_vpp_dpdk_dev_env.sh script it 
reported that it was running on CentOS:

Loaded plugins: fastestmirror, langpacks
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
* base: centos.mirror.iweb.ca
* epel: ca.mirror.babylon.network
* extras: centos.mirror.vexxhost.com
* updates: centos.mirror.iweb.ca
Package redhat-lsb-4.1-27.el7.centos.1.x86_64 already installed and latest 
version
Nothing to do
DISTRIB_ID: CentOS
DISTRIB_RELEASE: 7.3.1611
DISTRIB_CODENAME: Core
DISTRIB_DESCRIPTION: "CentOS Linux release 7.3.1611 (Core) "
INSTALLING VPP-DPKG-DEV from apt/yum repo
REPO_URL: 
https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main
Loaded plugins: fastestmirror, langpacks
https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main/repodata/repomd.xml:
 [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article

https://access.redhat.com/articles/1320623

If above article doesn't help to resolve this issue please create a bug on 
https://bugs.centos.org/
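
(For reference, a distro check of the kind whose output appears above typically
boils down to something like the following; this is a sketch, not the actual
setup_vpp_dpdk_dev_env.sh:)

if [ -f /etc/lsb-release ]; then
    . /etc/lsb-release                # Ubuntu: provides DISTRIB_ID, DISTRIB_RELEASE, ...
else
    DISTRIB_ID=$(lsb_release -si)     # CentOS: needs the redhat-lsb package
    DISTRIB_RELEASE=$(lsb_release -sr)
fi
echo "DISTRIB_ID: ${DISTRIB_ID}"
echo "DISTRIB_RELEASE: ${DISTRIB_RELEASE}"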


Could you please check it so that such a situation can be avoided in the future? 
Logs are available at:

https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-virl-master/4835/

Thanks,
Jan


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] installation issue of release rpm packages from Nexus on centos7 system

2017-03-07 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello vpp developers,

We are facing the following issue during installation of release VPP RPM 
packages downloaded from Nexus on VIRL:

DEBUG: Running with topology double-ring-nested.centos7
DEBUG: Checking if file /tmp/vpp-17.04-rc0~359_g077d6ae~b1983.x86_64.rpm exists
DEBUG: Checking if file 
/tmp/vpp-debuginfo-17.04-rc0~359_g077d6ae~b1983.x86_64.rpm exists
DEBUG: Checking if file /tmp/vpp-devel-17.04-rc0~359_g077d6ae~b1983.x86_64.rpm 
exists
DEBUG: Checking if file /tmp/vpp-dpdk-devel-17.02-vpp1.x86_64.rpm exists
DEBUG: Checking if file /tmp/vpp-lib-17.04-rc0~359_g077d6ae~b1983.x86_64.rpm 
exists
DEBUG: Checking if file 
/tmp/vpp-plugins-17.04-rc0~359_g077d6ae~b1983.x86_64.rpm exists
DEBUG: Starting VIRL topology
...
DEBUG: Upgrading VPP
DEBUG: Upgrading VPP on node 10.30.51.211
DEBUG: Installing RPM packages
DEBUG: Command output was:
Preparing...  

DEBUG: Command stderr was:
   file /usr/include/dpdk conflicts between attempted installs of 
vpp-dpdk-devel-17.02-vpp1.x86_64 and 
vpp-devel-17.04-rc0~359_g077d6ae~b1983.x86_64

DEBUG: Upgrading VPP on node 10.30.51.212
DEBUG: Installing RPM packages
DEBUG: Command output was:
Preparing...  

DEBUG: Command stderr was:
   file /usr/include/dpdk conflicts between attempted installs of 
vpp-dpdk-devel-17.02-vpp1.x86_64 and 
vpp-devel-17.04-rc0~359_g077d6ae~b1983.x86_64


so no VPP is installed...
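
(A hedged sketch of how the conflict can be inspected on the node, i.e. which
package currently owns the path and what each rpm would install there:)

# Which installed package owns /usr/include/dpdk right now, if any?
rpm -qf /usr/include/dpdk
# What does each of the two conflicting packages ship under that path?
rpm -qlp /tmp/vpp-dpdk-devel-17.02-vpp1.x86_64.rpm | grep '^/usr/include/dpdk'
rpm -qlp /tmp/vpp-devel-17.04-rc0~359_g077d6ae~b1983.x86_64.rpm | grep '^/usr/include/dpdk'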

Could you please help us solve the issue?

Thanks,
Jan

Download settings:
URL=https://nexus.fd.io/service/local/artifact/maven/content
VER=RELEASE
GROUP=io.fd.vpp
ARTIFACTS='vpp vpp-debuginfo vpp-devel vpp-dpdk-devel vpp-lib vpp-plugins'
PACKAGE='rpm rpm.md5'
CLASS=REPO=fd.io.master.centos7

for ART in ${ARTIFACTS}; do
for PAC in $PACKAGE; do
curl "${URL}?r=${REPO}&g=${GROUP}&a=${ART}&p=${PAC}&v=${VER}&c=${CLASS}" -O -J || exit
done
done
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp make test is not working

2017-02-22 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Florin, Klement, all,

Great, it works again:
…
Ran 133 tests in 148.545s

OK (skipped=7)
make[1]: Leaving directory '/home/vpp/Documents/vpp/test'

Thanks for quick fix.

Regards,
Jan

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Wednesday, February 22, 2017 20:20
To: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco) 
<ksek...@cisco.com>
Cc: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vpp make test is not working

Jan,

Could you try with master again? For reference, the patch is here [1]

Thanks,
Florin

[1] https://gerrit.fd.io/r/#/c/5477/

On Feb 22, 2017, at 11:11 AM, Klement Sekera -X (ksekera - PANTHEON 
TECHNOLOGIES at Cisco) <ksek...@cisco.com<mailto:ksek...@cisco.com>> wrote:

Jan,

1. Filip is already working on a fix.
2. make test was accidentally removed from the make verify, Filip will
add it back with his fix.

Regards,
Klement

Quoting Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) (2017-02-22 
18:30:55)

  Hello,



  Currently all make test are failing with the same error (run locally on my
  development VM system – ubuntu16.04):



  ==

  ERROR: setUpClass (test_ip6.TestIPv6)

  --

  Traceback (most recent call last):

File "/home/vpp/Documents/vpp/test/test_ip6.py", line 31, in setUpClass

  super(TestIPv6, cls).setUpClass()

File "/home/vpp/Documents/vpp/test/framework.py", line 242, in
  setUpClass

  cls.vapi = VppPapiProvider(cls.shm_prefix, cls.shm_prefix, cls)

File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 54, in
  __init__

  self.papi = VPP(jsonfiles)

File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 78, in
  __init__

  self.add_message(m[0], m[1:])

File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 286, in
  add_message

  args[field_name] = self.__struct(*f)

File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 151, in
  __struct

  raise ValueError(1, 'Invalid message type: ' + t)

  ValueError: (1, u'Invalid message type: vl_api_local_locator_t')





  It seems that issue has been introduced by following commit:



  vpp@vpp-VirtualBox:~/Documents/vpp$ git bisect good

  694396dc589b4fe75b1fad02fde1d3c3cdaeef04 is the first bad commit

  commit 694396dc589b4fe75b1fad02fde1d3c3cdaeef04

  Author: Filip Tehlar <fteh...@cisco.com<mailto:fteh...@cisco.com>>

  Date:   Fri Feb 17 14:29:11 2017 +0100



  Add Overlay Network Engine API



  Change-Id: I6b5984df176688f0722a2888e73f05d8ed8b9310

  Signed-off-by: Filip Tehlar <fteh...@cisco.com<mailto:fteh...@cisco.com>>



  :04 04 b3fb6d7c953b74ba85e0345f2987452f643b28dc
  a73d9ebb51dcda1fb4795961af1ce5fce418009b M src



  So there are two questions:



  1.  Could somebody have a look on this issue, please?

  2.  Are these (make test) tests really part of vpp-verify-master-{os}
  jobs (i.e. part of make verify)? I didn’t find any information about
  result of these tests in console output of the test
  ([1]https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/3963/console).



  Thanks,

  Jan

References

  Visible links
  1. https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/3963/console
___
vpp-dev mailing list
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [csit-dev] reset_fib API issue in case of IPv6 FIB

2017-02-20 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Neale,

Thank you very much for the fix and the info. I will have a look at the test cases.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Monday, February 20, 2017 18:35
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

Thanks for the test code.
I have fixed the crash with:
  https://gerrit.fd.io/r/#/c/5438/

the tests don’t pass, but now because of those pesky IP6 ND packets.

Regards,
neale

From: "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Monday, 20 February 2017 at 11:51
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: RE: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello Neale,

It’s in review: https://gerrit.fd.io/r/#/c/4433/

Affected tests are skipped there at the moment.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Monday, February 20, 2017 12:49
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>; 
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Cc: csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

Can you please share the test code, then I can reproduce the problem and debug 
it. Maybe push as a draft to gerrit and add me as a reviewer.

Thanks,
neale

From: "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Monday, 20 February 2017 at 09:41
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: RE: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello Neale,

I tested it with vpp_lite built up from the master branch. I did rebase to the 
current head (my parent is now 90c55724b583434957cf83555a084770f2efdd7a) but 
still the same issue.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Friday, February 17, 2017 17:19
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>; 
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Cc: csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

What version of VPP are you testing?

Thanks,
neale

From: <csit-dev-boun...@lists.fd.io<mailto:csit-dev-boun...@lists.fd.io>> on 
behalf of "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Friday, 17 February 2017 at 14:48
To: "vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello VPP dev team,

Using the reset_fib API command to reset an IPv6 FIB leads to an incorrect 
entry in the FIB and to a crash of VPP.

Could somebody have a look at Jira ticket https://jira.fd.io/browse/VPP-643, 
please?

Thanks,
Jan

From make test log:

12:14:51,710 API: reset_fib ({'vrf_id': 1, 'is_ipv6': 1})
12:14:51,712 IPv6 VRF ID 1 reset
12:14:51,712 CLI: show ip6 fib
12:14:51,714 show ip6 fib
ipv6-VRF:0, fib_index 0, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[30:15175]]
[0] [@0]: dpo-drop ip6
fd01:4::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:44 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:7::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:71 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:a::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:98 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  

Re: [vpp-dev] [csit-dev] reset_fib API issue in case of IPv6 FIB

2017-02-20 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Neale,

It’s in review: https://gerrit.fd.io/r/#/c/4433/

Affected tests are skipped there at the moment.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Monday, February 20, 2017 12:49
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

Can you please share the test code, then I can reproduce the problem and debug 
it. Maybe push as a draft to gerrit and add me as a reviewer.

Thanks,
neale

From: "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Monday, 20 February 2017 at 09:41
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: RE: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello Neale,

I tested it with vpp_lite built up from the master branch. I did rebase to the 
current head (my parent is now 90c55724b583434957cf83555a084770f2efdd7a) but 
still the same issue.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Friday, February 17, 2017 17:19
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>; 
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Cc: csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

What version of VPP are you testing?

Thanks,
neale

From: <csit-dev-boun...@lists.fd.io<mailto:csit-dev-boun...@lists.fd.io>> on 
behalf of "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Friday, 17 February 2017 at 14:48
To: "vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello VPP dev team,

Using the reset_fib API command to reset an IPv6 FIB leads to an incorrect 
entry in the FIB and to a crash of VPP.

Could somebody have a look at Jira ticket https://jira.fd.io/browse/VPP-643, 
please?

Thanks,
Jan

From make test log:

12:14:51,710 API: reset_fib ({'vrf_id': 1, 'is_ipv6': 1})
12:14:51,712 IPv6 VRF ID 1 reset
12:14:51,712 CLI: show ip6 fib
12:14:51,714 show ip6 fib
ipv6-VRF:0, fib_index 0, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[30:15175]]
[0] [@0]: dpo-drop ip6
fd01:4::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:44 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:7::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:71 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:a::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:98 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
[0] [@2]: dpo-receive

And later:

12:14:52,170 CLI: packet-generator enable
12:14:57,171 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called, err is (, IOError(3, 'Waiting for 
reply timed out'), )
12:14:57,172 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 365, 
in test_ip6_vrf_02
self.run_verify_test()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 322, 
in run_verify_test
self.pg_start()
  File "/home/vpp/Documents/vpp/test/framework.py", line 398, in pg_start
cls.vapi.cli('packet-generator enable')
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 169, in cli
r = self.papi.cli_inband(length=len(cli), cmd=cli)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 305, in 

f = lambda **kwargs: (self._call_vpp(i, msgdef, multipart, **kwargs))
  File "build/bdist.linux-x86_64/egg/vpp_papi/

[vpp-dev] Jenkins jobs are not started

2017-02-10 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

No new Jenkins job has been started in the last hour and the build queue is 
growing. Could you please have a look at it?

Thanks,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Fix for vpp-verify-master-ubuntu1604 build failures

2017-02-02 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

I had a look at all three VIRL servers and deleted a few stuck sessions.

I think the main issue is that one of the VIRL servers is in TESTING status (I 
guess because of Thomas Herbert’s Centos7 VIRL image tests) and so is not used 
for Jenkins jobs. This can lead to a situation where there are not enough free 
IP addresses available on a VIRL server to finish the simulation (session) 
start-up when a high number of sessions is already running on that server.

We should get the third VIRL server back to PRODUCTION status as soon as 
possible to increase VIRL capacity.
@Thomas – when do you expect to finish your work on Centos7 preparation for 
VIRL?

We should also have a look at the VIRL simulation start-up procedure to improve 
handling of the case when a VIRL simulation does not start successfully - maybe 
try another VIRL server if one is available (roughly as in the sketch below).
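
Rough idea only (the host list is taken from the servers mentioned above;
start_virl_simulation is a placeholder, not the real job code):

for virl_server in 10.30.51.28 10.30.51.29 10.30.51.30; do
    if start_virl_simulation "${virl_server}"; then
        echo "VIRL simulation started on ${virl_server}"
        break
    fi
    echo "VIRL simulation start failed on ${virl_server}, trying the next server" >&2
done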

Regards,
Jan

From: Dave Wallace [mailto:dwallac...@gmail.com]
Sent: Thursday, February 02, 2017 06:14
To: vpp-dev <vpp-dev@lists.fd.io>; csit-...@lists.fd.io; Jan Gelety -X (jgelety 
- PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>
Subject: Re: Fix for vpp-verify-master-ubuntu1604 build failures

Jan, csit-dev,

There have been a number of failures of the vpp-csit-verify-virl-master jobs 
created by my rebasing of vpp patches (see thread below for details).  Some of 
these failures may be valid test failures.  However, I have looked at a couple 
of them that seem to indicate there may be an issue starting up the VIRL VMs.  
The error signature that I'm seeing is the following error after the three 
simulations are spun up.

[Excerpt from 
https://jenkins.fd.io/job/vpp-csit-verify-virl-master/3637/console]:
 %< 
04:01:32 + VIRL_SID[${index}]='ERROR: Simulation started OK but devices never 
changed to ACTIVE state
04:01:32 Last VIRL response:
04:01:32 {u'\''session-Pv076_'\'': {u'\''~mgmt-lxc'\'': {u'\''vnc-console'\'': 
False, u'\''subtype'\'': u'\''mgmt-lxc'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''self'\'', u'\''serial-ports'\'': 0}, u'\''tg1'\'': {u'\''vnc-console'\'': 
True, u'\''subtype'\'': u'\''server'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''lxc'\'', u'\''serial-ports'\'': 1}, u'\''sut1'\'': {u'\''vnc-console'\'': 
True, u'\''subtype'\'': u'\''vPP'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''lxc'\'', u'\''serial-ports'\'': 1}, u'\''sut2'\'': {u'\''vnc-console'\'': 
True, u'\''subtype'\'': u'\''vPP'\'', u'\''state'\'': u'\''ABSENT'\'', 
u'\''management-protocol'\'': u'\''ssh'\'', u'\''management-proxy'\'': 
u'\''lxc'\'', u'\''serial-ports'\'': 1}}}'
04:01:32 + retval=1
04:01:32 + '[' 1 -ne 0 ']'
04:01:32 + echo 'VIRL simulation start failed on 10.30.51.29'
04:01:32 VIRL simulation start failed on 10.30.51.29
 %< 

Can you please take a look at the most recent vpp-csit-verify-virl-master 
failures to see if this is a CSIT operational issue or a valid test failure?

Thanks,
-daw-
On 2/1/17 10:15 PM, Dave Wallace wrote:
On 2/1/17 9:55 PM, Dave Wallace wrote:

Folks,

After today's ubuntu mirror issue was resolved, Ed, Vanessa, and I discovered 
another failure mode for the vpp-verify-master-ubuntu1604 verify job.  In the 
process of diagnosing the failure, Ed discovered and fixed a bug in "make 
verify" that was the root cause of this issue.

See https://gerrit.fd.io/r/#/c/4993/ for details.

I merged this patch and verified that it resolved the 
vpp-verify-master-ubuntu1604 failure for https://gerrit.fd.io/r/#/c/4897.  I 
have subsequently rebased all patches in the gerrit:vpp queue that were open 
and current.  Any patch that has merge conflicts will need to be rebased 
manually.

Thanks to Ed for his keen eyesight and Vanessa for cancelling her after hours 
plans to stay and help resolve the issue.

I have also cherry-picked 4993 to stable/1701, but that still requires merging.

Please disregard the following, it appears that the other jobs are waiting in 
the build queue and have not been posted to gerrit yet.

I also noticed that stable/1701 only has verify jobs for ubuntu1404 and 
centos7.  We should add a verify job for ubuntu1604 as well.
-daw-


Please help monitor the status of the verify jobs that are now in progress.

Thanks,
-daw-


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] CSIT for 17.01 release

2016-12-20 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

I had a look at the current status of CSIT-related jobs. When the vpp stable/17.01 
branch is created, the Jenkins jobs vpp-csit-verify-virl-1701 and 
vpp-csit-verify-hw-perf-1701-{type} (already created) will use CSIT tests from 
the CSIT master branch.

When the CSIT release branch rls1701 and the corresponding operational branch 
oper-rls1701-YYMMDD are created, we will:

- submit a patch under the vpp stable/17.01 branch to use CSIT tests from 
  this branch

- create the csit-vpp-verify-master-1701 job (to automatically create CSIT 
  1701 operational branches)

- create csit-vpp-verify-1701-semiweekly (to verify VPP deb builds from 
  https://nexus.fd.io/content/repositories/fd.io.stable.1701.ubuntu.xenial.main/io/fd/vpp/)

We will provide more information after tomorrow's CSIT weekly meeting.

Regards,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VIRL servers back in operation

2016-12-13 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

The issue with the VIRL servers is fixed (and a small correction that should 
avoid such an issue in the future has been applied), so vpp-csit-verify jobs can 
run now.

A recheck was applied to the following affected patches:

https://gerrit.fd.io/r/#/c/4267/1
https://gerrit.fd.io/r/#/c/4197/2
https://gerrit.fd.io/r/#/c/4269/1

Patch https://gerrit.fd.io/r/4217 seems to be a draft (no recheck possible 
there from our side).

Regards,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] possible network issues?

2016-11-14 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello,

Since last Friday we have been experiencing occasional issues with access to 
https://gerrit.fd.io/r/csit from Jenkins.

https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/2193/console :


10:55:38 + echo 
'***'

10:55:38 ***

10:55:38 + echo '* VPP BUILD SUCCESSFULLY COMPLETED'

10:55:38 * VPP BUILD SUCCESSFULLY COMPLETED

10:55:38 + echo 
'***'

10:55:38 ***

10:55:38 [vpp-csit-verify-virl-master] $ /bin/bash 
/tmp/hudson898302288913312446.sh

10:55:38 + '[' -x build-root/scripts/csit-test-branch ']'

10:55:38 ++ build-root/scripts/csit-test-branch

10:55:38 + CSIT_BRANCH=oper-161106

10:55:38 + git clone https://gerrit.fd.io/r/csit --branch oper-161106

10:55:38 Cloning into 'csit'...

10:55:49 remote: Server Error

10:55:49 fatal: unable to access 'https://gerrit.fd.io/r/csit/': The requested 
URL returned error: 500

10:55:49 Build step 'Execute shell' marked build as failure

10:55:49 [ssh-agent] Stopped.

10:55:49 Archiving artifacts


https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-virl/2027/console:


12:09:04 Cloning the remote Git repository

12:09:04 Cloning repository ssh://rotterdam-jobbuil...@gerrit.fd.io:29418/csit

12:09:05  > git init /w/workspace/csit-vpp-functional-master-virl # timeout=10

12:09:05 Fetching upstream changes from 
ssh://rotterdam-jobbuil...@gerrit.fd.io:29418/csit

12:09:05  > git --version # timeout=10

12:09:06 using GIT_SSH to set credentials Rotterdam JJB

12:09:10  > git -c core.askpass=true fetch --tags --progress 
ssh://rotterdam-jobbuil...@gerrit.fd.io:29418/csit 
+refs/heads/*:refs/remotes/origin/*

12:09:30 ERROR: Error cloning remote repo 'origin'

12:09:30 
hudson.plugins.git.GitException:
 Command "git -c core.askpass=true fetch --tags --progress 
ssh://rotterdam-jobbuil...@gerrit.fd.io:29418/csit 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:

12:09:30 stdout:

12:09:30 stderr: ssh: Could not resolve hostname gerrit.fd.io: Temporary 
failure in name resolution

12:09:30 fatal: Could not read from remote repository.

12:09:30

12:09:30 Please make sure you have the correct access rights

12:09:30 and the repository exists.


Could you, please, have a look on it?

Thank you very much.

Regards,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] L2BD test failing during MAC learning - VPP-518

2016-10-28 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Dear vpp developers,

We found out that VPP crashes during MAC learning when running the L2BD test 
that is part of the test framework. The issue is reported here: 
https://jira.fd.io/browse/VPP-518

Regards,
Jan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Workaround for VPP restart issue on VIRL

2016-10-26 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Dear VPP developers,

As discussed and proposed on the VPP weekly call this Tuesday, we implemented a 
workaround (a short-term solution until we find and fix the root cause) that 
retries the VPP restart when it fails (at most three tries are allowed); a rough 
sketch of the idea is below. This workaround has been cherry-picked to the CSIT 
operational branch oper-161024 that is used to verify vpp patches.
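
Sketch of the idea only (restart_vpp_on is a placeholder for the framework's
restart step, not the actual CSIT keyword):

for attempt in 1 2 3; do
    if restart_vpp_on "${dut}"; then
        break                              # VPP is up, continue with the test
    fi
    echo "VPP restart attempt ${attempt} on ${dut} failed" >&2
    [ "${attempt}" -eq 3 ] && exit 1       # give up after three tries
done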

Nevertheless, if you experience the issue again, please let us know the affected 
job number (by sending an e-mail to csit-...@lists.fd.io) so 
we can have a look at it.

Thanks and regards,
Jan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev