Re: [vpp-dev] User-space TCP Stack

2017-08-09 Thread Florin Coras
Hi Stephen,

The goal is to have a complete userspace stack, i.e., a session layer
accessible over VPP’s binary API, a transport layer, a POSIX-like wrapper
library for interacting with the stack, and a shared-memory mechanism to pass
data between apps and VPP. We have support for pluggable transports, but the
only one actively being developed right now is TCP. The wrapper library (VCL)
will soon be published; Keith can tell you more about it.
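
To give a flavor of what such a POSIX-like wrapper could look like, here is a
hypothetical sketch in C (all names and signatures are purely illustrative;
the real VCL API had not been published at this point):

    /* Hypothetical sketch of a POSIX-like session API over VPP's session
     * layer; data moves through shared-memory FIFOs rather than the kernel. */
    int s = vstack_socket (VSTACK_PROTO_TCP);   /* create a TCP session      */
    vstack_bind (s, "10.0.0.1", 80);            /* bind to a local endpoint  */
    vstack_listen (s, 10);                      /* backlog of 10             */
    int c = vstack_accept (s);                  /* wait for a peer           */
    char buf[1024];
    int n = vstack_read (c, buf, sizeof (buf)); /* read from the rx FIFO     */
    vstack_write (c, buf, n);                   /* echo back via the tx FIFO */
    vstack_close (c);
    vstack_close (s);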

Florin 

> On Aug 9, 2017, at 7:55 AM, STEPHEN PETRIDES  
> wrote:
> 
> Hi All,
> 
> I came across this exchange 
> (https://www.mail-archive.com/vpp-dev@lists.fd.io/msg03112.html 
> ) in the 
> mailing list and wanted to learn more about what features of the stack are 
> available currently and what is still being developed.
> 
> For the internal/external apps mentioned, how do these work and what features 
> are currently supported?
> 
> For the wrapper library being developed, what will this look like? Will there 
> be an API or interface for endpoint applications?
> 
> Thank you.
> 
> -- 
> Stephen


Re: [vpp-dev] Spurious make test failure (container POC)

2017-08-09 Thread Ed Kern (ejk)

klement,

ok… I'll think about how to do that without too much trouble in its current
state..

in the meantime… blowing out the CPU and memory a bit changed the error:


21:49:42 create 1k of p2p subifs
  OK
21:49:42 
==
21:51:52 21:53:13,610 Timeout while waiting for child test runner process (last 
test running was `drop rx packet not matching p2p subinterface' in 
`/tmp/vpp-unittest-P2PEthernetIPV6-GDHSDK')!
21:51:52 Killing possible remaining process IDs:  19954 19962 19964



21:45:05 PPPoE Test Case
21:45:05 ===21:48:13,778 Timeout while waiting 
for child test runner process (last test running was `drop rx packet not 
matching p2p subinterface' in `/tmp/vpp-unittest-P2PEthernetIPV6-I0REOQ')!
21:47:45 Killing possible remaining process IDs:  20017 20025 20027



20:48:46 PPPoE Test Case
20:48:46 ===20:51:34,082 Timeout while waiting 
for child test runner process (last test running was `drop rx packet not 
matching p2p subinterface' in `/tmp/vpp-unittest-P2PEthernetIPV6-tQ5sP0')!
20:51:05 Killing possible remaining process IDs:  19919 19927 19929


anything new/different/exciting in here?

Also, with the memory/cpu expanded (by roughly a third), these failures happen
on the order of 2-3 minutes in, as opposed to the ~90 minutes leading to the
timeout failure.


Since the verifies are still happily chugging along, I ASSuME that this
drop-packet check isn’t happening in that suite?

Ed





On Aug 9, 2017, at 1:04 PM, Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES 
at Cisco) > wrote:

Ed,

it'd help if you could collect log.txt from a failed run so we could
peek under the hood... please see my other email in this thread...

Thanks,
Klement

Quoting Ed Kern (ejk) (2017-08-09 20:48:46)
  this is not you…or this patch…
   the make test-debug has had a 90+% failure rate (read: not 100%) for at
   least the last 100 builds
   (as far back as my current logs go, but I will probably blow that out a bit now)
  you hit the one that is seen most often… on that create 1k of p2p subifs
  the other much less frequent is

13:40:24 CGNAT TCP session close initiated from outside network 
  OK
13:40:24 =Build timed out 
(after 120 minutes). Marking the build as failed.

  so currently I’m allocating 1 MHz in cpu and 8G in memory for verify
  and also for test-debug runs…
   I'm not obviously getting (as you can see) errors about it running out of
   memory, but I wonder if that's possibly what's happening..
   it's easy enough to blow my allocations out a bit and see if that makes a
   difference..
   If anyone has other ideas to try, I'm happy to give them a shot..
  appreciate the heads up
  Ed

On Aug 9, 2017, at 12:07 PM, Dave Barach (dbarach)
<[1]dbar...@cisco.com> wrote:
Please see [2]https://gerrit.fd.io/r/#/c/7927, and


[3]http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console

The patch in question is highly unlikely to cause this failure...


14:37:11

==
14:37:11 P2P Ethernet tests
14:37:11

==
14:37:11 delete/create p2p
subif  OK
14:37:11 create 100k of p2p
subifsSKIP
14:37:11 create 1k of p2p
subifs  Build timed out
(after 120 minutes). Marking the build as failed.
16:24:49 $ ssh-agent -k
16:24:54 unset SSH_AUTH_SOCK;
16:24:54 unset SSH_AGENT_PID;
16:24:54 echo Agent pid 84 killed;
16:25:07 [ssh-agent] Stopped.
16:25:07 Build was aborted
16:25:09 [WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
16:25:11 Finished: FAILURE

Thanks… Dave

References

  Visible links
  1. mailto:dbar...@cisco.com
  2. https://gerrit.fd.io/r/#/c/7927
  3. 
http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console


[vpp-dev] Load-balancer plugin Question

2017-08-09 Thread Michael Borokhovich
Hi,

We are using VPP 1707 and the load balancer plugin.

We have only one public IP available for the load balancer VM and we use
this IP as VIP.

In VPP 1609, we were able to assign the same VIP to the public
interface and everything worked fine.

However, in VPP 1707, the behaviour has changed. Since the VIP is assigned to
the public interface, that interface captures the packets (with dest_ip =
VIP) and they are not forwarded to the application servers.

If we remove the VIP assignment from the public interface, then it is not
able to participate in ARP and thus the load balancer becomes unreachable.

I guess the issue can be solved by assigning another public IP (from the
same subnet as VIP) to the public interface. But we don't have this
additional public IP.

Is there a way to have the VIP assigned to the public interface but still
forward the packets (with dest_ip = VIP) to the application servers? Or
maybe some other configuration may help?

Thanks,
Michael.
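
For reference, the configuration pattern described above might look roughly
like this, based on the lb plugin's documented CLI (addresses and interface
names are illustrative):

    vppctl set int ip address GigabitEthernet0/6/0 203.0.113.10/24  # public interface
    vppctl lb conf ip4-src-address 203.0.113.10                     # LB source address
    vppctl lb vip 203.0.113.10/32 encap gre4                        # the single public VIP
    vppctl lb as 203.0.113.10/32 10.0.0.1 10.0.0.2                  # application servers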

Re: [vpp-dev] https://gerrit.fd.io/r/#/c/7856/ Review gerrit please

2017-08-09 Thread Thomas F Herbert

I saw it.

Thanks,

--Tom


On 08/08/2017 11:29 AM, Neale Ranns (nranns) wrote:


Merged.

Thanks,

neale

*From: * on behalf of Thomas F
Herbert 
*Date: *Tuesday, 8 August 2017 at 16:05
*To: *vpp-dev 
*Subject: *[vpp-dev] https://gerrit.fd.io/r/#/c/7856/ Review
gerrit please

All:

Could someone please review this patch?

https://gerrit.fd.io/r/#/c/7856/

I have some additional dependent patches on this patch to submit.

--TFH

-- 
*Thomas F Herbert*

NFV and Fast Data Planes
Office of Technology
*Red Hat*



--
*Thomas F Herbert*
NFV and Fast Data Planes
Office of Technology
*Red Hat*

Re: [vpp-dev] Spurious make test failure (container POC)

2017-08-09 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Ed,

it'd help if you could collect log.txt from a failed run so we could
peek under the hood... please see my other email in this thread...

Thanks,
Klement

Quoting Ed Kern (ejk) (2017-08-09 20:48:46)
>this is not you…or this patch…
>    the make test-debug has had a 90+% failure rate (read: not 100%) for at
>    least the last 100 builds
>    (as far back as my current logs go, but I will probably blow that out a bit now)
>you hit the one that is seen most often… on that create 1k of p2p subifs 
>the other much less frequent is 
> 
>  13:40:24 CGNAT TCP session close initiated from outside network  
>  OK
>  13:40:24 =Build timed out 
> (after 120 minutes). Marking the build as failed.
> 
>so currently I’m allocating 1 MHz in cpu and 8G in memory for verify
>and also for test-debug runs…
>    I'm not obviously getting (as you can see) errors about it running out of
>    memory, but I wonder if that's possibly what's happening..
>    it's easy enough to blow my allocations out a bit and see if that makes a
>    difference..
>    If anyone has other ideas to try, I'm happy to give them a shot..
>appreciate the heads up
>Ed
> 
>  On Aug 9, 2017, at 12:07 PM, Dave Barach (dbarach)
>  <[1]dbar...@cisco.com> wrote:
>  Please see [2]https://gerrit.fd.io/r/#/c/7927, and 
>   
>  
> [3]http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console
>   
>  The patch in question is highly unlikely to cause this failure...
>   
>   
>  14:37:11
>  
> ==
>  14:37:11 P2P Ethernet tests
>  14:37:11
>  
> ==
>  14:37:11 delete/create p2p
>  subif  OK
>  14:37:11 create 100k of p2p
>  subifs    SKIP
>  14:37:11 create 1k of p2p
>  subifs  Build timed out
>  (after 120 minutes). Marking the build as failed.
>  16:24:49 $ ssh-agent -k
>  16:24:54 unset SSH_AUTH_SOCK;
>  16:24:54 unset SSH_AGENT_PID;
>  16:24:54 echo Agent pid 84 killed;
>  16:25:07 [ssh-agent] Stopped.
>  16:25:07 Build was aborted
>  16:25:09 [WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
>  16:25:11 Finished: FAILURE
>   
>  Thanks… Dave
> 
> References
> 
>Visible links
>1. mailto:dbar...@cisco.com
>2. https://gerrit.fd.io/r/#/c/7927
>3. 
> http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console

Re: [vpp-dev] Spurious make test failure (container POC)

2017-08-09 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Hi Dave,

this looks like something got stuck - and the only thing I know
which can get the framework stuck like this is vpp coredumping mid-API,
in which case python gets stuck doing unix_shared_memory_queue_sub(). I
actually pushed a patch today which makes the test framework fork at
start; the tests then run in a child process, periodically sending
keep-alives via a pipe to the parent. There is a default timeout of 120
seconds for each test case which, if not met, will cause the parent to
kill the child and give up. So this will at least make the failure visible
sooner.
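
A minimal sketch of that fork-and-keep-alive pattern (written in C for
brevity; the actual framework is Python, and everything here is
illustrative):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define KEEPALIVE_TIMEOUT 120       /* seconds of silence tolerated */

    int main (void)
    {
      int fds[2];
      pipe (fds);
      pid_t child = fork ();
      if (child == 0)                   /* child: the test runner */
        {
          close (fds[0]);
          for (;;)                      /* stand-in for the test loop */
            {
              write (fds[1], "k", 1);   /* keep-alive after each test */
              sleep (1);                /* ... run the next test ... */
            }
        }
      close (fds[1]);                   /* parent: watch the pipe */
      for (;;)
        {
          fd_set rfds;
          FD_ZERO (&rfds);
          FD_SET (fds[0], &rfds);
          struct timeval tv = { KEEPALIVE_TIMEOUT, 0 };
          if (select (fds[0] + 1, &rfds, NULL, NULL, &tv) <= 0)
            {                           /* silence: the child is stuck */
              fprintf (stderr, "Timeout waiting for child test runner\n");
              kill (child, SIGKILL);
              break;
            }
          char c;
          if (read (fds[0], &c, 1) <= 0) /* EOF: child exited cleanly */
            break;
        }
      waitpid (child, NULL, 0);
      return 0;
    }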

Anyhow, what we really need is to configure the jenkins to save the
/tmp/vpp-unittest-* stuff (or at least /tmp/vpp-unittest-*/log.txt)
so we can see what actually happened.

Alternatively, we could add V=2 to make test in the verify job, but that
would add at least 50MB of data to the console output, which would make
it cumbersome.

Any jenkins wizards around?

Thanks,
Klement

Quoting Dave Barach (dbarach) (2017-08-09 20:07:47)
>Please see [1]https://gerrit.fd.io/r/#/c/7927, and
> 
> 
> 
>
> [2]http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console
> 
> 
> 
>The patch in question is highly unlikely to cause this failure...
> 
> 
> 
> 
> 
>14:37:11
>
> ==
> 
>14:37:11 P2P Ethernet tests
> 
>14:37:11
>
> ==
> 
>14:37:11 delete/create p2p
>subif  OK
> 
>14:37:11 create 100k of p2p
>subifs    SKIP
> 
>14:37:11 create 1k of p2p
>subifs  Build timed out
>(after 120 minutes). Marking the build as failed.
> 
>16:24:49 $ ssh-agent -k
> 
>16:24:54 unset SSH_AUTH_SOCK;
> 
>16:24:54 unset SSH_AGENT_PID;
> 
>16:24:54 echo Agent pid 84 killed;
> 
>16:25:07 [ssh-agent] Stopped.
> 
>16:25:07 Build was aborted
> 
>16:25:09 [WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
> 
>16:25:11 Finished: FAILURE
> 
> 
> 
>Thanks… Dave
> 
> 
> 
> References
> 
>Visible links
>1. https://gerrit.fd.io/r/#/c/7927
>2. 
> http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console

Re: [vpp-dev] Spurious make test failure (container POC)

2017-08-09 Thread Ed Kern (ejk)
this is not you…or this patch…

the make test-debug has had a 90+% failure rate (read: not 100%) for at least
the last 100 builds
(as far back as my current logs go, but I will probably blow that out a bit now)

you hit the one that is seen most often… on that create 1k of p2p subifs

the other much less frequent is

13:40:24 CGNAT TCP session close initiated from outside network 
  OK
13:40:24 =Build timed out 
(after 120 minutes). Marking the build as failed.

so currently I’m allocating 1 MHz in cpu and 8G in memory for verify and 
also for test-debug runs…

I'm not obviously getting (as you can see) errors about it running out of memory,
but I wonder if that's possibly what's happening..

it's easy enough to blow my allocations out a bit and see if that makes a
difference..
If anyone has other ideas to try, I'm happy to give them a shot..

appreciate the heads up

Ed




On Aug 9, 2017, at 12:07 PM, Dave Barach (dbarach) 
> wrote:

Please see https://gerrit.fd.io/r/#/c/7927, and

http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console

The patch in question is highly unlikely to cause this failure...


14:37:11 
==
14:37:11 P2P Ethernet tests
14:37:11 
==
14:37:11 delete/create p2p subif
  OK
14:37:11 create 100k of p2p subifs  
  SKIP
14:37:11 create 1k of p2p subifs
  Build timed out (after 120 minutes). Marking the build as failed.
16:24:49 $ ssh-agent -k
16:24:54 unset SSH_AUTH_SOCK;
16:24:54 unset SSH_AGENT_PID;
16:24:54 echo Agent pid 84 killed;
16:25:07 [ssh-agent] Stopped.
16:25:07 Build was aborted
16:25:09 [WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
16:25:11 Finished: FAILURE

Thanks… Dave


Re: [vpp-dev] Configuring multiple VRFs

2017-08-09 Thread Michael Borokhovich
Hi Neale,

Indeed, it worked in 1707! I used 1609 previously.

Thanks a lot!
Michael.

On Wed, Aug 9, 2017 at 2:20 PM, Neale Ranns (nranns) 
wrote:

> Hi Michael,
>
>
>
> Those configs will work with newer versions of VPP. Are you able to
> upgrade to 17.07?
>
>
>
> Thanks,
>
> neale
>
>
>
> *From: * on behalf of Michael Borokhovich <
> michael...@gmail.com>
> *Date: *Wednesday, 9 August 2017 at 18:50
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *[vpp-dev] Configuring multiple VRFs
>
>
>
> Hi,
>
>
>
> We have 3 interfaces. The first two belong to different VRFs and the third one
> is a global interface (GigabitEthernet0/6/0).
>
> How can we configure VPP so that packets received through either of the
> first two interfaces are sent to the global interface
> (GigabitEthernet0/6/0)?
>
>
>
> The default GW is 10.100.4.11, and we tried adding it to the tables but
> nothing worked. Specifically, if we do
>
> "vppctl ip *route* add 10.100.4.0/24 table 1 via GigabitEthernet0/6/0"
>
> the packets are forwarded but *without the Ethernet header.*
>
>
>
> Any help is appreciated. Our config is below.
>
>
>
> Thanks,
>
> Michael.
>
>
>
> *Interfaces config:*
>
>
>
> vppctl set interface ip table GigabitEthernet0/5/0 1
>
> vppctl set int ip address GigabitEthernet0/5/0 10.100.3.11/24
>
> vppctl set int state GigabitEthernet0/5/0 up
>
>
>
> vppctl set interface ip table GigabitEthernet0/4/0 2
>
> vppctl set int ip address GigabitEthernet0/4/0 10.100.1.11/24
>
> vppctl set int state GigabitEthernet0/4/0 up
>
>
>
> vppctl set int ip address GigabitEthernet0/6/0 10.100.4.11/24
>
> vppctl set int state GigabitEthernet0/6/0 up
>
>
>
>
>
> *FIB:*
>
>
>
> Table 0, fib_index 0, flow hash: src dst sport dport proto
>
>  Destination Packets  Bytes Adjacency
>
> 10.100.4.0/24  0   0 weight 1, index 3
>
>   10.100.4.11/24
>
> 10.100.4.11/32 0   0 weight 1, index 4
>
>   10.100.4.11/24
>
>
>
> Table 1, fib_index 1, flow hash: src dst sport dport proto
>
>  Destination Packets  Bytes Adjacency
>
> 10.100.3.0/24  0   0 weight 1, index 5
>
>   10.100.3.11/24
>
> 10.100.3.11/32 0   0 weight 1, index 6
>
>   10.100.3.11/24
>
>
>
> Table 2, fib_index 2, flow hash: src dst sport dport proto
>
>  Destination Packets  Bytes Adjacency
>
> 10.100.1.0/24  0   0 weight 1, index 7
>
>   10.100.1.11/24
>
> 10.100.1.11/32 0   0 weight 1, index 8
>
>   10.100.1.11/24
>
>
>
> *IP addresses:*
>
> GigabitEthernet0/4/0 (up):
>
>   10.100.1.11/24 table 2
>
> GigabitEthernet0/5/0 (up):
>
>   10.100.3.11/24 table 1
>
> GigabitEthernet0/6/0 (up):
>
>   10.100.4.11/24
>
> local0 (dn):
>
>
>
>

Re: [vpp-dev] Configuring multiple VRFs

2017-08-09 Thread Neale Ranns (nranns)
Hi Michael,

Those configs will work with newer versions of VPP. Are you able to upgrade to 
17.07?

Thanks,
neale

From:  on behalf of Michael Borokhovich 

Date: Wednesday, 9 August 2017 at 18:50
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Configuring multiple VRFs

Hi,

We have 3 interfaces. The first two belong to different VRFs and the third one
is a global interface (GigabitEthernet0/6/0).
How can we configure VPP so that packets received through either of the first
two interfaces are sent to the global interface (GigabitEthernet0/6/0)?

The default GW is 10.100.4.11, and we tried adding it to the tables but nothing
worked. Specifically, if we do
"vppctl ip route add 10.100.4.0/24 table 1 via GigabitEthernet0/6/0"
the packets are forwarded but without the Ethernet header.

Any help is appreciated. Our config is below.

Thanks,
Michael.

Interfaces config:

vppctl set interface ip table GigabitEthernet0/5/0 1
vppctl set int ip address GigabitEthernet0/5/0 10.100.3.11/24
vppctl set int state GigabitEthernet0/5/0 up

vppctl set interface ip table GigabitEthernet0/4/0 2
vppctl set int ip address GigabitEthernet0/4/0 10.100.1.11/24
vppctl set int state GigabitEthernet0/4/0 up

vppctl set int ip address GigabitEthernet0/6/0 10.100.4.11/24
vppctl set int state GigabitEthernet0/6/0 up


FIB:

Table 0, fib_index 0, flow hash: src dst sport dport proto
 Destination Packets  Bytes Adjacency
10.100.4.0/24  0   0 weight 1, index 3
  10.100.4.11/24
10.100.4.11/32 0   0 weight 1, index 4
  10.100.4.11/24

Table 1, fib_index 1, flow hash: src dst sport dport proto
 Destination Packets  Bytes Adjacency
10.100.3.0/24  0   0 weight 1, index 5
  10.100.3.11/24
10.100.3.11/32 0   0 weight 1, index 6
  10.100.3.11/24

Table 2, fib_index 2, flow hash: src dst sport dport proto
 Destination Packets  Bytes Adjacency
10.100.1.0/24  0   0 weight 1, index 7
  10.100.1.11/24
10.100.1.11/32 0   0 weight 1, index 8
  10.100.1.11/24

IP addresses:
GigabitEthernet0/4/0 (up):
  10.100.1.11/24 table 2
GigabitEthernet0/5/0 (up):
  10.100.3.11/24 table 1
GigabitEthernet0/6/0 (up):
  10.100.4.11/24
local0 (dn):


Re: [vpp-dev] [csit-dev] API Change: Dedicated SW interface Event

2017-08-09 Thread Dave Wallace

+1

On 8/9/2017 1:49 PM, Luke, Chris wrote:

Yeah, I suspect the API was simple enough back at the beginning that it just 
made sense to do it that way. This message, and presumably others, suffer from 
legacy cruft.

Assuming the downstream consumers are okay with it, I fully support blowing out 
the cruft.

Chris.


-Original Message-
From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, August 9, 2017 13:38
To: Luke, Chris ; vpp-dev@lists.fd.io; csit-
d...@lists.fd.io; honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] API Change: Dedicated SW interface Event


Hi Chris,

I don’t know the history. Convenience probably. In the absence of our
attempt at auto-generating/auto-detecting event-notify pairs, the message
re-use is understandable. I’m just trying to avoid the special cases in those
generators (like-wise with the recent ACL dump addition).

I’ll give it till the end of the week, then press the button if there are no
objections.

Thanks,
neale

-Original Message-
From: "Luke, Chris" 
Date: Wednesday, 9 August 2017 at 14:25
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io"
, "csit-...@lists.fd.io" ,
"honeycomb-...@lists.fd.io" 
Subject: RE: [csit-dev] API Change: Dedicated SW interface Event

 No specific objection, though I don't think I understand why it was done
this way in the first place. Laziness, perhaps?

 Chris.

 > -Original Message-
 > From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io]
On
 > Behalf Of Neale Ranns (nranns)
 > Sent: Wednesday, August 9, 2017 8:51
 > To: vpp-dev@lists.fd.io; csit-...@lists.fd.io; honeycomb-...@lists.fd.io
 > Subject: Re: [csit-dev] API Change: Dedicated SW interface Event
 >
 >
 > Hi All,
 >
 > Any objections or support for this proposal?
 >
 > Thanks,
 > neale
 >
 > -Original Message-
 > From:  on behalf of "Neale Ranns
(nranns)"
 > 
 > Date: Monday, 7 August 2017 at 16:02
 > To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io"
 d...@lists.fd.io>, "honeycomb-...@lists.fd.io"  d...@lists.fd.io>
 > Subject: [csit-dev] API Change: Dedicated SW interface Event
 >
 >
 > Hi All,
 >
 > I would like to propose the addition of a dedicated SW interface 
event
 > message type rather than overload the set flags request. The over-
loading of
 > types causes problems for the automatic API generation tools.
 >
 > https://gerrit.fd.io/r/#/c/7925/
 >
 > regards,
 > neale
 >
 >




[vpp-dev] Spurious make test failure (container POC)

2017-08-09 Thread Dave Barach (dbarach)
Please see https://gerrit.fd.io/r/#/c/7927, and 

 

http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1056/console

 

The patch in question is highly unlikely to cause this failure...

 

 

14:37:11

==

14:37:11 P2P Ethernet tests

14:37:11

==

14:37:11 delete/create p2p subif
OK

14:37:11 create 100k of p2p subifs
SKIP

14:37:11 create 1k of p2p subifs
Build timed out (after 120 minutes). Marking the build as failed.

16:24:49 $ ssh-agent -k

16:24:54 unset SSH_AUTH_SOCK;

16:24:54 unset SSH_AGENT_PID;

16:24:54 echo Agent pid 84 killed;

16:25:07 [ssh-agent] Stopped.

16:25:07 Build was aborted

16:25:09 [WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done

16:25:11 Finished: FAILURE

 

Thanks. Dave

 




[vpp-dev] node.js

2017-08-09 Thread Thomas F Herbert

Hi,

I am doing the accounting on required packages for downstream builds.
Does anyone know why epel-rpm-macros is included in our dependencies?
This RPM provides an rpm macro which overrides another macro that
defines the arches on which node.js runs.


What is the history of this being in our dependencies (Makefile, line 93)?

node.js is server-side JavaScript. I don't see any instance of it, nor do I
know of any reason why we need node.js.


Am I missing something?

--Tom


--
*Thomas F Herbert*
NFV and Fast Data Planes
Office of Technology
*Red Hat*

[vpp-dev] Configuring multiple VRFs

2017-08-09 Thread Michael Borokhovich
Hi,

We have 3 interfaces. The first two belong to different VRFs and the third one
is a global interface (GigabitEthernet0/6/0).
How can we configure VPP so that packets received through either of the
first two interfaces are sent to the global interface
(GigabitEthernet0/6/0)?

The default GW is 10.100.4.11, and we tried adding it to the tables but
nothing worked. Specifically, if we do
"vppctl ip *route* add 10.100.4.0/24 table 1 via GigabitEthernet0/6/0"
the packets are forwarded but *without the Ethernet header.*

Any help is appreciated. Our config is below.

Thanks,
Michael.

*Interfaces config:*

vppctl set interface ip table GigabitEthernet0/5/0 1
vppctl set int ip address GigabitEthernet0/5/0 10.100.3.11/24
vppctl set int state GigabitEthernet0/5/0 up

vppctl set interface ip table GigabitEthernet0/4/0 2
vppctl set int ip address GigabitEthernet0/4/0 10.100.1.11/24
vppctl set int state GigabitEthernet0/4/0 up

vppctl set int ip address GigabitEthernet0/6/0 10.100.4.11/24
vppctl set int state GigabitEthernet0/6/0 up


*FIB:*

Table 0, fib_index 0, flow hash: src dst sport dport proto
 Destination Packets  Bytes Adjacency
10.100.4.0/24  0   0 weight 1, index 3
  10.100.4.11/24
10.100.4.11/32 0   0 weight 1, index 4
  10.100.4.11/24

Table 1, fib_index 1, flow hash: src dst sport dport proto
 Destination Packets  Bytes Adjacency
10.100.3.0/24  0   0 weight 1, index 5
  10.100.3.11/24
10.100.3.11/32 0   0 weight 1, index 6
  10.100.3.11/24

Table 2, fib_index 2, flow hash: src dst sport dport proto
 Destination Packets  Bytes Adjacency
10.100.1.0/24  0   0 weight 1, index 7
  10.100.1.11/24
10.100.1.11/32 0   0 weight 1, index 8
  10.100.1.11/24

*IP addresses:*
GigabitEthernet0/4/0 (up):
  10.100.1.11/24 table 2
GigabitEthernet0/5/0 (up):
  10.100.3.11/24 table 1
GigabitEthernet0/6/0 (up):
  10.100.4.11/24
local0 (dn):
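
For illustration, one way to express the intended cross-VRF forwarding with an
explicit next-hop (so the adjacency can be ARP-resolved and an Ethernet header
written) would be a sketch like the following, assuming the same CLI syntax as
above:

    vppctl ip route add 0.0.0.0/0 table 1 via 10.100.4.11 GigabitEthernet0/6/0
    vppctl ip route add 0.0.0.0/0 table 2 via 10.100.4.11 GigabitEthernet0/6/0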

Re: [vpp-dev] [csit-dev] API Change: Dedicated SW interface Event

2017-08-09 Thread Luke, Chris
Yeah, I suspect the API was simple enough back at the beginning that it just 
made sense to do it that way. This message, and presumably others, suffer from 
legacy cruft.

Assuming the downstream consumers are okay with it, I fully support blowing out 
the cruft.

Chris.

> -Original Message-
> From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
> Sent: Wednesday, August 9, 2017 13:38
> To: Luke, Chris ; vpp-dev@lists.fd.io; csit-
> d...@lists.fd.io; honeycomb-...@lists.fd.io
> Subject: Re: [csit-dev] API Change: Dedicated SW interface Event
> 
> 
> Hi Chris,
> 
> I don’t know the history. Convenience probably. In the absence of our
> attempt at auto-generating/auto-detecting event-notify pairs, the message
> re-use is understandable. I’m just trying to avoid the special cases in those
> generators (like-wise with the recent ACL dump addition).
> 
> I’ll give it till the end of the week, then press the button if there are no
> objections.
> 
> Thanks,
> neale
> 
> -Original Message-
> From: "Luke, Chris" 
> Date: Wednesday, 9 August 2017 at 14:25
> To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io"
> , "csit-...@lists.fd.io" ,
> "honeycomb-...@lists.fd.io" 
> Subject: RE: [csit-dev] API Change: Dedicated SW interface Event
> 
> No specific objection, though I don't think I understand why it was done
> this way in the first place. Laziness, perhaps?
> 
> Chris.
> 
> > -Original Message-
> > From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io]
> On
> > Behalf Of Neale Ranns (nranns)
> > Sent: Wednesday, August 9, 2017 8:51
> > To: vpp-dev@lists.fd.io; csit-...@lists.fd.io; honeycomb-...@lists.fd.io
> > Subject: Re: [csit-dev] API Change: Dedicated SW interface Event
> >
> >
> > Hi All,
> >
> > Any objections or support for this proposal?
> >
> > Thanks,
> > neale
> >
> > -Original Message-
> > From:  on behalf of "Neale Ranns
> (nranns)"
> > 
> > Date: Monday, 7 August 2017 at 16:02
> > To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io"
>  > d...@lists.fd.io>, "honeycomb-...@lists.fd.io"  > d...@lists.fd.io>
> > Subject: [csit-dev] API Change: Dedicated SW interface Event
> >
> >
> > Hi All,
> >
> > I would like to propose the addition of a dedicated SW interface 
> event
> > message type rather than overload the set flags request. The over-
> loading of
> > types causes problems for the automatic API generation tools.
> >
> > https://gerrit.fd.io/r/#/c/7925/
> >
> > regards,
> > neale
> >
> >
> 
> 


Re: [vpp-dev] [csit-dev] API Change: Dedicated SW interface Event

2017-08-09 Thread Neale Ranns (nranns)

Hi Chris,

I don’t know the history. Convenience probably. In the absence of our attempt 
at auto-generating/auto-detecting event-notify pairs, the message re-use is 
understandable. I’m just trying to avoid the special cases in those generators 
(like-wise with the recent ACL dump addition).

I’ll give it till the end of the week, then press the button if there are no 
objections.

Thanks,
neale

-Original Message-
From: "Luke, Chris" 
Date: Wednesday, 9 August 2017 at 14:25
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 
, "csit-...@lists.fd.io" , 
"honeycomb-...@lists.fd.io" 
Subject: RE: [csit-dev] API Change: Dedicated SW interface Event

No specific objection, though I don't think I understand why it was done 
this way in the first place. Laziness, perhaps?

Chris.

> -Original Message-
> From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io] 
On
> Behalf Of Neale Ranns (nranns)
> Sent: Wednesday, August 9, 2017 8:51
> To: vpp-dev@lists.fd.io; csit-...@lists.fd.io; honeycomb-...@lists.fd.io
> Subject: Re: [csit-dev] API Change: Dedicated SW interface Event
> 
> 
> Hi All,
> 
> Any objections or support for this proposal?
> 
> Thanks,
> neale
> 
> -Original Message-
> From:  on behalf of "Neale Ranns (nranns)"
> 
> Date: Monday, 7 August 2017 at 16:02
> To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io" 
 d...@lists.fd.io>, "honeycomb-...@lists.fd.io"  d...@lists.fd.io>
> Subject: [csit-dev] API Change: Dedicated SW interface Event
> 
> 
> Hi All,
> 
> I would like to propose the addition of a dedicated SW interface event
> message type rather than overload the set flags request. The over-loading 
of
> types causes problems for the automatic API generation tools.
> 
> https://gerrit.fd.io/r/#/c/7925/
> 
> regards,
> neale
> 
> 




Re: [vpp-dev] [csit-dev] API Change: Dedicated SW interface Event

2017-08-09 Thread Neale Ranns (nranns)

Hi All,

Any objections or support for this proposal?

Thanks,
neale

-Original Message-
From:  on behalf of "Neale Ranns (nranns)" 

Date: Monday, 7 August 2017 at 16:02
To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io" 
, "honeycomb-...@lists.fd.io" 
Subject: [csit-dev] API Change: Dedicated SW interface Event


Hi All,

I would like to propose the addition of a dedicated SW interface event 
message type rather than overload the set flags request. The over-loading of 
types causes problems for the automatic API generation tools.

https://gerrit.fd.io/r/#/c/7925/
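
For illustration, the dedicated event might be declared along these lines in
the .api file (a sketch only; the authoritative definition is in the gerrit
change above):

    /* A dedicated notification, rather than reusing sw_interface_set_flags */
    define sw_interface_event
    {
      u32 client_index;
      u32 pid;
      u32 sw_if_index;
      u8 admin_up_down;
      u8 link_up_down;
      u8 deleted;
    };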

regards,
neale

