Re: [vpp-dev] Streaming Telemetry with gNMI server

2019-02-02 Thread Ni, Hongjun
Hi Yohan and Stevan,

Thank you for your great work!

FD.io has a sub-project named Sweetcomb, which provides a gNMI northbound
interface to upper-layer applications.
The Sweetcomb project will publish its first release on Feb 6, 2019.
Please take a look at the link below for details from Pantheon Technologies:
https://www.youtube.com/watch?v=hTv6hFnyAhE 

Not sure whether your work could be integrated with the Sweetcomb project?

Thanks a lot,
Hongjun


-Original Message-
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Yohan 
Pipereau
Sent: Sunday, February 3, 2019 5:55 AM
To: vpp-dev@lists.fd.io
Cc: Stevan COROLLER 
Subject: [vpp-dev] Streaming Telemetry with gNMI server

Hi everyone,

Stevan and I have developed a small gRPC server to stream VPP metrics to an
analytics stack.

Admittedly, there is already a program in VPP that does this:
vpp_prometheus_export. Here are the main details of our implementation and the
improvements it brings:

* Our implementation is based on the gNMI specification, a network standard
co-authored by several network vendors to allow configuration and telemetry
over RPCs.
* Thanks to the gNMI protobuf file, messages are easier to parse and use a
binary encoding for better performance.
* We are using gRPC and Protobuf, so this is an HTTP/2 server.
* We are using a push (streaming) model instead of a pull model. This means
that clients subscribe to metric paths with a sample interval, and our server
streams counters at that interval (a minimal client-side subscribe sketch
follows this list).
* As noted above, contrary to vpp_prometheus_export, our application lets
clients decide which metrics are streamed and how often.
* For interface-related counters, we also convert interface indexes into
interface names, e.g. /if/rx would be output as /if/rx/tap0/thread0. At this
stage, the conversion is expensive because it uses a loop to collect VAPI
interface events. We plan to write paths with interface names into the STAT
shared-memory segment to avoid this loop.
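
For illustration, here is a minimal sketch of what a subscribing client could
look like. It is only a sketch: it assumes Python stubs generated from the
upstream gnmi.proto (gnmi_pb2 / gnmi_pb2_grpc), an insecure server listening on
localhost:50051, and the /if/rx path convention described above, so adjust
these to the actual deployment.

# Minimal gNMI subscribe sketch (assumptions: stubs generated from the
# upstream gnmi.proto, an insecure server on localhost:50051, and the
# /if/rx path convention described above).
import grpc
import gnmi_pb2
import gnmi_pb2_grpc

def requests():
    # Subscribe to /if/rx in STREAM mode, sampled every 10 seconds.
    path = gnmi_pb2.Path(elem=[gnmi_pb2.PathElem(name="if"),
                               gnmi_pb2.PathElem(name="rx")])
    sub = gnmi_pb2.Subscription(path=path,
                                mode=gnmi_pb2.SAMPLE,
                                sample_interval=10 * 1000 * 1000 * 1000)  # ns
    sub_list = gnmi_pb2.SubscriptionList(
        subscription=[sub], mode=gnmi_pb2.SubscriptionList.STREAM)
    yield gnmi_pb2.SubscribeRequest(subscribe=sub_list)

def main():
    channel = grpc.insecure_channel("localhost:50051")
    stub = gnmi_pb2_grpc.gNMIStub(channel)
    # The server streams SubscribeResponse messages at the sample interval;
    # each notification carries the sampled counter updates.
    for response in stub.Subscribe(requests()):
        for update in response.update.update:
            name = "/".join(e.name for e in update.path.elem)
            print(name, update.val)

if __name__ == "__main__":
    main()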

Here is the link to our project:
https://github.com/vpp-telemetry-pfe/gnmi-grpc

We have provided a Docker scenario to illustrate our work. It can be found in
the docker directory of the project. You can follow the guide named guide.md.

Do not hesitate to give us feedback on the scenario or the code.

Yohan


Re: [vpp-dev] Question about GRE tunnelling inside Microsoft Azure

2019-02-02 Thread Jim Thompson
GRE doesn’t work inside Azure, as they use it for their internal transport. 

“Multicast, broadcast, IP-in-IP encapsulated packets, and Generic Routing 
Encapsulation (GRE) packets are blocked within VNets.”

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq

Jim

> On Feb 2, 2019, at 5:48 AM, Francesco Spinelli  
> wrote:
> 
> Hello VPP experts,
> 
> I'm trying to use GRE tunnelling inside Microsoft Azure but I'm having some
> trouble making it work.
> 
> Please find attached the configuration I have inside Azure. What I would
> like to do is send packets from VM1 to VM4, passing through the two VMs in
> the middle, on which VPP is deployed. Between VM2 and VM3 I would like to
> deploy a GRE tunnel.
> 
> In more detail, the packets should go from eth1 of VM1 to eth1 of VM2. There
> the packets are passed to eth2 and, using a GRE tunnel, should reach eth2 of
> VM3.
> 
> Note that I have deployed VPP inside VM2 and VM3, and both NICs, eth1 and
> eth2, are controlled by VPP.
> 
> For now, the packets are correctly processed by VPP but get lost on the
> link between VM2 and VM3.
> 
> To set up GRE and IP routes, I typed these commands in the VPP CLI:
> 
>  create gre tunnel src 10.0.20.4 dst 10.0.20.5 // to create the GRE 
> tunnel between eth2 of VM2 and eth2 of VM3
> set int state gre0 up 
> ip route add 10.0.30.0/24 via gre0 // to route all the packets that have 
> to go towards VM4 via the gre interface
> 
> If I run "trace add dpdk-input" and "show trace", I obtain the following
> trace:
> 
> 01:50:51:710865: dpdk-input
>   FailsafeEthernet2 rx queue 0
>   buffer 0x293aa: current data 14, length 84, free-list 0, clone-count 0, 
> totlen-nifb 0, trace 0x0
>   ext-hdr-valid 
>   l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
>   PKT MBUF: port 3, nb_segs 1, pkt_len 98
> buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 
> 0x18e4eb00
> packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
>   IP4: a0:3d:6f:00:c1:8f -> 00:0d:3a:95:04:ae
>   ICMP: 10.0.10.5 -> 10.0.30.5
> tos 0x00, ttl 64, length 84, checksum 0xae35
> fragment id 0x506a, flags DONT_FRAGMENT
>   ICMP echo_request checksum 0x19f2
> 01:50:51:710897: ip4-input
>   ICMP: 10.0.10.5 -> 10.0.30.5
> tos 0x00, ttl 64, length 84, checksum 0xae35
> fragment id 0x506a, flags DONT_FRAGMENT
>   ICMP echo_request checksum 0x19f2
> 01:50:51:710902: ip4-lookup
>   fib 0 dpo-idx 4 flow hash: 0x
>   ICMP: 10.0.10.5 -> 10.0.30.5
> tos 0x00, ttl 64, length 84, checksum 0xae35
> fragment id 0x506a, flags DONT_FRAGMENT
>   ICMP echo_request checksum 0x19f2
> 01:50:51:710904: ip4-midchain
> GRE: 10.0.20.4 -> 10.0.20.5
>   tos 0x00, ttl 254, length 108, checksum 0x805a
>   fragment id 0x
> GRE ip4
> 01:50:51:710907: adj-midchain-tx
>   adj-midchain:[4]:ipv4 via 0.0.0.0 gre0: mtu:9000 
> 4500fe2f80c60a0014040a0014050800
>   stacked-on:
> [@3]: ipv4 via 10.0.20.5 FailsafeEthernet4: mtu:9000 
> 123456789abc000d3a9514d90800
> 01:50:51:710908: ip4-rewrite
>   tx_sw_if_index 2 dpo-idx 2 : ipv4 via 10.0.20.5 FailsafeEthernet4: mtu:9000 
> 123456789abc000d3a9514d90800 flow hash: 0x
>   : 123456789abc000d3a9514d90800456cfd2f815a0a0014040a00
>   0020: 140508004554506a40003f01af350a000a050a001e050800
> 01:50:51:710910: FailsafeEthernet4-output
>   FailsafeEthernet4
>   IP4: 00:0d:3a:95:14:d9 -> 12:34:56:78:9a:bc
>   GRE: 10.0.20.4 -> 10.0.20.5
> tos 0x00, ttl 253, length 108, checksum 0x815a
> fragment id 0x
>   GRE ip4
> 01:50:51:710911: FailsafeEthernet4-tx
>   FailsafeEthernet4 tx queue 0
> buffer 0x293aa: current data -24, length 122, free-list 0, clone-count 0, 
> totlen-nifb 0, trace 0x0
>   ext-hdr-valid 
>   l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
>   PKT MBUF: port 3, nb_segs 1, pkt_len 122
> buf_len 2176, data_len 122, ol_flags 0x0, data_off 104, phys_addr 
> 0x18e4eb00
> packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
>   IP4: 00:0d:3a:95:14:d9 -> 12:34:56:78:9a:bc
>   GRE: 10.0.20.4 -> 10.0.20.5
> tos 0x00, ttl 253, length 108, checksum 0x815a
> fragment id 0x
>   GRE ip4
> 
> 
> The part that I don't understand, and which may be the reason GRE is not
> working, is this one:
> 
> 01:50:51:710910: FailsafeEthernet4-output
>   FailsafeEthernet4
>   IP4: 00:0d:3a:95:14:d9 -> 12:34:56:78:9a:bc
> 

[vpp-dev] Access control list

2019-02-02 Thread Yosvany
Can someone show me the best way to implement an ACL on the input path for an
IPv4 TCP stream?

Does VPP support stateful ACLs for TCP streams?

Best Regards.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


[vpp-dev] Streaming Telemetry with gNMI server

2019-02-02 Thread Yohan Pipereau
Hi everyone,

Stevan and I have developed a small gRPC server to stream VPP metrics to
an analytics stack.

Admittedly, there is already a program in VPP that does this:
vpp_prometheus_export. Here are the main details of our implementation and
the improvements it brings:

* Our implementation is based on the gNMI specification, a network standard
co-authored by several network vendors to allow configuration and telemetry
over RPCs.
* Thanks to the gNMI protobuf file, messages are easier to parse and use a
binary encoding for better performance.
* We are using gRPC and Protobuf, so this is an HTTP/2 server.
* We are using a push (streaming) model instead of a pull model. This means
that clients subscribe to metric paths with a sample interval, and our
server streams counters at that interval.
* As noted above, contrary to vpp_prometheus_export, our application lets
clients decide which metrics are streamed and how often.
* For interface-related counters, we also convert interface indexes into
interface names, e.g. /if/rx would be output as /if/rx/tap0/thread0
(a rough sketch of this mapping follows the list).
At this stage, the conversion is expensive because it uses a loop to collect
VAPI interface events. We plan to write paths with interface names into the
STAT shared-memory segment to avoid this loop.
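
To illustrate the index-to-name mapping, here is a rough sketch that uses the
VPP Python API (vpp_papi) instead of the C VAPI calls our server actually
relies on. The API JSON path, the client name and the field handling below
are assumptions and may differ between VPP releases.

# Rough sketch: build an sw_if_index -> interface-name map so that a raw
# counter path such as /if/rx can be expanded to /if/rx/<name>/<thread>.
# Assumptions: vpp_papi is installed, the API JSON files live under
# /usr/share/vpp/api, and "gnmi-name-map" is an arbitrary client name.
import glob
from vpp_papi import VPPApiClient   # exposed as "VPP" in older releases

def interface_name_map():
    apifiles = glob.glob("/usr/share/vpp/api/**/*.api.json", recursive=True)
    vpp = VPPApiClient(apifiles=apifiles)
    vpp.connect("gnmi-name-map")
    try:
        names = {}
        # Dumps all interfaces; argument defaults may vary by release.
        for intf in vpp.api.sw_interface_dump():
            name = intf.interface_name
            if isinstance(name, bytes):  # older releases return padded bytes
                name = name.rstrip(b"\x00").decode()
            names[intf.sw_if_index] = name
        return names
    finally:
        vpp.disconnect()

if __name__ == "__main__":
    for idx, name in interface_name_map().items():
        print("/if/rx/{}/thread0  (sw_if_index={})".format(name, idx))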

Here is the link to our project:
https://github.com/vpp-telemetry-pfe/gnmi-grpc

We have provided a Docker scenario to illustrate our work. It can be found
in the docker directory of the project. You can follow the guide named
guide.md.

Do not hesitate to give us feedback on the scenario or the code.

Yohan


[vpp-dev] Question about GRE tunnelling inside Microsoft Azure

2019-02-02 Thread Francesco Spinelli
Hello VPP experts,

I'm trying to use GRE tunnelling inside Microsoft Azure but I'm having some
trouble making it work.

Please find attached the configuration I have inside Azure. What I would like
to do is send packets from VM1 to VM4, passing through the two VMs in the
middle, on which VPP is deployed. Between VM2 and VM3 I would like to deploy a
GRE tunnel.

In more detail, the packets should go from eth1 of VM1 to eth1 of VM2. There
the packets are passed to eth2 and, using a GRE tunnel, should reach eth2 of
VM3.

Note that I have deployed VPP inside VM2 and VM3, and both NICs, eth1 and
eth2, are controlled by VPP.

For now, the packets are correctly processed by VPP but get lost on the link
between VM2 and VM3.

To set up GRE and IP routes, I typed these commands in the VPP CLI:

 create gre tunnel src 10.0.20.4 dst 10.0.20.5 // to create the GRE tunnel 
between eth2 of VM2 and eth2 of VM3
set int state gre0 up
ip route add 10.0.30.0/24 via gre0 // to route all the packets that have to 
go towards VM4 via the gre interface




If I run "trace add dpdk-input" and "show trace", I obtain the following
trace:

01:50:51:710865: dpdk-input
  FailsafeEthernet2 rx queue 0
  buffer 0x293aa: current data 14, length 84, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 3, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x18e4eb00
packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
  IP4: a0:3d:6f:00:c1:8f -> 00:0d:3a:95:04:ae
  ICMP: 10.0.10.5 -> 10.0.30.5
tos 0x00, ttl 64, length 84, checksum 0xae35
fragment id 0x506a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x19f2
01:50:51:710897: ip4-input
  ICMP: 10.0.10.5 -> 10.0.30.5
tos 0x00, ttl 64, length 84, checksum 0xae35
fragment id 0x506a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x19f2
01:50:51:710902: ip4-lookup
  fib 0 dpo-idx 4 flow hash: 0x
  ICMP: 10.0.10.5 -> 10.0.30.5
tos 0x00, ttl 64, length 84, checksum 0xae35
fragment id 0x506a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x19f2
01:50:51:710904: ip4-midchain
GRE: 10.0.20.4 -> 10.0.20.5
  tos 0x00, ttl 254, length 108, checksum 0x805a
  fragment id 0x
GRE ip4
01:50:51:710907: adj-midchain-tx
  adj-midchain:[4]:ipv4 via 0.0.0.0 gre0: mtu:9000 
4500fe2f80c60a0014040a0014050800
  stacked-on:
[@3]: ipv4 via 10.0.20.5 FailsafeEthernet4: mtu:9000 
123456789abc000d3a9514d90800
01:50:51:710908: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 2 : ipv4 via 10.0.20.5 FailsafeEthernet4: mtu:9000 
123456789abc000d3a9514d90800 flow hash: 0x
  : 123456789abc000d3a9514d90800456cfd2f815a0a0014040a00
  0020: 140508004554506a40003f01af350a000a050a001e050800
01:50:51:710910: FailsafeEthernet4-output
  FailsafeEthernet4
  IP4: 00:0d:3a:95:14:d9 -> 12:34:56:78:9a:bc
  GRE: 10.0.20.4 -> 10.0.20.5
tos 0x00, ttl 253, length 108, checksum 0x815a
fragment id 0x
  GRE ip4
01:50:51:710911: FailsafeEthernet4-tx
  FailsafeEthernet4 tx queue 0
buffer 0x293aa: current data -24, length 122, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 3, nb_segs 1, pkt_len 122
buf_len 2176, data_len 122, ol_flags 0x0, data_off 104, phys_addr 0x18e4eb00
packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
  IP4: 00:0d:3a:95:14:d9 -> 12:34:56:78:9a:bc
  GRE: 10.0.20.4 -> 10.0.20.5
tos 0x00, ttl 253, length 108, checksum 0x815a
fragment id 0x
  GRE ip4


The part that I don't understand, and which may be the reason GRE is not
working, is this one:

01:50:51:710910: FailsafeEthernet4-output
  FailsafeEthernet4
  IP4: 00:0d:3a:95:14:d9 -> 12:34:56:78:9a:bc
  GRE: 10.0.20.4 -> 10.0.20.5
tos 0x00, ttl 253, length 108, checksum 0x815a
fragment id 0x
  GRE ip4

where there is a MAC address that does not belong to any NIC of the VMs:
12:34:56:78:9a:bc.

Could you help me figure out what I'm missing in my configuration? Perhaps I'm
missing some route tables?

Thanks in advance for your answers,
Francesco
