, VRFs, Sub-interfaces,
etc.
So that I can do “dump” and get the corresponding “details”, and call
appropriate SET API for updating specific attribute.
Thanks,
Leela sankar
From: <vpp-dev@lists.fd.io> on behalf of "John Lo
(loj) via Lists.Fd.Io"
<loj=cisco@lists.fd.io>
You can find L2 bridging related APIs from the file src/vnet/l2/l2.api. The
API to get bridge domain state is bridge_domain_dump and the response info
provided is bridge_domain_details. The API to change bridge domain attributes
is bridge_flags. The API to set bridge domain aging interval is bridge_domain_set_mac_age.
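As a rough sketch, the dump/details exchange can be exercised from the vpp_api_test (VAT) console; the BD ID and the exact argument syntax here are assumptions, so check VAT help for your release:

   vat# bridge_domain_dump bd_id 1
   vat# bridge_flags bd_id 1 learn

The dump request is answered with one bridge_domain_details message per matching bridge domain.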
Packet drop with an unresolved IP neighbor adjacency is the expected behavior. As
the packet triggers an ARP request to resolve the adjacency, subsequent
packets with the same destination address will start to flow once the adjacency
is resolved on receiving a reply. -John
From:
A VPP interface is in L3 mode by default and has to be set into L2 mode for
either bridging or cross connect. The reason that cross connecting one interface
to the 2nd interface alone does not work is that the second interface
is not in L2 mode. Thus, L2 forwarding cannot output packets on it.
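A sketch of a working cross connect (interface names here are assumptions); note that each interface is set to xconnect to the other, which also puts both into L2 mode:

   vpp# set interface l2 xconnect GigabitEthernet0/8/0 GigabitEthernet0/9/0
   vpp# set interface l2 xconnect GigabitEthernet0/9/0 GigabitEthernet0/8/0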
When you have a full mesh of tunnels among PE routers, the split horizon group
(SHG) of these tunnels should be set to a non-zero value, e.g. 1, to prevent
loops. Thus, packets received from an interface with a non-zero SHG will not be
replicated or flooded to another interface in the same SHG.
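As a sketch (tunnel names and BD ID are assumptions), each PE-to-PE tunnel joins the bridge domain with SHG 1 while access interfaces keep the default SHG 0:

   vpp# set interface l2 bridge vxlan_tunnel0 13 shg 1
   vpp# set interface l2 bridge vxlan_tunnel1 13 shg 1
   vpp# set interface l2 bridge GigabitEthernet0/8/0 13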
Disabling a feature on an interface will not change the graph arcs, as stated
by Dave. The more relevant CLI is “show interface features
<interface>” to check whether a particular feature is enabled on an
interface or not. -John
From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via
Since qsort.c from vppinfra has the buffer overflow issue, shall we delete this
file to avoid causing more confusion in the future? -John
From: vpp-dev@lists.fd.io On Behalf Of Damjan Marion via
Lists.Fd.Io
Sent: Friday, September 07, 2018 4:30 AM
To: Damjan Marion
Cc: vpp-dev@lists.fd.io
It seems verifications are always failing for patches on stable/1801 branch.
Does anyone know why?
The following patch with a simple one line change is an example which fails all
3 verification attempts:
https://gerrit.fd.io/r/#/c/15222/
Regards,
John
If there is a BVI in a BD with sub-interfaces in the same BD which get packets
with VLAN tags, it is best to configure a tag-rewrite operation on the
sub-interfaces to pop their VLAN tags. Then all packets are forwarded in BD
without VLAN tags. The CLI is “set interface l2 tag-rewrite <interface>
pop {1|2}”.
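A sketch with assumed names: a dot1q sub-interface is added to BD 13 with its single tag popped on input (and pushed back on output):

   vpp# set interface l2 bridge GigabitEthernet0/8/0.100 13
   vpp# set interface l2 tag-rewrite GigabitEthernet0/8/0.100 pop 1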
I have no idea what you are trying to do with PPPoE. From an L2 forwarding
point of view, however, you need to put interfaces into the same bridge domain
(BD) so they can forward packets to each other. If you have only two
interfaces to send packets to each other, you can L2 cross connect them.
Hi Jeff,
It was not intentional. The commit you mentioned was addressing tunnel
implementations in VPP which use a similar encap/decap approach, with
src/vnet/vxlan as the model or “template”. L2TP uses a different encap/decap
approach so was not included.
With respect to the dummy tx function,
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of John Lo (loj) via Lists.Fd.Io
Sent: Saturday, October 20, 2018 11:06 PM
To: Zhang, Yuwei1 <yuwei1.zh...@intel.com>; vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
The IP path after receiving a VXLAN packet is processing the outer IP header
and not the inner one. The payload of VXLAN packet is expected to be an
ethernet packet. After VXLAN decap, the payload ethernet packet will be
processed in L2 forwarding path where no IP processing is involved.
If
Hi Yuwei,
I see that you are using VXLAN to carry L3 packets which seem to still have
an ethernet header. This is not standard usage of VXLAN, so I am not aware of a
way to achieve what you desired via some kind of configuration of VPP. A more
optimal (also not standard) way may be to just send
The equivalent of VLAN on a switch in VPP is a bridge domain or BD for short.
One can put interfaces or VLAN sub-interfaces in a BD to form a L2 network
among all interfaces in it. One can also create a loopback interface, put it
in a BD as its BVI (Bridge Virtual Interface), and assign an IP address to it.
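A sketch with assumed interface names, BD ID, and address:

   vpp# create loopback interface
   vpp# set interface l2 bridge loop0 13 bvi
   vpp# set interface ip address loop0 192.168.1.1/24
   vpp# set interface state loop0 up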
Hi Xue,
L2VPN/EVPN is a pretty generic area including many things. You need to be more
specific about what additional things you expect VPP to do for your usage of
L2VPN/EVPN. Note that if you are thinking of protocol-related handling for
L2VPN/EVPN, then it won’t be supported, as VPP is really a data plane.
The interface is in L3 mode. Thus, VPP will only allow packets whose
destination MAC is the same as the interface MAC or bcast/mcast MAC. The
error “l3 mac mismatch” means the DMAC here, “52:54:00:ea:ca:a5”, is not the same
as the MAC of the interface. -John
From:
verification?
Regards,
Yuwei
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of John Lo (loj) via Lists.Fd.Io
Sent: Monday, October 22, 2018 11:56 PM
To: Zhang, Yuwei1 <yuwei1.zh...@intel.com>; vpp-dev@lists.fd.io
I believe you can cross connect two gtpu tunnel interfaces on the VPP in your
GW to achieve what you need:
DBGvpp# set int l2 xconnect ?
  set interface l2 xconnect    set interface l2 xconnect <interface> <peer-interface>
You need to set L2 xconnect on both gtpu tunnel interfaces to each other.
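With assumed tunnel names, that would be:

   vpp# set interface l2 xconnect gtpu_tunnel0 gtpu_tunnel1
   vpp# set interface l2 xconnect gtpu_tunnel1 gtpu_tunnel0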
Regards,
Please include the vpp-dev alias on any questions about VPP, instead of unicasting an
individual only. Then whoever is familiar with the area you are asking about
may respond. Does anyone know about the potential problem of switching between
L2 and L3 modes on a bonded interface described in this
VPP does not support receiving of VXLAN packets from an unknown VTEP. Thus,
any packet received in a BD from a VXLAN multicast tunnel must have its source
IP match the remote VTEP IP of an existing VXLAN unicast tunnel in the same BD.
If no such unicast tunnel is found, packets are dropped.
Yes, the packet trace is confusing to me as well.
When it is in L3 mode, we have:
Memif-input => bond-input => ethernet-input => arp-input.
When it is in L2 mode, we have:
Memif-input => ethernet-input => arp-input
Two items come to mind for L2 mode:
1. Why did it not go through
only on the sender's IP. Now it makes sense.
Thanks,
Neale
-----Original Message-----
From: <vpp-dev@lists.fd.io> on behalf of "John Lo (loj) via Lists.Fd.Io"
Reply-To: "John Lo (loj)"
Date: Monday, November 5, 2018 at 16:17
To: Xuekun, "Eyal Bari (ebari)", "vpp-dev@lists.fd.io"
One can put the BD BVI interfaces into its own IP table and leave VXLAN encap
and underlay in default global IP table. We would put the loopN interface into
an IP table before assigning its IP address:
vpp# set int ip table ?
  set interface ip table    set interface ip table [del] <interface> <table-id>
When a sub-interface is created, matching of tags on the packet to the
sub-interface can be specified as “exact-match”. With exact-match, packet must
have the same number of tags with values matching that specified for the
sub-interface. Otherwise, packets will belong to the best-matched sub-interface.
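A sketch with assumed names; the exact-match sub-interface below only receives single-tagged packets with VLAN ID 100:

   vpp# create sub-interfaces GigabitEthernet0/8/0 100 dot1q 100 exact-match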
Hi Xue,
VPP does support L2VPN with SR-MPLS in latest master, possibly the just
released 18.07.1 point release. I only verified it on master and not 18.07.1
myself.
Assume we created an SR-MPLS policy:
sr mpls policy add bsid 993 next 104 next 105 next 106
We can then create a MPLS tunnel
From “show interface” output, the rx/tx counters of TenGE4/0/0 matches that of
tx/rx counters of 4/0/1. So L2XC is working between them. For the other L2XC
pair, the rx counter of TenGE7/0/1 matches that of the tx counter of 7/0/0 so
that also seems fine. The rx of TenGE7/0/0 and tx of 7/0/1
In 1804 the rd-cp-process would run frequently in the main thread, causing the
CPU usage you observed. It can be seen in your "show run" output under the
main thread that this process went through suspend/run cycles many times
(highlighted in bold red below).
This was fixed under Jira
VPP has supported IPv6 NA/NS for a while. I believe this process was added for
VPP to support RS/RA.
Ole, Can you confirm if my understanding is correct?
Regards,
John
From: vpp-dev@lists.fd.io On Behalf Of sheckman
Sent: Tuesday, September 18, 2018 6:29 PM
To: John Lo (loj) ; Dave Barach
+1 from me. -John
From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via
Lists.Fd.Io
Sent: Wednesday, February 27, 2019 7:38 AM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] New vpp project committer nomination: Paul Vinciguerra
In view of significant code contributions
Congratulations Paul. It is great to have you on board as one of the
committers to share the load, I mean “joy” , of VPP. -John
From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via
Lists.Fd.Io
Sent: Thursday, February 28, 2019 5:38 PM
To: vpp-dev@lists.fd.io; Paul Vinciguerra
Cc:
Hi Joe,
Thank you very much for catching this bug. I took a look at your patch which
looks to be the right fix to this problem. Without this fix, I suppose the
work around is to always add BVI interface to a BD last, after all other
interfaces are added in the BD.
Can you push your patch to
As stated by Dave, if you are doing L2 forwarding by putting the VM interfaces
into a bridge domain, it will forward packets based on destination MAC address
and would not be affected by ethertype nor L3/L4 headers. -John
From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via
Lists.Fd.Io
Hi Milan,
The l2_len field in the buffer metadata area is valid only for L2 forwarding
path. It is overlaid with other L3 forwarding related fields. Thus, packets
coming into a bridge domain (BD) will have l2_len set up correctly. Once
a packet leaves the BD via its BVI interface into
Thanks for citing the PPPoE RFC 2516, Damjan. The RFC goes on to describe how
to resolve the MAC address for PPPoE sessions in Section 5 in discovery stage.
As such, there is really no “L2 mode” for PPPoE sessions mentioned in my
previous reply in this thread. The root cause of the problem
Hi Hongjun,
I understand PPP is point to point so there is no MAC as such to check. If it
is PPPoE, are you sure we are supposed to accept packets with any MAC address?
Is PPPoE also a point to point interface only and not on a Ethernet LAN?
With Ethernet, MAC check is required if interface
PPPoE Active Discovery
Initiation (PADI) packet:
The Host sends the PADI packet with the DESTINATION_ADDR set to the
broadcast address. The CODE field is set to 0x09 and the SESSION_ID
MUST be set to 0x0000.
Thanks,
Hongjun
From: vpp-dev@lists.fd.io
[mailto:vpp-dev@lists.fd.io]
I think the way to not create TX nodes for PPPoE interfaces would be to just
remove the .tx_function specified in its device class init in
src/plugin/pppoe/pppoe.c:
/* *INDENT-OFF* */
VNET_DEVICE_CLASS (pppoe_device_class, static) = {
  .name = "PPPoE",
  .format_device_name = format_pppoe_name,
Hi Raj,
The registration for ARP events with a specific IP is different from
registration with a wildcard IP.
If a client registers for ARP events for a specific IP address, an event will be
sent by VPP to the client when ARP resolution occurs for the specified IP
address in the L3 FIB.
If a
From: vpp-dev@lists.fd.io On Behalf Of John Lo (loj) via
Lists.Fd.Io
Sent: Wednesday, April 17, 2019 12:06 PM
To: Abeeha Aqeel ; hongjun...@intel.com;
vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP PPPoE Plugin
From: vpp-dev@lists.fd.io On Behalf Of John Lo (loj) via
Lists.Fd.Io
Sent: Thursday, April 18, 2019 11:30 AM
To: Raj ; vpp-dev
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Using wildcard-ip4-arp-publisher-process
Right now, the only condition for VPP to send GARP would be on bonded
interfaces in active-standby mode when the active link switches from one to the
other. GARP is sent to help the upstream router/switch converge its routes
faster. There is no support for VPP to send GARP for other events.
Looks like your GTPU tunnel is setup to send decap packets into L2 forwarding
(either via "decap-next l2" CLI or default). Now, you need to setup how to L2
forward the decap packets. You can either put this tunnel interface into a
bridge domain or cross connect it to an output interface for
Hi Abeeha,
Have you tried the suggestion I made previously in a similar thread to get rid
of the dummy_interface_tx nodes for PPPoE sessions?
diff --git a/src/plugins/pppoe/pppoe.c b/src/plugins/pppoe/pppoe.c
index d73a718..f61926d 100644
--- a/src/plugins/pppoe/pppoe.c
+++
The VPP CLI to set an interface to an IP table also works on sub-interfaces if the
name of the sub-interface is specified. For example:
vpp# ip table 10
vpp# create sub-interfaces GigabitEthernet4/0/0 10
GigabitEthernet4/0/0.10
vpp# set int ip table GigabitEthernet4/0/0.10 10
vpp# set int ip
To clarify, what's implemented in VPP for IRB (Integrated Routing and Bridging)
is via creation of BVI (Bridge Virtual Interface) interfaces on bridge domains.
IP addresses can then be configured on these BVIs to enable IP packets to be
routed (or L3 forwarded) among bridge domains or other
I have similar setup with 10GE Intel NIC on bare-metal box running Ubuntu
18.10. I don’t see such an issue in the past up to the current VPP being used
which is 19.04 (unless something changed with 19.08 or master). The 10GE port
is connected to an IXIA tester and when I set interface state
To create GRE tunnel in L2 mode, you can add “teb” keyword in the create CLI
which makes the GRE tunnel work in transparent ethernet bridging mode:
vpp# create gre ?
  create gre tunnel    create gre tunnel src <src-addr> dst <dst-addr>
       [instance <n>] [outer-fib-id <fib-id>] [teb | erspan <session-id>] [del]
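A sketch with assumed addresses and BD ID; the teb tunnel can then be bridged like any other L2 interface:

   vpp# create gre tunnel src 10.0.0.1 dst 10.0.0.2 teb
   vpp# set interface l2 bridge gre0 13
   vpp# set interface state gre0 up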
In
Hi Raj,
A sub-interface with "dot1q inner any" can only work with L2 forwarding via
cross-connect or bridging, where packets are forwarded using the MAC header without
any changes to the MAC header or VLAN IDs in VLAN tags.
For L3 IP forwarding, any VLAN tags on a packet must be an exact match to a
Thank you very much, Dave! Appreciate your quick action. -John
From: vpp-dev@lists.fd.io On Behalf Of Dave Wallace
Sent: Tuesday, October 29, 2019 2:02 PM
To: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: [vpp-dev] VPP 19.04.3 Maintenance Release is complete!
Folks,
The VPP 19.04.3
With newer VPP, since around 18.04, an IP table must be created before an
interface can be set to it. The default global IP table 0 is an exception
because it is already present on startup.
The CLI is:
vpp# ip table ?
  ip table    ip table [add|del] <table-id>
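A sketch with assumed names, creating table 10 before binding an interface to it:

   vpp# ip table add 10
   vpp# set interface ip table GigabitEthernet0/8/0 10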
Regards,
I don't think 8-10 S-VLANs with 4K C-VLANs totaling ~40K sub-interfaces will be
an issue for VPP to handle, as long as the NICs being polled by device-input
node from DPDK or other device drivers are not at a large scale.
I am not clear what was done with PPPoE to address similar issues. I
To: John Lo (loj)
Cc: Raj; vpp-dev
Subject: Re: [vpp-dev] QinQ and dot1ad any
On Tue, Dec 17, 2019 at 8:13 AM John Lo (loj) via Lists.Fd.Io
<loj=cisco@lists.fd.io> wrote:
Thus, sub-interface with "inner-dot1q any" is not an exact-match sub-interface.
Hi Dave,
My memory was that in data center or cloud environment it is desirable to set
MTU to jumbo frame size to avoid fragmentation by default. Thus the question
would be if it is reasonable to set default MTU size to 9000 for VPP and what
kind of problem is it causing that would be fixed
+1 -John
From: vpp-dev@lists.fd.io On Behalf Of d...@barachs.net
Sent: Monday, March 02, 2020 9:16 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp project committers: formal vote to add Matt Smith as a
vpp committer
VPP committers, please vote +1, 0, -1 on adding Matt Smith
Hi Nick,
I reviewed your patch and merged it. Really appreciate your effort to address
the VRF limitation and also refactor the common VTEP handling code.
Regards,
John
From: Nick Zavaritsky
Sent: Monday, March 02, 2020 7:35 AM
To: John Lo (loj)
Cc: vpp-dev@lists.fd.io
Subject: Re:
Hi Nick,
I agree the current bypass node for various tunnel types, including geneve,
gtpu, vxlan, and vxlan_gbp all have this issue in its hash lookup using only
incoming packet DIP without checking VRF. It is generally not an issue if
bypass feature is enabled on all interfaces which are on
Thanks Dave, that would be good. -John
From: Dave Barach (dbarach)
Sent: Wednesday, February 26, 2020 10:08 AM
To: John Lo (loj) ; vpp-dev
Subject: RE: Change interface default MTU to 1500
OK, how about this: I'll add a VLIB_CONFIG_FUNCTION, or [more likely] add a
system default MTU
The MAC address ad:ef:ad:ef:de:ad is a multicast address. That’s why a packet
with that destination MAC is flooded in the bridge. Try assigning a unicast MAC
address to gtpu_tunnel1.
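A sketch (the MAC value is an assumption; any address whose first octet has an even low-order bit is unicast):

   vpp# set interface mac address gtpu_tunnel1 02:fe:00:00:00:01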
Regards,
John
From: vpp-dev@lists.fd.io On Behalf Of sunny cupertino
Sent: Friday, February 14, 2020 9:34 PM
To:
+1
-Original Message-
From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via
lists.fd.io
Sent: Tuesday, April 21, 2020 7:40 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp project committer nomination: Benoit Ganne
Vpp project committers: please vote +1, 0, -1 on the
Try “make test TEST=acl_plugin”. -John
From: vpp-dev@lists.fd.io On Behalf Of Govindarajan
Mohandoss
Sent: Tuesday, April 28, 2020 11:22 PM
To: Paul Vinciguerra
Cc: Andrew Yourtchenko ; vpp-dev@lists.fd.io; nd
; Lijian Zhang ; Jieqiang Wang
; nd
Subject: Re: [vpp-dev] ACL question
Hi
Hi Damjan,
I took a look at the console log. It seems to me the traceback after vxlan test
case was due to running out of space. So test run was aborted thereafter:
07:56:20
==
07:56:20 VXLAN Test Case
07:56:20
The new VPP CLI to show ip4-arp and ip6-neighbor entries is “show ip neighbors”:
DBGvpp# show ip neighbor
    Time         IP          Flags    Ethernet             Interface
    8.4364       10.0.3.3      D      00:50:56:88:00:ac
Hi Andrew,
It would be good to include the following two patches:
https://gerrit.fd.io/r/c/vpp/+/27027
https://gerrit.fd.io/r/c/vpp/+/27029
Regards,
John
From: vpp-dev@lists.fd.io On Behalf Of Mrityunjay Kumar
Sent: Wednesday, May 13, 2020 11:11 AM
To: Andrew Yourtchenko
Cc: vpp-dev
Subject:
Hi Laurent,
VPP interface and sub-interface come up in L3 mode by default, unless it is put
into L2 mode for either bridging or cross connect:
DBGvpp# set interface l2 bridge ?
  set interface l2 bridge    set interface l2 bridge <interface> <bd-id>
       [bvi|uu-fwd] [shg <n>]
DBGvpp# set interface l2
+1 -John
From: vpp-committ...@lists.fd.io On Behalf Of Dave
Barach via lists.fd.io
Sent: Friday, September 25, 2020 3:14 PM
To: vpp-committ...@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-committers] VPP committers: VPP PTL vote
Folks,
The self-nomination period closed yesterday. We
Never mind. I have not caught up to newer messages in the thread when I
replied. -John
From: vpp-dev@lists.fd.io On Behalf Of John Lo (loj) via
lists.fd.io
Sent: Friday, June 05, 2020 4:40 PM
To: dmar...@me.com; m...@ciena.com
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] #vpp #vnet
Maybe it is this one? https://gerrit.fd.io/r/c/vpp/+/26961 -John
From: vpp-dev@lists.fd.io On Behalf Of Damjan Marion via
lists.fd.io
Sent: Friday, June 05, 2020 11:51 AM
To: m...@ciena.com
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] #vpp #vnet apparent buffer prefetch issue - seeing "l3
Makes sense to me, as saving trace data already impacts "normal"
performance. The extra function call is probably not much more overhead. -John
From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via
lists.fd.io
Sent: Monday, June 08, 2020 10:13 AM
To: vpp-dev@lists.fd.io
Subject:
I recently submitted two patches, one for master and the other for stable/2005,
to fix an issue with L3 virtual interfaces not filtering input packets with a wrong
unicast MAC address:
https://gerrit.fd.io/r/c/vpp/+/27027
https://gerrit.fd.io/r/c/vpp/+/27311
Perhaps it is the issue you are hitting.
Please clarify the following:
> When the bridge has no binding info about MAC-to-port, bridge is flooding
> packets to all interfaces.
1. Is this a Linux bridge in the kernel, and so not a bridge domain inside
VPP?
2. So packets are flooded to all interfaces in the bridge. Are you saying
Hi Nagaraju,
No extra config required than standard L3 setup you already have with IP
address/subnet on your interface. Such L3 interface should drop packets with
unicast DMAC which does not match interface MAC. If you can pull/clone the
latest VPP, either master or stable/2005 branch, and
!
--
Regards,
Balaji.
From: <vpp-dev@lists.fd.io> on behalf of "John Lo
(loj) via lists.fd.io"
<loj=cisco@lists.fd.io>
Reply-To: "John Lo (loj)" <l...@cisco.com>
Date: Wednesday, June 3, 2020 at 1:38 PM
To: Nagaraju Vemuri <nagarajuiit..
We can use “show node counters”, which should display a counter for packets
dropped due to MAC mismatch. -John
From: Nagaraju Vemuri
Sent: Wednesday, June 03, 2020 3:10 PM
To: John Lo (loj)
Cc: Andrew Yourtchenko ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined