Hello folks,
Sorry to hear that my patch broke the multicast setup. I agree with Matthew
that this patch should be reverted.
This patch was supposed to fix excessive locks being held on VRF tables'
fib/mfib. For example, without this patch:
vpp# create loopback interface
loop0
vpp# set interface state loop0 up
vpp# lcp create 1 host-if loop0
vpp# show lcp
lcp default netns 'dataplane'
lcp lcp-auto-subint off
lcp lcp-sync off
lcp del-static-on-link-down on
lcp del-dynamic-on-link-down on
itf-pair: [0] loop0 tap4096 loop0 12 type tap netns dataplane
After that I assign loop0 to the test VRF and then remove everything:
# ip link add test type vrf table 10
# ip link set dev loop0 master test
# ip add add dev loop0 10.0.4.1/24
# vppctl
vpp# show ip6 table
[0] table_id:0 ipv6-VRF:0
[1] table_id:10 ipv6-VRF:10
[2] table_id:4294967295 IP6-link-local:loop0
vpp# show ip table
[0] table_id:0 ipv4-VRF:0
[1] table_id:10 ipv4-VRF:10
# vppctl delete loopback interface intfc loop0
# ip link del test
# vppctl
vpp# show ip table
[0] table_id:0 ipv4-VRF:0
[1] table_id:10 ipv4-VRF:10
vpp# show ip6 table
[0] table_id:0 ipv6-VRF:0
[1] table_id:10 ipv6-VRF:10
vpp# show ip fib table 10
ipv4-VRF:10, fib_index:1, flow hash:[src dst sport dport proto flowlabel ]
epoch:0 flags:none locks:[lcp-rt:1, ]
With the patch applied, table_id 10 is absent for both the fib and mfib, as
expected (see the lock-pairing sketch below).
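To make the locking story concrete, here is a minimal sketch of the lock
pairing involved. This is my own illustration, not the actual lcp_router.c
code: the fib_table_*/mfib_table_* calls are the vnet FIB/MFIB APIs as I
remember them, and lcp_rt_fib_src / lcp_rt_mfib_src are placeholders for the
plugin's registered "lcp-rt" sources.

/* Sketch: how a plugin keeps a VRF's fib/mfib alive via per-source locks.
 * lcp_rt_fib_src / lcp_rt_mfib_src are hypothetical placeholders for the
 * sources registered by the plugin (shown as "lcp-rt" in the output above). */
#include <vnet/fib/fib_table.h>
#include <vnet/mfib/mfib_table.h>

static fib_source_t lcp_rt_fib_src;   /* assumed to be registered elsewhere */
static mfib_source_t lcp_rt_mfib_src; /* assumed to be registered elsewhere */

static u32
example_vrf_add (fib_protocol_t proto, u32 table_id)
{
  /* Creates the table if needed and takes one lock for this source. */
  u32 fib_index = fib_table_find_or_create_and_lock (proto, table_id,
                                                      lcp_rt_fib_src);
  /* The mfib for the same table_id is locked separately. */
  mfib_table_find_or_create_and_lock (proto, table_id, lcp_rt_mfib_src);
  return fib_index;
}

static void
example_vrf_del (fib_protocol_t proto, u32 table_id)
{
  u32 fib_index = fib_table_find (proto, table_id);
  u32 mfib_index = mfib_table_find (proto, table_id);

  /* Every unlock must pair with exactly one earlier lock: a missed unlock
   * leaves the table behind (the leak shown above), while unlocking a lock
   * that is still needed tears down state that is still in use. */
  if (~0 != fib_index)
    fib_table_unlock (fib_index, proto, lcp_rt_fib_src);
  if (~0 != mfib_index)
    mfib_table_unlock (mfib_index, proto, lcp_rt_mfib_src);
}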
There's also another issue with the cleanup of multicast routes (unrelated to
this patch): when the interface is in the global table we don't do a proper
cleanup either, and we end up with a DELETED interface path, as the log below
shows (a sketch of the missing cleanup follows after the log):
vpp# create loopback interface
loop0
vpp# set interface state loop0 up
vpp# lcp create 1 host-if loop0
vpp# show lcp
lcp default netns 'dataplane'
lcp lcp-auto-subint off
lcp lcp-sync off
lcp del-static-on-link-down on
lcp del-dynamic-on-link-down on
itf-pair: [0] loop0 tap4096 loop0 14 type tap netns dataplane
vpp# show ip mfib
ipv4-VRF:0, fib_index:0 flags:none
(*, 0.0.0.0/0): flags:Drop,
Interfaces:
multicast-ip4-chain
[@0]: dpo-drop ip4
(*, 224.0.0.1/32):
Interfaces:
multicast-ip4-chain
[@1]: dpo-replicate: [index:0 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
(*, 224.0.0.2/32):
Interfaces:
multicast-ip4-chain
[@1]: dpo-replicate: [index:1 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
vpp# show ip mfib
ipv4-VRF:0, fib_index:0 flags:none
(*, 0.0.0.0/0): flags:Drop,
Interfaces:
multicast-ip4-chain
[@0]: dpo-drop ip4
(*, 224.0.0.0/24):
Interfaces:
loop0: Accept,
multicast-ip4-chain
[@1]: dpo-replicate: [index:7 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
(*, 224.0.0.1/32):
Interfaces:
loop0: Accept,
multicast-ip4-chain
[@1]: dpo-replicate: [index:0 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
(*, 224.0.0.2/32):
Interfaces:
loop0: Accept,
multicast-ip4-chain
[@1]: dpo-replicate: [index:1 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
vpp# delete loopback interface intfc loop0
vpp# show ip mfib
ipv4-VRF:0, fib_index:0 flags:none
(*, 0.0.0.0/0): flags:Drop,
Interfaces:
multicast-ip4-chain
[@0]: dpo-drop ip4
(*, 224.0.0.0/24):
Interfaces:
DELETED (1): Accept,
multicast-ip4-chain
[@1]: dpo-replicate: [index:7 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
(*, 224.0.0.1/32):
Interfaces:
multicast-ip4-chain
[@1]: dpo-replicate: [index:0 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
(*, 224.0.0.2/32):
Interfaces:
multicast-ip4-chain
[@1]: dpo-replicate: [index:1 buckets:1 flags:[has-local ] to:[0:0]]
[0] [@1]: dpo-receive
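A proper fix would remove the interface's Accept path from the link-local
mfib entries before the interface goes away. The following is only a sketch
of that idea under my assumptions: VNET_SW_INTERFACE_ADD_DEL_FUNCTION,
mfib_table_entry_path_remove() and the struct fields are quoted from memory
of vnet/interface and vnet/mfib and may need adjusting, and
MFIB_SOURCE_PLUGIN_LOW stands in for whatever source actually installed the
path.

/* Sketch only: the kind of cleanup that appears to be missing when an
 * interface in the global table is deleted. */
#include <vnet/vnet.h>
#include <vnet/mfib/mfib_table.h>

static clib_error_t *
example_sw_interface_add_del (vnet_main_t *vnm, u32 sw_if_index, u32 is_add)
{
  if (is_add)
    return NULL;

  /* (*, 224.0.0.0/24) - the entry left with a DELETED path above. */
  const mfib_prefix_t pfx = {
    .fp_proto = FIB_PROTOCOL_IP4,
    .fp_len = 24,
    .fp_grp_addr = { .ip4.as_u32 = clib_host_to_net_u32 (0xe0000000) },
  };
  fib_route_path_t path = {
    .frp_proto = DPO_PROTO_IP4,
    .frp_sw_if_index = sw_if_index,
    .frp_weight = 1,
    .frp_mitf_flags = MFIB_ITF_FLAG_ACCEPT,
  };
  u32 mfib_index =
    mfib_table_get_index_for_sw_if_index (FIB_PROTOCOL_IP4, sw_if_index);

  /* Drop this interface's Accept path before the interface disappears so no
   * DELETED path is left behind. MFIB_SOURCE_PLUGIN_LOW is a placeholder;
   * the real source must match the one that added the path. */
  mfib_table_entry_path_remove (mfib_index, &pfx, MFIB_SOURCE_PLUGIN_LOW,
                                &path);
  return NULL;
}

VNET_SW_INTERFACE_ADD_DEL_FUNCTION (example_sw_interface_add_del);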
On Thu, 6 Nov 2025 at 17:00, Matthew Smith via lists.fd.io <mgsmith=
[email protected]> wrote:
> Hi Pim,
>
> I have been looking into it. The problem appears to be caused by
> https://gerrit.fd.io/r/c/vpp/+/43036. I am working on a fix.
>
> Thanks,
> -Matt
>
>
> On Wed, Nov 5, 2025 at 8:22 PM Pim van Pelt via lists.fd.io <pim=
> [email protected]> wrote:
>
>> Hoi,
>>
>> After 25.10 release was complete, I cut a new amd64 and arm64 release of
>> Containerlab docker images.
>> In the process, I triggered integration failures in Containerlab. We
>> pinned the release to 25.02:
>>
>> https://github.com/srl-labs/containerlab/commit/76761a830b6610994e83874d11679679d375bda9
>>
>> Somewhere between:
>> - https://gerrit.fd.io/r/c/vpp/+/43495
>> - https://gerrit.fd.io/r/c/vpp/+/43639 (likely unrelated)
>> - https://gerrit.fd.io/r/c/vpp/+/43741 and its cherrypick on stable/2510
>> https://gerrit.fd.io/r/c/vpp/+/43808
>>
>> we have caused a regression that breaks mcast ip4 and ip6 in 25.10.
>> I took a look: 25.06 seems to work just fine in a simple topology with
>> Bird2 or FRR on two VPP instances directly connected.
>> In 25.10, OSPF adjacencies do not form, and a packet trace in VPP shows
>> the packets being dropped after multicast lookup.
>>
>> The test is simple: take two VPP instances w/ LCP. Enable OSPF with
>> either Bird or FRR. The regression shows no packets into the TAP (and no
>> OSPF adjacencies formed). The baseline shows packets and adjacencies
>> formed.
>>
>> Rolling back to the parent commit
>> 28707cb5a1b43de81169bfb41bb48e2d09ecf84f (before the first Gerrit),
>> things work again.
>> Can we take another look at these gerrits to see if they had the
>> unintended consequence of breaking (OSPF and other) multicast in LCP?
>>
>> groet,
>> Pim
>>
>> On 2025-11-04 07:42, linuxfoundation.commodore557 via lists.fd.io wrote:
>> > Working with vpp version 25.10.
>> >
>> > After manually adding the multicast routes for 224.0.0.0/24, OSPF is
>> > working very well (adding the mc routes manually was not required in
>> > version 25.06).
>> > The OSPF control plane messages 224.0.0.5 are sent and received.
>> >
>> > But for PIM I have CP messages addressed to 224.0.0.22. Those are
>> > dropped on the tap interface as the pcap drop capture shows ...
>> >
>> > vppctl pcap trace drop intfc tap4096
>> >
>> > =>
>> >
>> > 1 0.000000 192.168.10.141 224.0.0.22 IGMPv3 86
>> > Membership Report / Join group 239.0.1.1 for any sources
>> > 2 0.542558 192.168.10.141 224.0.0.22 IGMPv3 86
>> > Membership Report / Join group 239.0.1.1 for any sources
>> > 3 4.190593 192.168.10.141 224.0.0.22 IGMPv3 142
>> > Membership Report / Join group 239.0.1.1 for any sources / Join group
>> > 224.0.0.6 for any sources / Join group 224.0.1.39 for any sources /
>> > Join group 224.0.1.40 for any sources / Join group 224.0.0.13 for any
>> > sources / Join group 224.0.0.22 for any sources / Join group 224.0.0.2
>> > for any sources / Join group 224.0.0.5 for any sources
>> > 4 14.815603 192.168.10.141 224.0.0.22 IGMPv3 86
>> > Membership Report / Leave group 239.0.1.1
>> > 5 14.817046 192.168.10.141 239.0.1.1 IGMPv3 86
>> > Membership Query, specific for group 239.0.1.1, source {0.0.0.0}
>> > 6 14.817046 192.168.10.141 239.0.1.1 IGMPv3 82
>> > Membership Query, specific for group 239.0.1.1
>> > 7 15.646594 192.168.10.141 224.0.0.22 IGMPv3 86
>> > Membership Report / Leave group 239.0.1.1
>> > 8 15.646699 192.168.10.141 239.0.1.1 IGMPv3 86
>> > Membership Query, specific for group 239.0.1.1, source {0.0.0.0}
>> > 9 15.646700 192.168.10.141 239.0.1.1 IGMPv3 82
>> > Membership Query, specific for group 239.0.1.1
>> > 10 15.817155 192.168.10.141 239.0.1.1 IGMPv3 82
>> > Membership Query, specific for group 239.0.1.1
>> > 11 16.817214 192.168.10.141 239.0.1.1 IGMPv3 82
>> > Membership Query, specific for group 239.0.1.1
>> >
>> > How to get those CP messages through?
>> >
>> > Multicast and promiscuous mode enabled on all involved interfaces
>> >
>> >
>>
>> --
>> Pim van Pelt <[email protected]>
>> PBVP1-RIPE https://ipng.ch/
>>
>>
>>
>>
>
>
>
--
Best regards
Stanislav Zaikin