Hi, Damjan,

It seems the bug is in vpp.

On R230, vpp shows eth0 is down.

vpp# sh hardware-interfaces eth0
              Name                Idx   Link  Hardware
eth0                               2    down  eth0
  Link speed: unknown


  Ethernet address b4:96:91:23:1e:d6
  Intel 82599
    carrier down
    flags: admin-up promisc pmd rx-ip4-cksum
    rx: queues 2 (max 128), desc 512 (min 32 max 4096 align 8)
    tx: queues 3 (max 64), desc 512 (min 32 max 4096 align 8)
    pci: device 8086:154d subsystem 8086:7b11 address 0000:06:00.01 numa 0
    max rx packet len: 15872
    promiscuous: unicast on all-multicast on
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
                       macsec-strip vlan-filter vlan-extend jumbo-frame scatter
                       security keep-crc
    rx offload active: ipv4-cksum
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
                       tcp-tso macsec-insert multi-segs security
    tx offload active: none
    rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
                       ipv6-udp ipv6-ex ipv6
    rss active:        none
    tx burst function: (nil)
    rx burst function: ixgbe_recv_pkts_vec

    extended stats:
      mac local errors                                   318
vpp#

However, the testpmd tool shows it is up.

testpmd> show port summary 1
Number of available ports: 2
Port MAC Address       Name         Driver         Status   Link
1    B4:96:91:23:1E:D6 0000:06:00.1 net_ixgbe      up       10000Mbps
testpmd>

Does this prove something is wrong on the vpp side?

Thanks.
Chuan

On Fri, Oct 18, 2019 at 3:06 PM Chuan Han <chuan...@google.com> wrote:

> I built the testpmd binary on both r740 and r230 and ran the test. I did see
> testpmd report some link status changes on the r230 server. The testpmd report
> on r740 is more stable: no status changes were reported.
>
> r230 log
> ================================
> Press enter to exit
>
> Port 0: link state change event
>
> Port 1: link state change event
>
> Port 1: link state change event
>
> r740 log
> ================================
> Press enter to exit
> ...x0 - TX RS bit threshold=32
>
> If it is a dpdk bug, what shall I do? Report to dpdk mailing list?
>
> On Fri, Oct 18, 2019 at 11:55 AM Chuan Han via Lists.Fd.Io <chuanhan=
> google....@lists.fd.io> wrote:
>
>> So, it is a dpdk bug?
>>
>> I am new to dpdk/vpp.
>>
>> How do I run dpdk testpmd? Shall I install dpdk separately on the r230
>> server? Are there any steps to follow?
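For reference, a typical testpmd session looks something like the following (binary name and flags vary by DPDK version; `-w` was the PCI whitelist option in the DPDK releases of this era, later renamed `-a` — treat the exact invocation as an assumption):

```
# bind the port to a DPDK-compatible driver first, e.g.:
#   dpdk-devbind.py --bind=vfio-pci 0000:06:00.1
./testpmd -l 0-1 -n 4 -w 0000:06:00.1 -- -i
testpmd> show port summary all
testpmd> show port info 0
```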
>>
>> On Fri, Oct 18, 2019 at 10:30 AM Damjan Marion <dmar...@me.com> wrote:
>>
>>> In this case we are purely relying on link state provided by DPDK.
>>> Have you tried to check if same problem exists with DPDK testpmd app?
>>>
>>>
>>> On 18 Oct 2019, at 10:26, Chuan Han via Lists.Fd.Io <
>>> chuanhan=google....@lists.fd.io> wrote:
>>>
>>> I cleaned up startup config a bit. I can still see eth0 down.
>>>
>>> See the attachment for the config file and log. There are some errors in the
>>> log, but I am not sure whether they are worrisome.
>>>
>>> On Thu, Oct 17, 2019 at 5:22 PM Florin Coras <fcoras.li...@gmail.com>
>>> wrote:
>>>
>>>> This looks like a DPDK issue, but I’ll let Damjan be the judge of that.
>>>>
>>>> To see if this is a config issue, could you simplify your startup
>>>> config by
>>>> - removing “workers 0” from the two nics and adding “num-rx-queues 2”
>>>> to the nics or to the default stanza, if you’re running with 2 workers
>>>> - comment out the cryptodev config
>>>>
>>>> If the two nics don’t come up, check if there’s any obvious dpdk error
>>>> in “show log”.
>>>>
>>>> Florin
>>>>
>>>> On Oct 17, 2019, at 4:56 PM, Chuan Han via Lists.Fd.Io
>>>> <chuanhan=google....@lists.fd.io> wrote:
>>>>
>>>> I tried disabling autoneg on the R740 side. It is not allowed either. If vpp
>>>> cannot allow two nics to be successfully added to the same vpp instance, it
>>>> seems to be a bug. Is it something that can be easily spotted in the code
>>>> base?
>>>>
>>>> It is also not possible to enforce symmetry on the internet. The other
>>>> party can do anything as long as basic ping works.
>>>>
>>>> On Thu, Oct 17, 2019 at 3:55 PM Chuan Han <chuan...@google.com> wrote:
>>>>
>>>>> If I only put one phy nic, i.e., eth0, into vpp, 'sh hardware' shows it
>>>>> is up. If I put both eth0 and eth1 into vpp, eth0 is always down. It seems
>>>>> something is wrong with the nic, or vpp does not support this type of
>>>>> hardware?
>>>>>
>>>>> We tried enabling autoneg on R230. It is not allowed. To avoid
>>>>> asymmetric settings, would disabling autoneg on R740 help?
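For reference, autoneg on a copper port can usually be toggled with ethtool, along the lines of the fragment below. Note this only works while the nic is still bound to the kernel driver (once it is bound to vfio-pci/uio for vpp, ethtool no longer sees it); the interface name is taken from the output later in this thread:

```
# run while eno3 is still bound to the kernel driver, not vfio-pci/uio
ethtool -s eno3 autoneg off speed 10000 duplex full
ethtool eno3    # verify the Auto-negotiation line
```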
>>>>>
>>>>> On Thu, Oct 17, 2019 at 3:46 PM Balaji Venkatraman (balajiv) <
>>>>> bala...@cisco.com> wrote:
>>>>>
>>>>>> It plays a role if it is asymmetric at both ends. You could enable it
>>>>>> at both ends and check.
>>>>>>
>>>>>> On Oct 17, 2019, at 3:15 PM, Chuan Han <chuan...@google.com> wrote:
>>>>>>
>>>>>>
>>>>>> I rebooted the r230 machine and found that the phy nic corresponding to
>>>>>> eth0 has autoneg off.
>>>>>>
>>>>>> root@esdn-relay:~/gnxi/perf_testing/r230# ethtool enp6s0f1
>>>>>> Settings for enp6s0f1:
>>>>>>         Supported ports: [ FIBRE ]
>>>>>>         Supported link modes:   10000baseT/Full
>>>>>>         Supported pause frame use: Symmetric
>>>>>>         Supports auto-negotiation: No
>>>>>>         Supported FEC modes: Not reported
>>>>>>         Advertised link modes:  10000baseT/Full
>>>>>>         Advertised pause frame use: Symmetric
>>>>>>         Advertised auto-negotiation: No
>>>>>>         Advertised FEC modes: Not reported
>>>>>>         Speed: 10000Mb/s
>>>>>>         Duplex: Full
>>>>>>         Port: Direct Attach Copper
>>>>>>         PHYAD: 0
>>>>>>         Transceiver: internal
>>>>>>         Auto-negotiation: off
>>>>>>         Supports Wake-on: d
>>>>>>         Wake-on: d
>>>>>>         Current message level: 0x00000007 (7)
>>>>>>                                drv probe link
>>>>>>         Link detected: yes
>>>>>> root@esdn-relay:~/gnxi/perf_testing/r230#
>>>>>>
>>>>>> On r740, autoneg is on. It is copper.
>>>>>>
>>>>>> root@esdn-lab:~/gnxi/perf_testing/r740/vpp# ethtool eno3
>>>>>> Settings for eno3:
>>>>>>         Supported ports: [ TP ]
>>>>>>         Supported link modes:   100baseT/Full
>>>>>>                                 1000baseT/Full
>>>>>>                                 10000baseT/Full
>>>>>>         Supported pause frame use: Symmetric
>>>>>>         Supports auto-negotiation: Yes
>>>>>>         Supported FEC modes: Not reported
>>>>>>         Advertised link modes:  100baseT/Full
>>>>>>                                 1000baseT/Full
>>>>>>                                 10000baseT/Full
>>>>>>         Advertised pause frame use: Symmetric
>>>>>>         Advertised auto-negotiation: Yes
>>>>>>         Advertised FEC modes: Not reported
>>>>>>         Speed: 10000Mb/s
>>>>>>         Duplex: Full
>>>>>>         Port: Twisted Pair
>>>>>>         PHYAD: 0
>>>>>>         Transceiver: internal
>>>>>>         Auto-negotiation: on
>>>>>>         MDI-X: Unknown
>>>>>>         Supports Wake-on: umbg
>>>>>>         Wake-on: g
>>>>>>         Current message level: 0x00000007 (7)
>>>>>>                                drv probe link
>>>>>>         Link detected: yes
>>>>>> root@esdn-lab:~/gnxi/perf_testing/r740/vpp#
>>>>>>
>>>>>> It is not clear whether this plays a role.
>>>>>>
>>>>>> On Thu, Oct 17, 2019 at 2:41 PM Chuan Han via Lists.Fd.Io
>>>>>> <chuanhan=google....@lists.fd.io> wrote:
>>>>>>
>>>>>>> Restarting the ixia controller does not help. We ended up with both ixia
>>>>>>> ports having '!'.
>>>>>>>
>>>>>>> We are not sure how the ixia port plays a role here. The eth0 interfaces
>>>>>>> are the ones connecting the two servers, not the ones connected to ixia.
>>>>>>>
>>>>>>> On Thu, Oct 17, 2019 at 11:26 AM Balaji Venkatraman (balajiv) <
>>>>>>> bala...@cisco.com> wrote:
>>>>>>>
>>>>>>>> Hi Chuan,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Could you please try to reset the ixia controller connected to port
>>>>>>>> 4?
>>>>>>>>
>>>>>>>> I have seen issues with ‘!’ on ixia. Given the carrier on eth0 is
>>>>>>>> down, I suspect the ixia port.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Balaji.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> From: Chuan Han <chuan...@google.com>
>>>>>>>> Date: Thursday, October 17, 2019 at 11:09 AM
>>>>>>>> To: "Balaji Venkatraman (balajiv)" <bala...@cisco.com>
>>>>>>>> Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>, Arivudainambi
>>>>>>>> Appachi gounder <aappa...@google.com>, Jerry Cen <zhiw...@google.com>
>>>>>>>> Subject: Re: [vpp-dev] Basic l2 bridging does not work
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Yes. It is unidirectional stream from port 1 to port 4.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Another engineer, Nambi, configured ixia. What he showed me
>>>>>>>> yesterday is that the ixia port connected to port 1 is green and good. The
>>>>>>>> ixia port connected to port 4 is green but has a red exclamation mark,
>>>>>>>> which means ping does not work.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> We also found that eth0 on R230 is down, as shown by the "show hardware
>>>>>>>> eth0" command. However, "show int" shows it is up.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> vpp# sh hardware-interfaces eth0
>>>>>>>>               Name                Idx   Link  Hardware
>>>>>>>> eth0                               2    down  eth0
>>>>>>>>   Link speed: unknown
>>>>>>>>   Ethernet address b4:96:91:23:1e:d6
>>>>>>>>   Intel 82599
>>>>>>>>     carrier down
>>>>>>>>     flags: admin-up promisc pmd rx-ip4-cksum
>>>>>>>>     rx: queues 1 (max 128), desc 512 (min 32 max 4096 align 8)
>>>>>>>>     tx: queues 3 (max 64), desc 512 (min 32 max 4096 align 8)
>>>>>>>>     pci: device 8086:154d subsystem 8086:7b11 address 0000:06:00.01 numa 0
>>>>>>>>     max rx packet len: 15872
>>>>>>>>     promiscuous: unicast on all-multicast on
>>>>>>>>     vlan offload: strip off filter off qinq off
>>>>>>>>     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>>>>>>>>                        macsec-strip vlan-filter vlan-extend jumbo-frame scatter
>>>>>>>>                        security keep-crc
>>>>>>>>     rx offload active: ipv4-cksum
>>>>>>>>     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>>>>>>>>                        tcp-tso macsec-insert multi-segs security
>>>>>>>>     tx offload active: none
>>>>>>>>     rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
>>>>>>>>                        ipv6-udp ipv6-ex ipv6
>>>>>>>>     rss active:        none
>>>>>>>>     tx burst function: (nil)
>>>>>>>>     rx burst function: ixgbe_recv_pkts_vec
>>>>>>>>
>>>>>>>>     rx frames ok                                       33278
>>>>>>>>     rx bytes ok                                      3960082
>>>>>>>>     extended stats:
>>>>>>>>       rx good packets                                  33278
>>>>>>>>       rx good bytes                                  3960082
>>>>>>>>       rx q0packets                                     33278
>>>>>>>>       rx q0bytes                                     3960082
>>>>>>>>       rx size 65 to 127 packets                        33278
>>>>>>>>       rx multicast packets                             33278
>>>>>>>>       rx total packets                                 33278
>>>>>>>>       rx total bytes                                 3960082
>>>>>>>> vpp# sh int
>>>>>>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
>>>>>>>> eth0                              2      up          9000/0/0/0     rx packets                 33279
>>>>>>>>                                                                    rx bytes                 3960201
>>>>>>>>                                                                    drops                          5
>>>>>>>>                                                                    punt                           1
>>>>>>>>                                                                    tx-error                   33274
>>>>>>>> eth1                              1      up          9000/0/0/0     rx packets                 33274
>>>>>>>>                                                                    rx bytes                 3959606
>>>>>>>>                                                                    tx packets                 33273
>>>>>>>>                                                                    tx bytes                 3959487
>>>>>>>>                                                                    drops                      33274
>>>>>>>>                                                                    tx-error                       3
>>>>>>>> local0                            0     down          0/0/0/0
>>>>>>>> vpp#
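[Editor's note: the disagreement between the two commands above is consistent with admin state and carrier state being tracked separately. The toy Python model below is illustrative only — it is not vpp source, and the flag names are invented:]

```python
# Illustrative model: an interface carries two independent state bits,
# which is why "sh int" and "sh hardware" can disagree.
ADMIN_UP = 0x1   # operator intent, e.g. "set interface state eth0 up"
LINK_UP  = 0x2   # carrier state reported by the driver (DPDK here)

def show_int_state(flags):
    # the "sh int" State column reflects the admin bit
    return "up" if flags & ADMIN_UP else "down"

def show_hardware_link(flags):
    # the "sh hardware" Link column reflects the carrier bit
    return "up" if flags & LINK_UP else "down"

eth0 = ADMIN_UP  # R230 eth0: admin up, but no carrier
print(show_int_state(eth0))       # -> up
print(show_hardware_link(eth0))   # -> down
```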
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Oct 17, 2019 at 10:54 AM Balaji Venkatraman (balajiv) <
>>>>>>>> bala...@cisco.com> wrote:
>>>>>>>>
>>>>>>>> Hi Chuan,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I assume u have unidirectional stream ? ixia->1->2->3->4->ixia?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> vpp# sh int
>>>>>>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
>>>>>>>> eth0                              2      up          9000/0/0/0     rx packets                 30925
>>>>>>>>                                                                    rx bytes                 3680075
>>>>>>>>                                                                    drops                          5
>>>>>>>>                                                                    punt                           1
>>>>>>>>                                                                    tx-error                   30920
>>>>>>>> eth1                              1      up          9000/0/0/0     rx packets                 30920  <<< packets are received on port 3
>>>>>>>>                                                                    rx bytes                 3679480
>>>>>>>>                                                                    tx packets                 30919
>>>>>>>>                                                                    tx bytes                 3679361
>>>>>>>>                                                                    drops                      30920  <<< all dropped at port 3
>>>>>>>>                                                                    tx-error                       3
>>>>>>>> local0                            0     down          0/0/0/0
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> In the "sh error" output on R230 we see:
>>>>>>>>
>>>>>>>>          1             ethernet-input             l3 mac mismatch   <<<<
>>>>>>>>          3               eth1-output              interface is down
>>>>>>>>      30922               eth0-output              interface is down
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Do you see the ARP getting resolved on ixia? The mac on the ixia port
>>>>>>>> with 172.16.1.2/24 should be seen on its other port. Are the ixia
>>>>>>>> ports up at both ends?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Balaji.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> From: <vpp-dev@lists.fd.io> on behalf of "Chuan Han via Lists.Fd.Io"
>>>>>>>> <chuanhan=google....@lists.fd.io>
>>>>>>>> Reply-To: "chuan...@google.com" <chuan...@google.com>
>>>>>>>> Date: Thursday, October 17, 2019 at 9:59 AM
>>>>>>>> To: "Balaji Venkatraman (balajiv)" <bala...@cisco.com>
>>>>>>>> Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
>>>>>>>> Subject: Re: [vpp-dev] Basic l2 bridging does not work
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> It seems R740 vpp works fine. All packets coming from port 1 go to
>>>>>>>> port 2.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> vpp# sh int
>>>>>>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
>>>>>>>> eth0                              2      up          9000/0/0/0     tx packets                 30895
>>>>>>>>                                                                    tx bytes                 3676505
>>>>>>>> eth1                              1      up          9000/0/0/0     rx packets                 30895
>>>>>>>>                                                                    rx bytes                 3676505
>>>>>>>> local0                            0     down          0/0/0/0
>>>>>>>>
>>>>>>>> vpp# sh int
>>>>>>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
>>>>>>>> eth0                              2      up          9000/0/0/0     tx packets                 30897
>>>>>>>>                                                                    tx bytes                 3676743
>>>>>>>> eth1                              1      up          9000/0/0/0     rx packets                 30897
>>>>>>>>                                                                    rx bytes                 3676743
>>>>>>>> local0                            0     down          0/0/0/0
>>>>>>>>
>>>>>>>> vpp# sh error
>>>>>>>>    Count                    Node                  Reason
>>>>>>>>      30899                l2-output               L2 output packets
>>>>>>>>      30899                l2-learn                L2 learn packets
>>>>>>>>          1                l2-learn                L2 learn misses
>>>>>>>>      30899                l2-input                L2 input packets
>>>>>>>>      30899                l2-flood                L2 flood packets
>>>>>>>> vpp#
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> The drop happened on the R230 vpp. Port 3 dropped all packets,
>>>>>>>> complaining about a down interface. However, the show command shows the
>>>>>>>> interfaces are up.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> vpp# sh int
>>>>>>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
>>>>>>>> eth0                              2      up          9000/0/0/0     rx packets                 30925
>>>>>>>>                                                                    rx bytes                 3680075
>>>>>>>>                                                                    drops                          5
>>>>>>>>                                                                    punt                           1
>>>>>>>>                                                                    tx-error                   30920
>>>>>>>> eth1                              1      up          9000/0/0/0     rx packets                 30920
>>>>>>>>                                                                    rx bytes                 3679480
>>>>>>>>                                                                    tx packets                 30919
>>>>>>>>                                                                    tx bytes                 3679361
>>>>>>>>                                                                    drops                      30920
>>>>>>>>                                                                    tx-error                       3
>>>>>>>> local0                            0     down          0/0/0/0
>>>>>>>>
>>>>>>>> vpp# sh error
>>>>>>>>    Count                    Node                  Reason
>>>>>>>>          2                llc-input               unknown llc ssap/dsap
>>>>>>>>      61846                l2-output               L2 output packets
>>>>>>>>      61846                l2-learn                L2 learn packets
>>>>>>>>          2                l2-learn                L2 learn misses
>>>>>>>>      61846                l2-input                L2 input packets
>>>>>>>>      61846                l2-flood                L2 flood packets
>>>>>>>>          1             ethernet-input             l3 mac mismatch
>>>>>>>>          3               eth1-output              interface is down
>>>>>>>>      30922               eth0-output              interface is down
>>>>>>>> vpp#
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I am not sure how to check mac issues. Can you explain a bit more? Here
>>>>>>>> is what I can see on the R230 vpp.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> vpp# show bridge-domain 1 detail
>>>>>>>>   BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd   UU-Flood  Flooding  ARP-Term  arp-ufwd   BVI-Intf
>>>>>>>>     1       1      0     off        on        on       flood      on       off       off        N/A
>>>>>>>>
>>>>>>>>            Interface           If-idx ISN  SHG  BVI  TxFlood   VLAN-Tag-Rewrite
>>>>>>>>              eth0                2     1    0    -      *           none
>>>>>>>>              eth1                1     1    0    -      *           none
>>>>>>>> vpp# sh l2fib verbose
>>>>>>>>     Mac-Address     BD-Idx If-Idx BSN-ISN Age(min) static filter bvi   Interface-Name
>>>>>>>>  28:99:3a:f4:3a:a6    1      2      0/1      -       -      -     -        eth0
>>>>>>>>  28:99:3a:f4:3a:9c    1      1      0/1      -       -      -     -        eth1
>>>>>>>> L2FIB total/learned entries: 2/2  Last scan time: 0.0000e0sec  Learn limit: 4194304
>>>>>>>> vpp#
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Oct 16, 2019 at 6:01 PM Balaji Venkatraman (balajiv) <
>>>>>>>> bala...@cisco.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> +-------------------------------------------------------------------------+
>>>>>>>> |                                                                         |
>>>>>>>> |                                   IXIA                                  |
>>>>>>>> |                                                                         |
>>>>>>>> +-----------+-----------------------------------------------^-------------+
>>>>>>>>             |172.16.1.1/24                                  | 172.16.1.2/24
>>>>>>>>             |                                               |
>>>>>>>>             |eth0                                           | eth0
>>>>>>>> +-----------v-------------+                    +------------+-----------+
>>>>>>>> |           1             |                    |            4           |
>>>>>>>> |                         |                    |                        |
>>>>>>>> |                         |eth1          eth1  |                        |
>>>>>>>> |        VPP1           2 +--------------------> 3        VPP 2         |
>>>>>>>> |                         |                    |                        |
>>>>>>>> +-------------------------+                    +------------------------+
>>>>>>>>          R 740                                           R 230
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> It might help to check whether the packet counts at ingress (port 1)
>>>>>>>> and egress (port 2) match, and similarly for ports 3 and 4. Also check
>>>>>>>> the mac entries seen on both vpp instances. ARP req/rep tracing might
>>>>>>>> also help.
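[Editor's note: as a rough sketch of that cross-check, here is the arithmetic on the R230 counters quoted elsewhere in this thread, in Python; the dictionary layout is invented for illustration:]

```python
# Counters from "sh int" on R230, as quoted in this thread.
counters = {
    ("eth1", "rx packets"): 30920,   # port 3 ingress
    ("eth1", "drops"):      30920,
    ("eth0", "tx-error"):   30920,   # port 4 egress failures
}

ingress = counters[("eth1", "rx packets")]
failed  = counters[("eth0", "tx-error")]

# Every packet received on port 3 failed to leave port 4,
# so nothing was actually forwarded.
forwarded = ingress - failed
print(forwarded)  # -> 0
```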
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> /-
>>>>>>>>
>>>>>>>> Balaji
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> From: <vpp-dev@lists.fd.io> on behalf of "Damjan Marion via Lists.Fd.Io"
>>>>>>>> <dmarion=me....@lists.fd.io>
>>>>>>>> Reply-To: "dmar...@me.com" <dmar...@me.com>
>>>>>>>> Date: Wednesday, October 16, 2019 at 5:12 PM
>>>>>>>> To: "chuan...@google.com" <chuan...@google.com>
>>>>>>>> Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
>>>>>>>> Subject: Re: [vpp-dev] Basic l2 bridging does not work
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 16 Oct 2019, at 16:14, Chuan Han via Lists.Fd.Io
>>>>>>>> <chuanhan=google....@lists.fd.io> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi, vpp experts,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> We are trying to make a basic l2 bridge work within vpp.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> We have two servers, r230 and r740, each of which has two phy nics.
>>>>>>>> The two servers are connected via cable. On each server, we bring these
>>>>>>>> two nics into the same vpp instance and put them into the same l2 bridge.
>>>>>>>> We tried sending traffic using ixia. However, ixia shows ping does not
>>>>>>>> work.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I attached the topology, vpp conf files, startup conf file, and
>>>>>>>> logs.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Please advise where we might have gotten it wrong.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>> Chuan
>>>>>>>>
>>>>>>>> <r230 vpp.conf><r740 vpp.conf><r230 vpp startup.cfg><r740 vpp
>>>>>>>> startup.cfg><r740.log><r230.log><vpp testbed - bridge.pdf>
>>>>>>>> -=-=-=-=-=-=-=-=-=-=-=-
>>>>>>>> Links: You receive all messages sent to this group.
>>>>>>>>
>>>>>>>> View/Reply Online (#14189):
>>>>>>>> https://lists.fd.io/g/vpp-dev/message/14189
>>>>>>>> Mute This Topic: https://lists.fd.io/mt/34655826/675642
>>>>>>>> Group Owner: vpp-dev+ow...@lists.fd.io
>>>>>>>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com]
>>>>>>>> -=-=-=-=-=-=-=-=-=-=-=-
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> At first look everything looks ok, including the packet trace.
>>>>>>>>
>>>>>>>> Can you try to clear counters* and enable packet trace on both
>>>>>>>> instances?
>>>>>>>>
>>>>>>>> Then send a known number of packets and find out where the drop happens
>>>>>>>> by looking into the same outputs you already shared.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> * "clear int", "clear run", "clear trace"
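[Editor's note: spelled out, that debug loop might look like the vpp CLI sequence below; the trace node name assumes traffic arrives via the dpdk plugin's `dpdk-input` node:]

```
vpp# clear interfaces
vpp# clear runtime
vpp# clear trace
vpp# trace add dpdk-input 50
    ... send a known number of packets from ixia ...
vpp# show trace
vpp# show interfaces
vpp# show errors
```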
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Damjan
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>> <log><vpp startup.conf>
>>>
>>> --
>>> Damjan
>>>
>
