You should be able to pick up both flows. I deployed this for a client last year and had it all working fine, but I no longer have access to that config, so I cannot give the full details.

I also set up a dedicated packet capture server for each IB subnet (it also had other network management responsibilities). I then used the switch-level port SPAN setup to direct the packets to my capture device. This even works for remote switches, so it let us monitor any traffic flow in the fabric simply by selecting a switch and port. The benefit of this approach is that it avoids the Heisenberg effect of the capture load on the system being observed; that load can be very significant, and I wanted to minimize the likelihood of the capture device dropping packets. There is not sufficient bandwidth on the server (due to PCIe Gen2 constraints) to support even one port at full line speed, let alone two at a time.

Note that for IPoIB, the standard tcpdump and Wireshark tools can be used.
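For example, something like the following (run on one of the hosts, or on a capture box that sees the mirrored traffic; the ib0 interface name and output file name are just placeholder assumptions) should be enough to grab both directions of the flow between Julien's two addresses:

    tcpdump -i ib0 -s 0 -w ipoib.pcap host 10.2.61.9 and host 10.2.61.10

The resulting pcap can then be opened in Wireshark for decoding.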
-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Julien Lafaye
Sent: 05 April 2011 14:20
To: [email protected]
Subject: [ewg] Capturing infiniband packets on the wire

Hello,

I am trying to find a reliable method to see the packets sent on an InfiniBand link. For this I am trying to use the ibdump software provided by Mellanox in their custom OFED distribution.

I have a simple setup with two hosts A and B configured with addresses 10.2.61.9 and 10.2.61.10 using IPoIB. Using ibdump and the ibv_rc_pingpong test, I obtain pcap files in which inbound frames are present. There is NO outbound frame in my ibdump captures:
- Is that expected behavior?
- As far as I understand, the queue pair to which payloads are sent is encoded in the IB frame. How can a program observe frames sent to another queue pair using its own queue pair?
- Does ibdump rely on Mellanox card-specific features? Is the source code available?

I am using MLNX_OFED_LINUX-1.5.2-2.1.0-rhel5.4.iso from the Mellanox website, and the output of ibv_devinfo is at the end of this email.

Best regards
Julien Lafaye

host A:
hca_id: mlx4_0
        transport:              InfiniBand (0)
        fw_ver:                 2.6.000
        node_guid:              0030:48ff:ffce:5b5c
        sys_image_guid:         0030:48ff:ffce:5b5f
        vendor_id:              0x02c9
        vendor_part_id:         26428
        hw_ver:                 0xA0
        board_id:               SM_2091000001000
        phys_port_cnt:          1
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        2048 (4)
                        active_mtu:     2048 (4)
                        sm_lid:         1
                        port_lid:       14
                        port_lmc:       0x00
                        link_layer:     IB

host B:
hca_id: mlx4_0
        transport:              InfiniBand (0)
        fw_ver:                 2.6.000
        node_guid:              0030:48ff:ffce:5b58
        sys_image_guid:         0030:48ff:ffce:5b5b
        vendor_id:              0x02c9
        vendor_part_id:         26428
        hw_ver:                 0xA0
        board_id:               SM_2091000001000
        phys_port_cnt:          1
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        2048 (4)
                        active_mtu:     2048 (4)
                        sm_lid:         1
                        port_lid:       15
                        port_lmc:       0x00
                        link_layer:     IB

_______________________________________________
ewg mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg
