amit byron wrote:
> Dotan Barak <dotanb <at> dev.mellanox.co.il> writes:
>
>> Hi.
>>
>> amit byron wrote:
>>
>>> hi,
>>>
>>> if i call ib_post_send(IB_WR_SEND) with a message
>>> size of 512 bytes, the message is received on the
>>> peer (second) node. the two nodes are connected
>>> point-to-point.
>>>
>>> but if the message size is increased to 4096 bytes, the
>>> second node receives the message, but the message content
>>> is missing (empty).
>>>
>>> won't the infiniband stack break the message into smaller
>>> chunks and reassemble it on the peer node?
>>>
>>> thanks,
>>> Amit.
>>
>> Which transport type are you using?
>> if you are using a UD QP, then the answer is no.
>> for any other transport type, the answer is yes (the message is
>> broken down into packets of the MTU size specified in the QP context).
>>
>> maybe you have a different problem in your code. did you check the
>> completion status on both of the nodes?
>>
>> Dotan
>
> i'm using an RC connection. the issue seems to occur only when
> running in xen's domain 0 (xen0). on the core linux kernel, the
> code works -- i'm able to both send a message and perform an
> rdma write with a size greater than 4096.
>
> i don't see any errors reported while sending a message with
> a size greater than 4096 (the same holds true for rdma write).
>
> i'm able to send a message (greater than 4096 bytes) from code
> running in the core linux kernel to peer-node code that is
> running in xen's domain 0.
>
> this suggests there is some hard limit that prevents
> infiniband from sending the message, but no errors are reported
> by the infiniband stack.
>
> any suggestions on how to enable tracing in the hca driver?
>
> thanks,
> Amit.
1) You can use perfquery on the sender/receiver host to find out how much
data/how many packets were sent/received.

2) Why is the number 4096 so important? Maybe the problem happens when
using a message size > MTU ... which MTU do you use in the QP? Maybe you
should try to send a message of MTU + 1 bytes and check the result ...

Dotan

_______________________________________________
openib-general mailing list
[email protected]
http://openib.org/mailman/listinfo/openib-general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
