I think there is some misunderstanding here. Let me explain in more detail how our transmit path works.
In our implementation we use the normal memory registration path via ibv_reg_mr(),
and we post with ibv_post_send() using lkey/vaddr/len.
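
For reference, a minimal sketch of that path on the user side, using only
standard libibverbs calls (pd, qp, buf and len are assumed to already exist;
error handling trimmed):

#include <stdint.h>
#include <infiniband/verbs.h>

/* Register the buffer once, then post it with a single SGE. */
static int post_one(struct ibv_pd *pd, struct ibv_qp *qp,
                    void *buf, uint32_t len)
{
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,   /* vaddr */
        .length = len,              /* len   */
        .lkey   = mr->lkey,         /* lkey  */
    };
    struct ibv_send_wr wr = {
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr;

    return ibv_post_send(qp, &wr, &bad_wr);
}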

For a RAW QP, the implementation of ibv_post_send() (nes_post_send() in
libnes) passes the lkey/virtual_addr/len information to our device driver in
the kernel (ud_post_send()) through a shared page. There is no data copy
here; the driver is used only for fast synchronization.
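
Just to illustrate the idea (the field names below are hypothetical, not the
actual libnes layout), each entry in the shared page only needs to carry the
three values plus an ownership flag:

#include <stdint.h>

/* Hypothetical descriptor written into the shared page by nes_post_send()
 * and consumed by ud_post_send() in the kernel. The real libnes layout
 * may differ. */
struct raw_qp_desc {
    uint32_t lkey;            /* from the ibv_reg_mr() registration */
    uint32_t len;             /* payload length in bytes */
    uint64_t vaddr;           /* user virtual address of the buffer */
    volatile uint32_t owner;  /* set by user space, cleared by the driver */
};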

Because our RAW ETH QP must use physical addresses only, ud_post_send() in
the kernel performs a virtual-to-physical memory translation and then
accesses the QP hardware for packet transmission. The packet buffer memory
was previously registered and pinned by ibv_reg_mr(), which provides the
information needed to perform that translation.
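
A rough sketch of that software translation, assuming the driver keeps a
per-MR table of the physical pages it pinned at registration time (struct
and names are illustrative, not taken from the actual driver):

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Illustrative record of a region pinned at ibv_reg_mr() time. */
struct pinned_region {
    uint64_t  va_base;    /* user virtual base of the registration */
    uint64_t  length;     /* total registered length */
    uint64_t *page_phys;  /* physical address of each pinned page */
};

/* Translate (vaddr, len) inside a pinned region to a physical address.
 * Returns 0 if the range is outside the registered region. A real
 * driver would also have to split buffers that cross a page boundary. */
static uint64_t region_translate(const struct pinned_region *r,
                                 uint64_t vaddr, uint32_t len)
{
    if (vaddr < r->va_base || vaddr + len > r->va_base + r->length)
        return 0;

    uint64_t off = vaddr - r->va_base;
    return r->page_phys[off >> PAGE_SHIFT] + (off & ~PAGE_MASK);
}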


I see.  Thanks!


The non-bypass post_send/recv channel (using /dev/infiniband/rdma_cm) is shared with all other user-kernel communication and is quite complex. It is a perfect path for QP/CQ/PD/memory management, but in my view it is too complex for traffic acceleration. The user<->kernel path through an additional driver, with a shared page for passing lkey/vaddr/len and software memory translation in the kernel, is much more efficient. It might be a good idea to make that API official after some kind of standardization. Our tests proved that it works: we achieved roughly a 2x improvement in both performance and latency. This approach could also open the way to adding some non-RDMA devices to the set of devices supported by the OFED API.
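
For completeness, the user side only needs to open the extra driver once and
mmap() the shared page; everything below (the device node name in
particular) is hypothetical and just shows the shape of the mechanism:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the driver's shared page; "/dev/nes_raw" is a made-up node name.
 * The returned page would be cast to an array of descriptors such as
 * struct raw_qp_desc from the sketch above. */
static void *map_shared_page(void)
{
    int fd = open("/dev/nes_raw", O_RDWR);
    if (fd < 0)
        return NULL;

    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);
    return page == MAP_FAILED ? NULL : page;
}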

Sounds good.

Do you have specific perf numbers to share? Is this all just optimizing mcast packets?
Also:

Does this raw QP service share the MAC address with the ports being used by the host stack? Or does each raw QP get its own MAC address?

Do you all have a user-mode UDP/IP stack running on this raw QP? If so, does it use its own IP address separate from the host stack, or does it share the host's IP address?


Thanks,

Steve.



