Yesterday I complained about a gen2 x86 client RDMA read not working against a gen1 PowerPC server. Today I had my hardware guy measure the PCI-Express interface on the PowerPC gen1 server. It seems the RDMA read is at least started, so all the ideas about endianness problems can be dropped.
To explain the problem I have to give a few more details on our application: we are using the InfiniBand interface between a PowerPC gen1 server and a gen1/gen2 x86 client for high-speed film image transport from a scanner (server) to a workstation (client). To get the fastest response and the best performance, the x86 client reads the images out of the scanner by RDMA from a hardware FIFO which is registered as physical memory.

Unfortunately the scanner does not always have an image ready to deliver. So, to avoid a time-consuming connect/RDMA/disconnect cycle for each and every image, I keep the connection up, and the scanner FIFO hardware delays its responses to the memory reads by up to 500 msec. And here is the problem: it seems the gen2 stack does not tolerate such a long delay on the RDMA read. Since a gen1 client stack can live with that delay, I think the gen2 stack should be able to do the same on the same hardware.

The question now is: how do I increase the waiting period/timeout for the completion of an RDMA operation?

Things are getting clearer.

Thomas
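P.S. To pin down what I mean, here is a sketch of where I suspect the knob is. If I understand the gen2 user verbs (libibverbs) correctly, the relevant fields are attr.timeout and attr.retry_cnt passed to ibv_modify_qp() during the RTR -> RTS transition; the local ACK timeout is 4.096 usec * 2^timeout, so timeout = 18 would give roughly 1.07 sec per attempt, comfortably above our 500 msec worst case. This assumes a plain RC QP set up directly through libibverbs; the function name is mine, and qp and sq_psn would come from the existing connection setup code:

#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Sketch: move an RC QP to RTS with a local ACK timeout long enough
 * to cover a responder that needs up to 500 msec to answer a read.
 * qp and sq_psn come from the normal connection setup. */
static int move_to_rts_with_long_timeout(struct ibv_qp *qp, uint32_t sq_psn)
{
        struct ibv_qp_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.qp_state      = IBV_QPS_RTS;
        attr.sq_psn        = sq_psn;
        attr.timeout       = 18;  /* 4.096 usec * 2^18 ~= 1.07 sec */
        attr.retry_cnt     = 7;   /* transport retries before an error CQE */
        attr.rnr_retry     = 7;   /* 7 = retry RNR NAKs indefinitely */
        attr.max_rd_atomic = 1;   /* outstanding RDMA reads as initiator */

        return ibv_modify_qp(qp, &attr,
                             IBV_QP_STATE     |
                             IBV_QP_SQ_PSN    |
                             IBV_QP_TIMEOUT   |
                             IBV_QP_RETRY_CNT |
                             IBV_QP_RNR_RETRY |
                             IBV_QP_MAX_QP_RD_ATOMIC);
}

If that is the right knob, then my guess would be that the gen1 stack simply defaults to a larger timeout value than the gen2 stack does, which would explain the difference in behaviour on the same hardware.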
