Hi.
Bharath Ramesh wrote:
I wrote a simple test program to measure the actual time an RDMA read over
IB takes. I find a huge difference in the numbers returned by the timing. I was
wondering if someone could help me find what I might be doing
wrong in the way I am measuring the time.
The steps I take for timing are as follows.
1) Create the send WR for RDMA Read.
2) call gettimeofday ()
3) ibv_post_send () the WR
4) Loop around ibv_poll_cq () till I get the completion event.
5) call gettimeofday ();
The difference between the two would give me the time it takes to perform an RDMA
read over IB. I consistently get around 35 microseconds, which
seems really large considering the latency of IB. I am measuring
the time for transferring 4K bytes of data. If anyone wants, I can send
the code I have written. I am not subscribed to the list, so please
cc me in the reply.
I'm not familiar with the implementation of gettimeofday(), but I believe
that this function does a context switch
(and/or spends some time filling the struct that you supply to it).
I suggest that you call gettimeofday(), execute the following commands N times:
1) ibv_post_send() the WR
2) Loop around ibv_poll_cq() until you get the completion event.
and then call gettimeofday() again and divide the elapsed time by N to get
the average time for an RDMA read.
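Something along these lines (a rough sketch; qp, cq and a prepared RDMA-read
WR are assumed to come from setup code like the sketch above, and N is an
arbitrary iteration count):

#include <sys/time.h>
#include <infiniband/verbs.h>

/* Posts the same RDMA-read WR n times, waiting for each completion,
 * and returns the average time per read in microseconds. */
static double avg_rdma_read_usec(struct ibv_qp *qp, struct ibv_cq *cq,
                                 struct ibv_send_wr *wr, int n)
{
    struct ibv_send_wr *bad_wr;
    struct ibv_wc wc;
    struct timeval start, end;
    int i, ne;

    gettimeofday(&start, NULL);   /* one timestamp for the whole batch */
    for (i = 0; i < n; i++) {
        if (ibv_post_send(qp, wr, &bad_wr))
            return -1.0;
        do {
            ne = ibv_poll_cq(cq, 1, &wc);
        } while (ne == 0);
        if (ne < 0 || wc.status != IBV_WC_SUCCESS)
            return -1.0;
    }
    gettimeofday(&end, NULL);

    return ((end.tv_sec - start.tv_sec) * 1e6 +
            (end.tv_usec - start.tv_usec)) / n;
}

This way the gettimeofday() overhead is paid once per N reads instead of
once per read.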
OR
you can call a better function to read the CPU/machine time, as the
performance tests do, for example:
https://svn.openfabrics.org/svn/openib/gen2/branches/1.1/src/userspace/perftest/get_clock.h
and
https://svn.openfabrics.org/svn/openib/gen2/branches/1.1/src/userspace/perftest/get_clock.c
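The idea there is to read the CPU cycle counter directly and convert cycles
to time with the CPU frequency; a minimal x86-only sketch in that spirit
(not the actual get_clock code):

#include <stdint.h>

/* Read the time-stamp counter; divide a cycle delta by the CPU frequency
 * in MHz (e.g. taken from /proc/cpuinfo) to get microseconds. */
static inline uint64_t get_cycles(void)
{
    uint32_t lo, hi;
    asm volatile ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t) hi << 32) | lo;
}

/* usage:
 *   uint64_t c1 = get_cycles();
 *   ... RDMA read ...
 *   uint64_t c2 = get_cycles();
 *   double usec = (c2 - c1) / cpu_mhz;
 */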
Dotan