VFS2_DEBUGFILE=debug.out
and look for messages like
    memcache_register: hit [%d] %p len %lld (via %p len %lld) ...
and compare the first pointer value to your myBuffer.  A hit
is good.  If you get messages that say "miss", please send me the
trace.
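A quick way to tally hits versus misses in the trace (a sketch only; the two sample log lines here are fabricated to match the format above, and on a real run you would grep the debug.out produced by the application):

```shell
# Fabricated sample trace in the memcache_register format shown above;
# on a real system, debug.out is written by the run itself.
cat > debug.out <<'EOF'
memcache_register: hit [0] 0x2aaaab000000 len 65536 (via 0x2aaaab000000 len 1048576)
memcache_register: miss [1] 0x2aaaac000000 len 65536
EOF

# Count registration-cache hits and misses in the trace.
hits=$(grep -c 'memcache_register: hit' debug.out)
misses=$(grep -c 'memcache_register: miss' debug.out)
echo "hits=$hits misses=$misses"
```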

I'll let other pvfs types think about the contiguous request
question and choices of stripe sizes.  I don't know.


It looks like the memcache code may not be working correctly. But at the very least it is consuming huge amounts of CPU.

This is the output of oprofile during one of our test runs. hstar_ is the routine in GAMESS doing computation. We appear to be spending more time looking up and freeing memory registration cache information than waiting for I/O or computing.

CPU: AMD64 processors, speed 1403.2 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No
unit mask) count 100000
samples  %        app name                 symbol name
6361527  26.3906  gamess.Feb222006R5.x     hstar_
5029662  20.8654  gamess.Feb222006R5.x     memcache_lookup_cover
4185769  17.3645  no-vmlinux               (no symbols)
4147846  17.2072  gamess.Feb222006R5.x     bufferRead
1969618   8.1709  gamess.Feb222006R5.x     memcache_memfree
799027    3.3147  libc-2.3.6.so            (no symbols)
206162    0.8553  libpscrt.so.1            memset.pathscale.opteron
197544    0.8195  gamess.Feb222006R5.x     __job_time_mgr_add
106939    0.4436  mthca.so                 (no symbols)
92519     0.3838  gamess.Feb222006R5.x     ddot_
68023     0.2822  gamess.Feb222006R5.x     sotran_
52222     0.2166  oprofiled                (no symbols)
41085     0.1704  libpthread-2.3.6.so      pthread_mutex_lock
32711     0.1357  gamess.Feb222006R5.x     PINT_process_request
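For what it's worth, the two memcache_* symbols together account for about 29% of samples, more than hstar_ itself. A quick awk pass over the relevant rows of the table above confirms the arithmetic (the second column is the percentage of total samples):

```shell
# Sum the sample percentages of the two memcache_* symbols from the
# oprofile output (columns: samples, %, app name, symbol name).
memcache_pct=$(awk '/memcache_/ { t += $2 } END { printf "%.4f", t }' <<'EOF'
5029662  20.8654  gamess.Feb222006R5.x     memcache_lookup_cover
1969618   8.1709  gamess.Feb222006R5.x     memcache_memfree
EOF
)
echo "memcache total: ${memcache_pct}%"
```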
 
_______________________________________________
Pvfs2-developers mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers
