I've come to a couple of partial conclusions, and have a few more
requests for information.

First, gettimeofday() on your system only reports with a granularity
of 10 ms, which is likely the scheduling quantum of your kernel
(100 Hz).  So you don't get nice fine-grained timestamps.  Any
apparent 10 ms jump in the logs should be ignored; it's just the
next "tick" of the clock.

Second, all these long delays that I had pointed out earlier come
directly before a line that records a SYNC operation:

> > [D 14:46:05.673257] dbpf_attr_cache_insert: inserting 1055489 (k_size is 0 
> > | b_size is 0)
> > [D 14:46:05.763259] db SYNC called servicing op type DSPACE_CREATE

> > [D 14:46:05.763259] Updating cached attributes for key 1055489
> > [D 14:46:05.833261] db SYNC called servicing op type DSPACE_SETATTR

> > [D 14:46:05.833261] dbpf_attr_cache_insert: inserting 1055485 (k_size is 0 
> > | b_size is 0)
> > [D 14:46:05.933263] db SYNC called servicing op type DSPACE_CREATE

Not sure why I didn't notice this before; these are disk sync
operations.  Enabling TroveSyncMeta and TroveSyncData in the fs.conf
on my machine generates similar sorts of jumps, although much
shorter:

                 lee     pw
    create #1   90 ms   15 ms
    setattr     70 ms    8 ms
    create #2  100 ms    8 ms

I don't know why it appears to be so much slower on your machine.
Are you using a slow disk?  Is there other traffic hitting the disk
at the same time?  You could also try editing your fs.conf to
disable those two settings and see what you get.
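For reference, turning the two settings off would look something
like this in fs.conf -- the section and value syntax here is from
memory of PVFS2's config format, so check your own fs.conf for the
exact layout:

    <StorageHints>
        TroveSyncMeta no
        TroveSyncData no
    </StorageHints>

With both set to "no" the db sync per operation goes away, at the
cost of losing recent metadata on a crash, so this is a diagnostic
rather than a recommended production setting.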

I still don't understand why things appear to work quickly over TCP
but slowly over IB.  Maybe you could run the same single-mkdir test
over TCP, without mounting the fs, so we can compare the two?  I
have one log now (*), but filling in the rest of this matrix could
help rule things out:

    tcp with    syncmeta + syncdata
    tcp without syncmeta + syncdata
    ib  without syncmeta + syncdata
 *  ib  with    syncmeta + syncdata

Let me know if you have time to check on any of this.

                -- Pete
_______________________________________________
Pvfs2-developers mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers