> You didn't say which AFS that is.

Sorry, vos move from 1.3.85 on FC3 xfs to 1.4.1rc7 on CentOS xfs.
> The modern, multi-threaded RX has a "pacing" bug where every now and
> then one of the parties waits for roughly 0.3 seconds for a missing
> acknowledgement. The bug must be there in the lwp-version as well but
> somehow it doesn't slow things down enough. I mention this as last
> year I ran into this while RX-tracing a memory-to-memory RX
> application and bypassed the problem by completely and horribly
> changing the ACK algorithm, just to show that >110MB/s is possible
> using RX.

OK, "horribly" is the word then. Pointer?

> On the volume with the 70k files you suffer mainly from another
> effect: the first clone increments all the link counts and fsync()s
> the link count file for every one of them. Depending on your RAID you
> won't do many more than 100-200 per second of those. After the move,
> it decrements all the link counts twice (for the clone and the
> original), fsyncing all the time; again you can calculate how long
> this will take.

Doing that calculation: 70k fsyncs at 100-200/s is ~6-12 minutes for
the first clone alone, and the 140k decrements after the move bring it
to roughly 18-35 minutes spent in fsyncs. Maybe my RAID (when it was
new, it was a RAED) is really bad when fsyncing under load. It was not
that bad when otherwise idle (bonnie++), but of course this is not an
artificial load.

> In the inode fileserver there is not much you can do about that; for
> the namei fileserver I've got a patch which a few people are already
> running that batches the fsyncs - speedup factor >200 on a volume
> with 1 million 30-byte files

Mmm, users with metadata > data...

> that used to take almost 24 hours to move.
> Although we're using it on several servers now I still consider it
> experimental - let me know if you're interested.

Sure, I'm running 1.4.1rc*, so can it get any worse? ;-)
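Out of curiosity, here is roughly how I picture the batching -- just a
sketch of the general technique, not your actual patch; the function
names, the linktable fd/offset interface and the batch size of 1024
are all made up for illustration:

    /* Instead of fsync()ing the link count file after every single
     * update, defer the fsync and issue one per batch of updates.
     * Names and batch size are invented, not from the real patch. */
    #include <sys/types.h>
    #include <unistd.h>

    #define FSYNC_BATCH 1024    /* updates between fsyncs (a guess) */

    static int pending;         /* writes not yet forced to disk */

    /* write one 16-bit link count at the given offset, deferring
     * the fsync until a whole batch has accumulated */
    int
    linkcount_write(int fd, off_t off, unsigned short count)
    {
        if (pwrite(fd, &count, sizeof(count), off)
                != (ssize_t)sizeof(count))
            return -1;
        if (++pending >= FSYNC_BATCH) {
            if (fsync(fd) < 0)
                return -1;
            pending = 0;
        }
        return 0;
    }

    /* force out whatever is still pending, e.g. at the end of the
     * clone or before declaring the move complete */
    int
    linkcount_flush(int fd)
    {
        if (pending > 0 && fsync(fd) < 0)
            return -1;
        pending = 0;
        return 0;
    }

The obvious catch is crash consistency: with a per-update fsync the
on-disk link counts can only ever be one step behind, while with
batching a crash can lose up to FSYNC_BATCH updates, so presumably the
real patch has to recover from (or at worst salvage after) that.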
Harald.
_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel