On Wed, Mar 24, 2010 at 4:44 PM, Steve Simmons <[email protected]> wrote:

> On Mar 24, 2010, at 4:38 PM, Russ Allbery wrote:
>
>> Steve Simmons <[email protected]> writes:
>>
>>> Our estimate too. But before drilling down, it seemed worth checking if
>>> anyone else has a similar server - ext3 with 14,000 or more volumes in a
>>> single vice partition - and has seen a difference. Note, tho, that it's
>>> not #inodes or total disk usage in the partition. The servers that
>>> exhibited the problem had a large number of mostly empty volumes.
>>
>> That's a *lot* of volumes from our perspective. The biggest partition
>> we've got has about 7000 volumes on it. It must be really fun when you
>> have to restart that file server and reattach volumes.
>
> Nightmare is a better word. Fortunately very recent 1.4 releases have gotten
> a lot faster on that front. It's also another reason why we're desperately
> trying to carve out time so we can test dynamic attach, but that's grist for
> another thread.
If your group (or anyone else on this list, for that matter) can find the time, please please test DAFS. Any feedback whatsoever would be helpful and deeply appreciated. In the unlikely event that problems ensue, then by all means open bugs, start a discussion on -devel, contact me or Deason, etc. Getting a 1.6 release out the door is a high priority for all of us, and to some extent that is going to be predicated on DAFS success stories.

As it stands, we believe the DAFS architecture shipping in 1.5.x will provide a significant speedup for all moderate-to-large namei fileserver deployments. However, the proof will be in the pudding, and this is where we need the help of the community. If there are unforeseen corner cases where DAFS causes a regression, we need to know about them ASAP.

-Tom

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
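For anyone wanting to try it, the usual route is to replace the traditional `fs` bnode on a test fileserver with a `dafs` bnode via bos. The sketch below assumes a standard Transarc-style install path (/usr/afs/bin) and a hypothetical server name (fs1.example.com); adjust both to your cell, and keep a rollback plan since this swaps the fileserver processes.

```shell
# Sketch: converting an existing namei fileserver to demand-attach (DAFS).
# Assumes a 1.5.x/1.6 install that ships the da* binaries; server name
# and paths below are illustrative, not prescriptive.

# Stop and remove the old non-DAFS fileserver instance.
bos stop fs1.example.com fs -localauth
bos delete fs1.example.com fs -localauth

# Create the dafs bnode, which runs four processes: the demand-attach
# fileserver, volserver, salvageserver, and salvager.
bos create fs1.example.com dafs dafs \
    -cmd "/usr/afs/bin/dafileserver" \
    -cmd "/usr/afs/bin/davolserver" \
    -cmd "/usr/afs/bin/salvageserver" \
    -cmd "/usr/afs/bin/dasalvager" \
    -localauth
```

Rolling back is the mirror image: delete the `dafs` bnode and recreate the `fs` bnode with the non-DA binaries.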
