> On Oct 19, 2019, at 2:17 AM, Michał Górny <mgo...@gentoo.org> wrote:
> 
> On Fri, 2019-10-18 at 21:09 -0400, Richard Yao wrote:
>>>> On Oct 18, 2019, at 4:49 PM, Michał Górny <mgo...@gentoo.org> wrote:
>>> 
>>> On Fri, 2019-10-18 at 15:53 -0400, Richard Yao wrote:
>>>>>>>> On Oct 18, 2019, at 9:42 AM, Michał Górny <mgo...@gentoo.org> wrote:
>>>>>>> Hi, everybody.
>>>>>>> It is my pleasure to announce that yesterday (EU) evening we've switched
>>>>>>> to a new distfile mirror layout.  Users will be switching to the new
>>>>>>> layout either as they upgrade Portage to 2.3.77 or -- if they upgraded
>>>>>>> already -- as their caches expire (24hrs).
>>>>>>> The new layout is mostly a bow towards mirror admins, for some of whom
>>>>>>> having 60000+ files in a single directory has been a problem.
>>>>>>> However, I suppose some of you also found e.g. the directory index
>>>>>>> hardly usable due to its size.
>>>> This sounds like a filesystem issue. Do we know which filesystems are 
>>>> suffering?
>>>> ZFS should be fine. I believe ext2/ext3 have problems with this many 
>>>> files. ext4 is probably okay, but don’t quote me on that.
>>> 
>>> Ext2, VFAT and NTFS were mentioned on the bug [1], though I suppose this
>>> may apply only to older NTFS versions.  NFS has been mentioned too.
>> 
>> ext2 and vfat are not surprises to me (outside of the idea that anyone would 
>> use them for a mirror). NTFS and NFS are, though.
> 
> Are you surprised that people use NTFS on Windows?  Or that they use
> local mirrors over NFS?  The latter still needs to be addressed
> separately, provided that they mount it on DISTDIR.
I am surprised that it was an issue on NTFS, because it uses B-trees for its 
directories. As for NFS, I had expected that to be more dependent on the 
underlying local filesystem on the server than on NFS itself. If it slows down 
even when backed by a filesystem with fast directory operations, that might be 
a bug.
> 
>>> However, just because modern filesystems can handle them efficiently, it
>>> doesn't mean having directories that huge comes with zero cost.
>> While I am okay with the change, what do you mean when you say that having 
>> huge directories does not come with zero cost?
>> 
>> Filesystems with O(1) directory lookups like ZFS would probably be hurt by 
>> this
> 
> O(1) or O(n)?
ZFS uses extendible hashing for its directories, so lookups in that data 
structure are amortized O(1). You might consider it O(log n) because of the 
indirect block tree traversal needed to find the direct block containing the 
hash table entry, but with caching of indirect blocks, finding the direct block 
should be amortized O(1) in practice as far as read I/Os are concerned. In 
addition, the base of the logarithm is 128 or 1024 depending on the pool 
feature flags.
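To put rough numbers on the depth (a back-of-the-envelope estimate that reuses 
the ~60000-file figure from earlier in the thread and treats the structure as a 
plain b-ary tree, glossing over implementation details):

    \log_{1024}(60000) = \ln(60000)/\ln(1024) \approx 1.6
    \log_{128}(60000)  = \ln(60000)/\ln(128)  \approx 2.3

In other words, even the old flat directory is only a level or two deep in that 
tree, which is why the traversal cost is effectively constant once the indirect 
blocks are cached.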
> 
>> , but the impact should be negligible. Filesystems with O(log n) directory 
>> lookups would see faster directory lookups.
>> 
>> Outside of directory lookups, this could speed up searches and sort
>> operations when listing everything, with just about any filesystem
>> benefiting from the improvement.
>> 
>> Listing directories on such filesystems should not benefit from this unless
>> you are using ls, whose default behavior is to sort the directory contents
>> (which is where the improvement when sorting comes into play). The need to
>> sort the contents by default keeps ls from displaying anything until it has
>> scanned the entire directory. The asymptotic complexity of a fast
>> comparison-based sort improves in this situation from O(n log n) to
>> O(n log(n/b)), provided that you sort each subdirectory independently. A
>> further speed-up could be obtained by using multiple threads to parallelize
>> the per-subdirectory sorts.
>> 
>> Since I know someone will call me out on that comment, I will explain. Each
>> bucket has roughly n/b items in it, where n is the total number of files and
>> b is the number of buckets. Sorting one bucket is O((n/b) log(n/b)), and you
>> loop to sort each of the b buckets. The buckets are pre-sorted by prefix, so
>> concatenating the sorted buckets yields a fully sorted result. You therefore
>> get O(n log(n/b)) time complexity out of an O(n log n) comparison sort in
>> this very special case where you call it multiple times on data that has
>> been presorted by prefix into buckets.
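To make that concrete, here is a rough and purely hypothetical Python sketch of 
the idea; the function name, the distdir argument and the assumption that 
bucket names sort in the same order as the filenames they contain are mine, not 
anything Portage or the mirrors actually do:

import os
from concurrent.futures import ThreadPoolExecutor

def sorted_listing(distdir):
    # Sort each bucket subdirectory independently and concatenate.  This only
    # yields a globally sorted listing if the bucket names sort in the same
    # order as the filenames inside them, as assumed above.
    buckets = sorted(entry.path for entry in os.scandir(distdir)
                     if entry.is_dir())
    # Each sort handles roughly n/b names, i.e. O((n/b) log(n/b)) per bucket
    # and O(n log(n/b)) overall.  The thread pool mainly overlaps the listdir
    # calls; truly parallel sorting in CPython would need processes because
    # of the GIL.
    with ThreadPoolExecutor() as pool:
        per_bucket = pool.map(lambda path: sorted(os.listdir(path)), buckets)
        return [name for names in per_bucket for name in names]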
>> 
>> Is there any other benefit to this or did I get everything?
> 
> Listings for individual directories won't cause major pain to browsers
> anymore.  Not that there's much reason to do them.
That makes sense.
> 
> All kinds of per-directory operations will consume less memory
> and be potentially faster.
Userland would save memory when sorting or grepping a directory listing, by 
virtue of having less data to process for grep and less data in memory at a 
time for sorting (if the tool takes advantage of the layout). That would have 
performance benefits in userland as well.
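As a purely hypothetical illustration of what taking advantage of the layout 
could look like for the sorting case (nothing in Portage or coreutils does 
this; the function name and the distdir argument are mine):

import os

def iter_sorted_listing(distdir):
    # Process one bucket at a time so that only the current bucket's names
    # (roughly n/b of them) are held in memory for sorting, rather than the
    # entire listing at once.
    with os.scandir(distdir) as it:
        buckets = sorted((e for e in it if e.is_dir()), key=lambda e: e.name)
    for bucket in buckets:
        for name in sorted(os.listdir(bucket.path)):
            yield os.path.join(bucket.name, name)

As long as the consumer streams the output, peak memory is bounded by the 
largest bucket rather than by the whole directory tree.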

The kernel would see little in the way of memory savings, and in some cases 
might be slightly worse off, but the difference is negligible. Performance in 
the kernel ought to be slightly better on filesystems with O(log n) directory 
operations, though I would only expect the really bad ones to show much 
improvement.
> -- 
> Best regards,
> Michał Górny
> 

