On Tue, Mar 25, 2014 at 10:27 PM, Jens Alfke <[email protected]> wrote:
> On Mar 25, 2014, at 12:41 AM, Suraj Kumar <[email protected]> wrote:
>
>> If there are a million "*.couch" files under var/lib/couchdb/, I'd expect the
>> performance to be very poor / unpredictable since it now depends on the
>> underlying file system's logic.
>
> Do modern filesystems still have performance problems with large directories? 
> I’m sure none of them are representing directories as linear arrays of inodes 
> anymore. I’ve been wondering if this is just folk knowledge that’s no longer 
> relevant.

Most of the problems come from tools that can't operate efficiently on
that many files. That ruins the usability of keeping a billion files in
a single directory, regardless of how well the filesystem itself copes.
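A rough illustration of what I mean (a sketch with a hypothetical /tmp
path, and only 1000 files standing in for the millions in question):
plain ls reads and sorts every entry before printing anything, while
ls -f streams entries unsorted as readdir() returns them, so it is the
only practical option once a directory gets huge.

```shell
# Hypothetical demo directory; the real pain starts at millions of entries.
mkdir -p /tmp/many_files_demo
for i in $(seq 1 1000); do : > "/tmp/many_files_demo/f$i"; done

# Plain ls buffers and sorts ALL entries before emitting any output,
# so on a huge directory it appears to hang:
ls /tmp/many_files_demo | wc -l

# ls -f disables sorting (and implies -a, so "." and ".." show up too),
# streaming entries as the kernel returns them:
ls -f /tmp/many_files_demo | wc -l
```

The same pattern applies to rm, tar, rsync and friends: anything that
builds the full entry list in memory first degrades badly.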

Also: http://events.linuxfoundation.org/slides/2010/linuxcon2010_wheeler.pdf

As for Windows: never try to open a directory with thousands of files
in it using the default file manager, Explorer.


--
,,,^..^,,,
