Stephen hit the nail on the head: his number matches the figure I have
always heard quoted as the practical files-per-directory limit for good
I/O performance on Unix file systems.

Many years of benchmarking large SMP UNIX systems and the applications
that run on them seem to confirm those numbers as well. I'd be willing
to say the same holds for Windows as a general rule, simply because the
underlying cause makes sense on any platform.

Combine file system fragmentation with the extra indirection inflicted
by huge directories full of files, bits, and pieces, and you've got less
than desirable I/O performance.
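If you want to put a number on the directory-size effect in your own
environment, a rough UniVerse BASIC timing loop along these lines will
show the difference between a bloated account and a lean one (SOME.FILE
is a placeholder name, not a real file):

   * Time repeated OPENs of one file.  Run once from an account whose
   * directory holds thousands of entries, then from a trimmed one.
   START.TIME = TIME()
   FOR I = 1 TO 10000
      OPEN 'SOME.FILE' TO F.TEST ELSE STOP 'Cannot open SOME.FILE'
      CLOSE F.TEST
   NEXT I
   PRINT 'Seconds for 10,000 opens: ' : TIME() - START.TIME

One caveat: the OS name cache can mask the effect when the same file is
opened over and over, so spreading the opens across many different file
names in the account gives a fairer comparison. Stephen's other
recommendation, labeled common, is sketched after his message below.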



----- Original Message ----- 
From: "Stephen O'Neal" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, September 30, 2004 11:38 AM
Subject: Re: [U2] [UV] Max Files Per Directory


> [EMAIL PROTECTED] wrote:
> > I've heard some discussions in the past regarding limiting the number of
> > files per directory to help OPEN performance.  Does anyone have any
> > real-world experience on what a reasonable limit might be on a *nix file
> > system?
>
> This topic was presented at the DM Technical Users Conference.
>
> Our experience shows that it affects the length of time it takes to open a
> file, because of the time it takes to traverse the directory table to find
> the entry.  We have literally seen directories (accounts) with upwards of
> 4,000 files and dictionaries!  We saw a clear improvement in open speed
> when that was reduced to the 1,000-entry range.
>
> But this pales in comparison to keeping files open through labeled
> common.  We HIGHLY recommend holding files open in common!
>
> FYI,
>    Steve
>
>    Stephen M. O'Neal, CDP &  IBM Certified
>    SWG Services Sales Specialist / Channels & U2
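For anyone on the list who hasn't used it, the labeled common pattern
Stephen recommends above looks roughly like this in UniVerse BASIC (the
common block and file names here are made up):

   COMMON /APP.FILES/ F.CUSTOMERS, F.ORDERS, FILES.OPENED
   * Variables in a labeled common block are unassigned the first time
   * the block is referenced, so use that to open the files just once;
   * every later call reuses the handles held in common.
   IF UNASSIGNED(FILES.OPENED) THEN FILES.OPENED = 0
   IF NOT(FILES.OPENED) THEN
      OPEN 'CUSTOMERS' TO F.CUSTOMERS ELSE STOP 'Cannot open CUSTOMERS'
      OPEN 'ORDERS' TO F.ORDERS ELSE STOP 'Cannot open ORDERS'
      FILES.OPENED = 1
   END

Put that in a subroutine every program calls on entry and you pay the
open cost once per session instead of once per program.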
-------
u2-users mailing list
[EMAIL PROTECTED]
To unsubscribe please visit http://listserver.u2ug.org/