Karl Vogel wrote:
K The main reason I stick with 1000 is because directories are read linearly unless you're using something like ReiserFS...
On Sun, 26 Jul 2009 08:34:50 +0100, Matthew Seaman m.sea...@infracaninophile.co.uk said:
M You mean filesystems like FreeBSD UFS2 with DIRHASH?
…understanding what is going on. I'm reading up on this, and as soon as I know enough to either understand the issue, or ask an intelligent question, I will do so...
When a program is executed with arguments, there is a system-imposed limit on the size of this argument list. On FreeBSD this limit is ARG_MAX.
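The limit is easy to inspect on any POSIX system via getconf(1). A minimal sketch (the 32-byte filename figure is just an illustrative assumption):

```shell
#!/bin/sh
# ARG_MAX is the byte budget shared by argv and the environment
# when exec'ing a program.
limit=$(getconf ARG_MAX)
echo "ARG_MAX is $limit bytes"

# Back-of-the-envelope: how many 32-byte filenames fit in one argv?
echo "roughly $((limit / 32)) short filenames per command line"
```

POSIX guarantees at least 4096 bytes (_POSIX_ARG_MAX); real systems are far larger, which is why the limit usually only bites with thousands of long paths.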
On Monday 27 July 2009 12:42:32 Chris Cowart wrote:
John Almberg wrote:
Which is why I'm starting to think that (a) my problem is different or (b) I'm so clueless that there isn't any problem at all, and I'm just not understanding something (most likely scenario!)
It looks to me like the thread began assuming that you must be typing `ls *`.
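The distinction matters: plain `ls somedir` reads the directory itself via readdir() and never builds a long argument list, while `ls *` makes the shell expand every filename into argv first, and only that path can hit ARG_MAX. A small sketch (five files for speed, and the directory name `globdemo` is made up):

```shell
#!/bin/sh
rm -rf globdemo && mkdir globdemo
for i in 1 2 3 4 5; do touch "globdemo/pic$i.jpg"; done

# readdir path: ls opens the directory itself, no argv expansion
ls globdemo | wc -l

# glob path: the shell expands * into five arguments before ls runs
( cd globdemo && ls -- * | wc -l )
```

Both commands count the same five entries here; the difference only shows up when the glob expansion grows past the system limit.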
Karl Vogel wrote:
That arbitrary number has worked very nicely for me for 20 years under Solaris, Linux, and several BSD variants. The main reason I stick with 1000 is because directories are read linearly unless you're using something like ReiserFS, and I get impatient waiting for…
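One common way to honour a cap like that 1,000-file rule of thumb is to fan files out into subdirectories keyed on a name prefix. A minimal sketch, not anything from the thread itself (the directory names `flat_demo`/`bucket_demo` and the two-character bucket are assumptions):

```shell
#!/bin/sh
# Fan a flat directory out into prefix buckets so no single
# directory grows unboundedly.
rm -rf flat_demo bucket_demo && mkdir flat_demo bucket_demo
for n in alpha beta gamma delta; do touch "flat_demo/$n.jpg"; done

for f in flat_demo/*.jpg; do
    base=$(basename "$f")
    bucket=$(printf '%s' "$base" | cut -c1-2)   # e.g. "al" for alpha.jpg
    mkdir -p "bucket_demo/$bucket"
    mv "$f" "bucket_demo/$bucket/$base"
done
```

With real photo filenames the prefixes spread the files across many buckets; a hash of the name works too when names share a common prefix.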
On Saturday 25 July 2009 23:34:50 Matthew Seaman wrote:
It's fairly rare to run into this as a practical limitation during most day to day use, and there are various tricks like using xargs(1) to extend the usable range. Even so, for really big applications that need to process long lists of…
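The xargs(1) trick works because xargs splits its stdin into as many argument lists as needed, each safely under the system limit, and runs the command once per batch. A sketch with made-up names (`xargs_demo`):

```shell
#!/bin/sh
rm -rf xargs_demo && mkdir xargs_demo
i=1
while [ "$i" -le 200 ]; do touch "xargs_demo/f$i.jpg"; i=$((i + 1)); done

# Instead of `rm xargs_demo/*.jpg` (one shell-built argv), stream the
# NUL-terminated names; xargs batches them and runs rm repeatedly.
find xargs_demo -name '*.jpg' -print0 | xargs -0 rm
```

The `-print0`/`-0` pairing keeps filenames containing spaces or newlines intact; both flags exist in BSD and GNU find/xargs.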
On Thursday 23 July 2009 09:41:26 Karl Vogel wrote:
K Every version of Unix I've ever used had an upper limit on the size of the argument list you could pass to a program, so it won't just be ls that's affected here. That's why I use 1,000 as a rule of thumb for the maximum number of…
On Wed, 22 Jul 2009 20:01:57 -0400, John Almberg jalmb...@identry.com said:
I seem to have run into an odd problem...
A client has a directory with a big-ish number of jpgs... maybe 4000. Problem is, I can only see 2329 of them with ls, and I'm running into other problems, I think.
Question: Is there some limit to the number of files that a directory can hold?
How are you using ls? I presume something along the lines of `ls -la | more`.
What do `sysctl fs.file-max` and `sysctl kern.maxfiles` tell you?
I've seen directories with far more files than that. The only problem I've ever had with that many is using the rm command. In that case, you will need to use…
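One escape hatch for the rm case (my assumption about where the quoted sentence was heading) is to let find(1) unlink the files itself, so no argument list is built at all:

```shell
#!/bin/sh
rm -rf rm_demo && mkdir rm_demo
for i in 1 2 3; do touch "rm_demo/f$i.jpg"; done

# find -delete unlinks each match as it is found; argv never grows,
# so ARG_MAX is irrelevant. Supported by both BSD and GNU find.
find rm_demo -name '*.jpg' -delete
```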