On Thursday 23 July 2009 09:41:26 Karl Vogel wrote:
> >> On Wed, 22 Jul 2009 20:01:57 -0400,
> >> John Almberg <jalmb...@identry.com> said:
>
> J> A client has a directory with a big-ish number of jpgs... maybe 4000.
> J> Problem is, I can only see 2329 of them with ls, and I'm running into
> J> other problems, I think.
>
> J> Question: Is there some limit to the number of files that a directory
> J> can contain? Or rather, is there some number where things like ls start
> J> working incorrectly?
>
> Every version of Unix I've ever used had an upper limit on the size
> of the argument list you could pass to a program, so it won't just be
> "ls" that's affected here. That's why I use 1,000 as a rule of thumb
> for the maximum number of files I put in a directory.
That arbitrary number works simply because the kern.argmax default was raised somewhere in 6.x (before that it was 64 kB):

% echo `sysctl -n kern.argmax`/1000 | bc
262

And MAXNAMLEN in sys/dirent.h is 255.

Knowing your way around the maximum argument length with xargs, as suggested in this thread, is a much better solution than trying to exercise control over directory sizes, which may or may not be under your control in the first place.

-- 
Mel
_______________________________________________
freebsd-questions mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
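[Editor's note: the xargs approach discussed above can be sketched as follows. This is not from the original thread; the scratch directory, file names, and count of 3000 are illustrative assumptions.]

```shell
# The kernel's argument-list limit, in bytes. ARG_MAX is the POSIX
# name; on FreeBSD the same value is available as sysctl kern.argmax.
getconf ARG_MAX

# Populate a scratch directory with a few thousand empty "jpegs"
# (illustrative setup, not part of the original thread).
dir=$(mktemp -d)
i=1
while [ "$i" -le 3000 ]; do
    : > "$dir/img$i.jpg"
    i=$((i + 1))
done

# "ls $dir/*.jpg" makes the shell expand every name into a single
# argument vector, which can exceed ARG_MAX. Streaming the names
# through xargs instead splits them into as many ls invocations as
# the limit requires.
find "$dir" -name '*.jpg' | xargs ls > /dev/null

# Counting with find never builds a long argument list at all.
find "$dir" -name '*.jpg' | wc -l

rm -rf "$dir"
```

If the file names may contain whitespace, `find ... -print0 | xargs -0` (supported by both BSD and GNU userland) keeps each name intact.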