On Wed, 12 Sep 2007 12:29:31 -0600
Nicholas Leippe <[EMAIL PROTECTED]> wrote:


> We have a reader process that consumes the logs and removes them as it goes.
> 
Is the number of files guaranteed to reach zero at some point?


> However, sometimes, after processing all of the files the folder size gets 
> reduced back to the default 4096, and sometimes it stays very large.

I've done this kind of thing with many tens of thousands of small files,
and I've never seen the size of the directory get reduced
automagically (on ext-type filesystems, anyway).
I haven't looked at this in the last several years, though, so maybe
newer versions of Linux do this automatically.  I'll wait for others
with recent experience to answer this one.
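One quick way to see the behavior for yourself is a throwaway test like
the one below.  This is only a minimal sketch under my own assumptions:
the /tmp/logdir path and the file count are made up for illustration.

#!/usr/bin/env python
# Fill a directory with many small files, empty it again, and compare
# the directory inode's size at each step.  On ext2/ext3 the directory
# usually stays at its high-water mark instead of shrinking back to 4096.
import os

d = "/tmp/logdir"                           # hypothetical scratch directory
os.mkdir(d)
print("empty:   %d" % os.stat(d).st_size)   # typically 4096

for i in range(50000):
    open(os.path.join(d, "log%06d" % i), "w").close()
print("full:    %d" % os.stat(d).st_size)   # much larger now

for name in os.listdir(d):
    os.unlink(os.path.join(d, name))
print("emptied: %d" % os.stat(d).st_size)   # often still large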

> Is there some utility that can force the folder to have its entries 
> optimized and have its size reduced accordingly? Is there any reason 
> one way or another why this doesn't already happen?

The thing I've done many times is to always open files like these in a
directory named for a date timestamp, sometimes with month granularity,
other times down to minute granularity.  Form the directory names
like YYYYMMDDHHMM so that lexical order is chronological order and you
can easily find the oldest directory.  When a directory is emptied and
a later directory exists (to prevent a race condition), delete the
empty directory.  A rough sketch of the reader side is below.
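Something along these lines, in Python, shows the idea.  This is only a
minimal sketch under my own assumptions: LOG_ROOT and process() are
made-up names, and the writer is assumed to always create new files in
the directory for the current timestamp.

#!/usr/bin/env python
# Reader side: walk the timestamp-named directories oldest-first,
# consume and remove each log file, and remove a directory only when
# it is empty and a later directory already exists.
import os, re

LOG_ROOT = "/var/spool/applogs"          # hypothetical log root
STAMP = re.compile(r"^\d{12}$")          # matches YYYYMMDDHHMM names

def process(path):
    pass                                 # consume one log file here

dirs = sorted(d for d in os.listdir(LOG_ROOT) if STAMP.match(d))
for i, d in enumerate(dirs):
    full = os.path.join(LOG_ROOT, d)
    for name in sorted(os.listdir(full)):
        path = os.path.join(full, name)
        process(path)
        os.unlink(path)
    # Only drop the directory if a newer one exists; the writer has
    # moved on, so nothing new should land here.
    if i + 1 < len(dirs) and not os.listdir(full):
        os.rmdir(full)

Each run of the reader leaves at most the newest directory behind, and
because every directory is short-lived its size never gets a chance to
balloon the way a single flat spool directory does.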

HTH

bill

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
