We have a process that generates a huge amount of critical log traffic. We use chronolog to split the logs into files and put them all in one folder.
We have a reader process that consumes the logs and removes them as it goes. As the number of file entries increases, so does the size of the folder itself, which is understandable: the directory needs more blocks to hold the additional entries. However, after all of the files have been processed, the folder's size sometimes shrinks back to the default 4096 bytes and sometimes stays very large.

Is there some utility that can force the folder's entries to be optimized and its size reduced accordingly? Is there any reason, one way or the other, why this doesn't already happen?
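The only online workaround I can think of is rebuilding the directory by hand: create a fresh directory on the same filesystem, move the surviving entries into it, and swap it into place. Here's a minimal Python sketch of that idea (directory_size and compact_directory are just names I made up, and it assumes both the writer and the reader are paused during the swap):

    import os
    import tempfile

    def directory_size(path):
        # st_size of a directory is the space taken by its own
        # entries, not by the files inside it -- this is the number
        # that sits at 4096 or balloons as entries pile up.
        return os.stat(path).st_size

    def compact_directory(path):
        # Rebuild the directory so it sheds its extra blocks: move
        # every entry into a fresh directory on the same filesystem,
        # then swap the fresh one into place.
        path = os.path.abspath(path)
        fresh = tempfile.mkdtemp(dir=os.path.dirname(path))
        for name in os.listdir(path):
            os.rename(os.path.join(path, name), os.path.join(fresh, name))
        os.rmdir(path)           # now empty; frees the bloated directory
        os.rename(fresh, path)   # the rebuilt directory starts at 4096

Offline, I gather e2fsck -D will optimize directories, but that means unmounting the filesystem, which is hard to justify for a live log stream. Is there anything better?

Nick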
