
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24407

large MaxBackupIndex makes RollingFileAppender dreadfully slow

------- Additional Comments From [EMAIL PROTECTED]  2003-11-04 21:32 -------
No. A binary search could simply probe with File.exists(), so almost nothing
would need to be loaded into memory. Even for 99,999,999 files, that is under
30 File.exists() calls in the worst case. For a small number of already-present
files the performance difference is negligible either way; but when many
existing log files match the filter, the memory consumed by loading the whole
File.list() result, compared to a bounded ~30 File.exists() calls, makes it
worthwhile to stick with the simple binary search using File.exists().
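
For concreteness, here is a minimal sketch of that search (plain Java, not the
appender's actual code; the class and method names are illustrative). It
assumes the invariant the appender maintains: backups are numbered
contiguously from 1 up to some highest index, and nothing is ever written
beyond MaxBackupIndex.

    import java.io.File;

    public class BackupIndexFinder {

        // Binary search for the highest existing backup index, assuming
        // baseName.1 .. baseName.k exist contiguously with no gaps and
        // baseName.(maxBackupIndex + 1) never exists. Costs at most about
        // log2(maxBackupIndex) + 1 File.exists() probes.
        static int highestExistingIndex(String baseName, int maxBackupIndex) {
            int lo = 0;                  // invariant: index lo exists (0 = "no backups yet")
            int hi = maxBackupIndex + 1; // invariant: index hi does not exist
            while (hi - lo > 1) {
                int mid = lo + (hi - lo) / 2;
                if (new File(baseName + "." + mid).exists()) {
                    lo = mid; // highest existing index is >= mid
                } else {
                    hi = mid; // highest existing index is < mid
                }
            }
            return lo;
        }
    }

For MaxBackupIndex = 99,999,999 this is ceil(log2(10^8)) ~= 27 probes, which
is where the under-30 figure above comes from; File.list() would instead have
to materialize every matching file name in memory.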

Note that for a more reasonable MaxBackupIndex, the number of File.exists()
calls is far smaller still -- around 10 probes for a MaxBackupIndex of 1,000.

I realize this difference is somewhat petty; probably only a handful of people
will ever have that many log files in the same directory. While we're at it,
though, we might as well do something that scales reasonably well; there's no
compelling reason to load the entire File.list() result instead of doing a few
File.exists() calls.
