DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24407>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24407

large maxbackupindex makes RollingFileAppender dead slow

           Summary: large maxbackupindex makes RollingFileAppender dead
                    slow
           Product: Log4j
           Version: 1.2
          Platform: Other
        OS/Version: Other
            Status: NEW
          Severity: Major
          Priority: Other
         Component: Appender
        AssignedTo: [EMAIL PROTECTED]
        ReportedBy: [EMAIL PROTECTED]


To ensure that our RollingFileAppender would not start deleting backup files 
anytime soon, we set maxbackupindex=99999999.
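For reference, this is roughly how we configure it in log4j.properties (the appender name R and file name are placeholders, not our actual configuration):

```properties
# RollingFileAppender rolls app.log over to app.log.1, app.log.2, ...
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=app.log
log4j.appender.R.MaxFileSize=10MB
# A huge index so old backups are effectively never deleted
log4j.appender.R.MaxBackupIndex=99999999
```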

This seemed to work, but we found that it made log4j grind to a halt every 
time a rollover was due.

The culprit is that RollingFileAppender (at line 123) counts DOWN from 
maxbackupindex, checking for each index whether a backup file with that number exists.

This is very inefficient when maxbackupindex is a large number.

Why doesn't it start from 0 (zero) and look for the first missing file? 
That would let the search complete in a more reasonable time.

Or, even better: do a single File.list() and determine the highest existing 
index in memory, instead of hitting the file system on every iteration.
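The File.list() idea could look something like the sketch below. This is not log4j's actual code; the class and method names are made up for illustration, and it assumes backups use the usual file.1, file.2, ... naming:

```java
import java.io.File;

public class BackupIndexScan {

    /**
     * Find the highest existing backup index for a log file by listing
     * its directory once, instead of probing File.exists() for every
     * index from maxBackupIndex down to 1.
     */
    static int highestBackupIndex(File logFile) {
        File dir = logFile.getAbsoluteFile().getParentFile();
        String prefix = logFile.getName() + ".";  // backups: app.log.1, app.log.2, ...
        String[] names = dir.list();
        if (names == null) {
            return 0;  // directory unreadable or not a directory
        }
        int highest = 0;
        for (String name : names) {
            if (!name.startsWith(prefix)) {
                continue;
            }
            try {
                int idx = Integer.parseInt(name.substring(prefix.length()));
                if (idx > highest) {
                    highest = idx;
                }
            } catch (NumberFormatException ignored) {
                // suffix is not a plain number; not one of our backups
            }
        }
        return highest;
    }
}
```

With the highest index known from one directory listing, the rollover only needs to rename that many files, so the cost depends on how many backups actually exist rather than on the value of maxbackupindex.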

If you don't consider this worth fixing, then at least warn about it in the 
documentation for maxbackupindex.
