Hello, Curt.

"Curt Arnold" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]

> I'll try to flesh out a sandbox project for this at ApacheCON US's
> Hackathon next week. I don't know of any other log4j developers that
> will be there, but at least one log4net developer will be attending.
Great! I'd surely like to participate in such a project. Hope I can then be
more useful than just a leecher... Please let us know about further
development.

> The fundamental characteristics that I see are:

I agree with you on almost all of them, but:

> A configurable limit on the number of writers open at any one time.
> A configurable limit on the time a writer will remain open without any
> writes.

It would be something like 'whichever comes first, close the file handle',
right?

> Opening or reopening a log file will append to the file.

That should be the case only if the log file was closed due to the timeout
or the max-open limit being reached, because that would then mimic the
behavior of FileAppender: with the Append parameter set to false, the file
is overwritten on the first open...

> I would expect this to be derived from AppenderSkeleton or maybe
> WriterAppender, but not FileAppender.

Why not?

> I like the name MultiFileAppender at the moment.

> Headers and Footers might be interesting. I think you'd only write the
> header when the file does not exist. I think you would not want to write
> the footer when a file was closed due to max open files or elapsed time,
> which might mean keeping around a list of file names that had been
> encountered and writing the footers when the overall appender was closed.
> Or maybe not support headers/footers at all.

I think not supporting them is better, avoiding many complications...

> I would not suggest using a thread to monitor the elapsed time, but just
> check the map of writers on each log request.

The problem of not using a thread is that, if the system is not generating
new log events, the writer would never time out, thus failing to do what it
was supposed to do. I guess the problem we (I do, at least) face is too many
open files; that would be solved by the 'configurable limit on the number of
writers open at any one time' you suggested. It would then pick the oldest
(or the least recently used? which strategy is better?) open writer, close
it and create a new one. This way we avoid 'max open files' errors and don't
need a monitoring thread, nor a check on each log event: we check only when
creating a new file. That could be done with a pool of some sort (a rough
sketch of what I have in mind is in the P.S. below)...

My 2 cents.

Best regards,
Leo.
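
P.S. To make the writer-pool idea a bit more concrete, here is a minimal
sketch in plain Java. It is not based on any existing log4j class; the
WriterPool name, the maxOpenFiles parameter and the getWriter() method are
made up for illustration. The idea is a LinkedHashMap in access order acting
as an LRU cache of open writers: when opening a new file pushes the pool past
the configured limit, the least recently used writer is closed, and a file
that is reopened later is appended to rather than overwritten.

import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.LinkedHashMap;
import java.util.Map;

public class WriterPool {

    private final int maxOpenFiles;
    private final LinkedHashMap<String, Writer> writers;

    public WriterPool(final int maxOpenFiles) {
        this.maxOpenFiles = maxOpenFiles;
        // accessOrder = true makes the map iterate from least to most
        // recently accessed, which is exactly the LRU order we want.
        this.writers = new LinkedHashMap<String, Writer>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, Writer> eldest) {
                if (size() > WriterPool.this.maxOpenFiles) {
                    closeQuietly(eldest.getValue());
                    return true; // evict the least recently used writer
                }
                return false;
            }
        };
    }

    /** Returns an open writer for the file, opening it in append mode if needed. */
    public Writer getWriter(String fileName) throws IOException {
        Writer w = writers.get(fileName);
        if (w == null) {
            // Append, so a file reopened after being evicted is not overwritten.
            w = new FileWriter(fileName, true);
            writers.put(fileName, w);
        }
        return w;
    }

    /** Closes every writer still in the pool, e.g. when the appender is closed. */
    public void closeAll() {
        for (Writer w : writers.values()) {
            closeQuietly(w);
        }
        writers.clear();
    }

    private static void closeQuietly(Writer w) {
        try {
            w.close();
        } catch (IOException e) {
            // A real appender would route this through its error handler.
        }
    }
}

Eviction only happens inside getWriter(), i.e. when a new file has to be
opened, so there is no background thread and no per-event timeout check, as
argued above. Whether 'oldest' or 'least recently used' is the better
eviction strategy is still the open question, of course.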
