Hi.

I apologize in advance if I have missed the answer to this question in my searches on the log4net website and the internet in general. 

I am using log4net 1.2.9 to generate logs in an ASP.NET web application. The app runs on IIS with 20 worker processes. I am seeing what I believe is a race condition when the logs roll over at the top of the hour: the new log file appears shortly after the hour turns over and entries start accumulating in it. Then, whoosh, all of those entries disappear and new ones start appearing. This repeats several times over the first few minutes of the new hour, and then the new log file stabilizes. It looks as though each process takes responsibility for opening (and apparently truncating) the new file when it detects that the hour has changed, even if the file already exists. Once the file stabilizes, entries from all processes are appearing.

Here is my appender configuration:

    <!-- Define rolling, timestamped file output appender. -->
    <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
        <file value="log/service.log" />

        <!-- Do not overwrite existing files. -->
        <appendToFile value="true" />

        <!-- Create new log files with timestamped name. -->
        <staticLogFileName value="false" />

        <!-- Allow multiple processes to write to the same file. -->
        <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />

        <!-- Roll the log every hour. -->
        <rollingStyle value="Date" />
        <datePattern value=".yyyyMMdd-HH" />

        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
        </layout>
    </appender>
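
For reference, each worker process initializes log4net independently at startup. The exact bootstrap shouldn't matter, but it's the standard XmlConfigurator call, roughly:

    // Global.asax.cs -- standard log4net bootstrap. Each of the 20 worker
    // processes runs this on its own, so each one ends up holding its own
    // RollingFileAppender instance pointed at the same file.
    protected void Application_Start(object sender, EventArgs e)
    {
        log4net.Config.XmlConfigurator.Configure();
    }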

I haven't enabled log4net's internal error tracing: the problem is hard to reproduce on low-traffic systems, and I don't really want to turn tracing on in production. The behavior is the same whether staticLogFileName is set to "true" or "false". I've considered turning on the ImmediateFlush option to minimize the number of lost entries, but I don't really want to take the performance hit, and since the file itself appears to be getting wiped, we could still lose a lot of data at low-traffic times.
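
If I do end up trying it, I believe the change is just the standard immediateFlush property on the appender, along these lines:

        <!-- Flush the stream after every write. Cuts down on buffered
             entries lost at rollover, at the cost of some throughput. -->
        <immediateFlush value="true" />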

My questions:
1. Am I doing something obviously wrong or stupid?
2. Is this a known issue?
3. If this is a bug in log4net, is it fixed in 1.2.10?
4. If this is a bug and it is not fixed in 1.2.10, is there a workaround? (The only one I've thought of is sketched below.)
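
The only workaround I've come up with on my own is to give each process its own file, so they never fight over rollover. Untested, and I'm not sure the file element accepts a PatternString in 1.2.9 (it may be 1.2.10-only), so treat this as a sketch:

        <!-- Hypothetical: one file per process via %processid. Assumes the
             file element accepts a log4net.Util.PatternString, which I have
             not verified against 1.2.9. -->
        <file type="log4net.Util.PatternString" value="log/service-%processid.log" />

That would sidestep the contention entirely, at the cost of having 20 files per hour to collate.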

Thanks in advance for any help.  I'm losing hair and sleep over this.

-JP
