[ 
https://issues.apache.org/jira/browse/LOG4NET-487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15105049#comment-15105049
 ] 

Dominik Psenner commented on LOG4NET-487:
-----------------------------------------

Here comes a small refinement of the locking options:

1. global (all processes, even those on different machines, are synchronized 
so that only one process on one machine rolls): this will be hard to implement
2. local (all processes on the same machine are synchronized so that only one 
of them rolls): this is a mutex lock and the way it works at the moment
3. thread (it is assumed that only one process logs and rolls): a 
"lock (instanceOfObject) {}" could do the job
4. no lock (no locking required, as there is only one process and thread that 
logs and rolls): no locks are taken, for maximum performance

At the time of writing, only options 2 and 4 make sense to implement.
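The locking levels above can be sketched as interchangeable lock objects around the roll operation. This is an illustrative Python sketch, not log4net code; all class and function names here are hypothetical, and option 2 would use a named OS mutex (e.g. System.Threading.Mutex in .NET) rather than anything shown here.

```python
import threading

class NoLock:
    """Option 4: no locking -- safe only when a single process and a
    single thread ever log and roll."""
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False

class ThreadLock:
    """Option 3: intra-process lock -- assumes only one process logs
    and rolls; roughly the 'lock (instanceOfObject) {}' case."""
    def __init__(self):
        self._lock = threading.Lock()
    def __enter__(self):
        self._lock.acquire()
        return self
    def __exit__(self, *exc):
        self._lock.release()
        return False

def roll_file(lock):
    """Roll the log file under whichever locking level was configured."""
    with lock:
        pass  # rename the current log, open a fresh file, etc.

roll_file(ThreadLock())
roll_file(NoLock())
```

The point of the sketch is that the appender only needs a common acquire/release contract; which level of synchronization sits behind it (machine-wide mutex, in-process lock, or nothing) becomes a configuration choice.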

> Control mutex type
> ------------------
>
>                 Key: LOG4NET-487
>                 URL: https://issues.apache.org/jira/browse/LOG4NET-487
>             Project: Log4net
>          Issue Type: Improvement
>          Components: Appenders
>    Affects Versions: 1.2.14, 1.3.0
>            Reporter: NN
>            Assignee: Dominik Psenner
>
> The only missing feature is an option for choosing a Local (per session) or 
> Global (per machine) mutex.
> The current code just uses the filename as the mutex name, which is good, 
> but it always creates a local one, so two sessions cannot be synchronized.
> Default is Local for backward compatibility.
> See Note in: 
> https://msdn.microsoft.com/en-us/library/system.threading.mutex%28v=vs.110%29.aspx
>  
> I think it can be an option like
> <RollingMutexType value="Global" /> 
> or something like that.
> It also applies to the FileAppender mutex.
> <LockingModel InterProcessLock>
>   <LockingMutexType value="Global" />
> </..>
> See issue #485 for reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
