I haven't worked any more on this, but I do intend to do so. I thought that I would at least give a heads up that this issue isn't dead.

The current approach has several drawbacks: it incurs a synchronization for every dispatched message; it adds a sequenceCount field to LoggingEvent, which breaks serialization compatibility with 1.2.x and might be misinterpreted as a reliable means of detecting dropped messages; and it reports serialized copies of the same object in different states as equal (discussed later).
Since LoggingEvents have a short lifetime, the values of Object.hashCode (which I believe behave much like memory addresses) may repeat, and likely in an implementation-dependent fashion.
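For concreteness, here is a minimal sketch of the sequence-counter style of identity, assuming a dedicated counter field guarded by a class-level lock; the names and structure are mine for illustration, not the actual 1.3 code:

import java.io.Serializable;

// Sketch only: roughly where the per-message synchronization and the
// serialization break come from under a sequence-counter identity.
public class SequenceStampedEvent implements Serializable {

    // process-wide counter; every constructed event takes the class lock
    private static long nextSequence = 0;

    // adding a field like this changes the serialized form, which is
    // what breaks wire compatibility with 1.2.x
    private final long sequenceNumber;

    public SequenceStampedEvent() {
        synchronized (SequenceStampedEvent.class) {
            sequenceNumber = nextSequence++;
        }
    }

    public boolean equals(Object o) {
        // identity is the sequence number alone, so two serialized copies
        // of the same event compare equal even after their state diverges
        return o instanceof SequenceStampedEvent
                && ((SequenceStampedEvent) o).sequenceNumber == sequenceNumber;
    }

    public int hashCode() {
        return (int) (sequenceNumber ^ (sequenceNumber >>> 32));
    }
}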


The approach that seems most reliable to me is to implement LoggingEvent.equals as a real value comparison and create a corresponding LoggingEvent.hashCode. That would avoid the synchronization tax on every logging request and would only cost the callers of LoggingEvent.equals and LoggingEvent.hashCode. The implementation would need to be smart enough that an object cloned through serialization compares equal to the original, and that prepareForDeferredProcessing affects neither equals nor hashCode.
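To make that concrete, here is a rough sketch of the kind of field-by-field comparison I have in mind, written as a standalone helper rather than as the real methods. Which fields should participate, and how nulls are handled, is still open; the accessors used below exist today, but the exact set is my assumption.

import org.apache.log4j.spi.LoggingEvent;

// Sketch of a value comparison over a few representative fields. The real
// LoggingEvent carries more state (NDC, MDC, throwable and location info)
// that would need the same treatment; null handling is omitted here.
public final class LoggingEventValueComparison {

    public static boolean eventsEqual(LoggingEvent a, LoggingEvent b) {
        if (a == b) {
            return true;
        }
        if (a == null || b == null) {
            return false;
        }
        return a.timeStamp == b.timeStamp
                && a.getLevel().equals(b.getLevel())
                && a.getLoggerName().equals(b.getLoggerName())
                && a.getThreadName().equals(b.getThreadName())
                && a.getRenderedMessage().equals(b.getRenderedMessage());
    }

    public static int eventHashCode(LoggingEvent e) {
        int result = 17;
        result = 37 * result + (int) (e.timeStamp ^ (e.timeStamp >>> 32));
        result = 37 * result + e.getLevel().hashCode();
        result = 37 * result + e.getLoggerName().hashCode();
        result = 37 * result + e.getRenderedMessage().hashCode();
        return result;
    }

    private LoggingEventValueComparison() {
    }
}

If this direction holds up, the same comparison would simply move into LoggingEvent.equals and LoggingEvent.hashCode themselves.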

There would be at least two cases where the approaches give observably different results. If distinct logging events with the same content and timestamp were received, the current approach would report the messages as distinct while the new approach would report them as equal. That would only occur in tight loops like:

for (int i = 0; i < 20; i++) {
    logger.info("Hello, world");
}

The current approach would treat those as twenty distinct messages, while the new approach would see every message logged within the same millisecond as identical.

The other scenario is that the current approach sees all deserialized copies of the same original object instance as identical. log4j 1.3 adds setter methods to LoggingEvent that Filters use to change the message and other aspects of the logging event. If an event is sent through a SocketAppender both before and after Filter processing, the current approach would see the two copies as identical even though the message content, level, and other aspects could be radically different.
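For illustration, a hypothetical filter along these lines shows how the event's content can change between dispatches. The setMessage call stands in for whichever setter 1.3 actually provides; I haven't checked the exact name, so treat it as an assumption.

import org.apache.log4j.spi.Filter;
import org.apache.log4j.spi.LoggingEvent;

// Hypothetical rewrite filter: mutates the event in place. Under a
// sequence-based equals, the copies serialized before and after this
// filter ran would still compare as identical.
public class MaskPasswordFilter extends Filter {
    public int decide(LoggingEvent event) {
        String rendered = event.getRenderedMessage();
        if (rendered != null && rendered.indexOf("password=") >= 0) {
            // setMessage is my stand-in for the 1.3 setter
            event.setMessage(rendered.replaceAll("password=\\S+", "password=****"));
        }
        return NEUTRAL;
    }
}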

My attack plan would be to first add unit tests that check the essential features of any acceptable approach and that the current implementation should already pass: the equals contract cases (equal to self, not equal to null), comparison against a clone produced by serialization, and hashCode remaining invariant across prepareForDeferredProcessing. Then I would flesh out the implementation and corresponding tests for the value-comparison approach and report back to the list before proceeding.
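As a starting point, the first batch of tests might look something like the following (JUnit 3 style, to match the existing log4j tests). The constructor call is the 1.2-style one and the test names are mine, so treat the details as assumptions.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import junit.framework.TestCase;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

public class LoggingEventEqualityTest extends TestCase {

    private LoggingEvent createEvent() {
        Logger logger = Logger.getLogger("org.example.test");
        return new LoggingEvent(Logger.class.getName(), logger,
                Level.INFO, "Hello, world", null);
    }

    // equals contract: reflexive
    public void testEqualsSelf() {
        LoggingEvent event = createEvent();
        assertTrue(event.equals(event));
    }

    // equals contract: comparison against null returns false
    public void testNotEqualToNull() {
        LoggingEvent event = createEvent();
        assertFalse(event.equals(null));
    }

    // a clone produced by serialization should compare equal to the original
    public void testEqualsDeserializedClone() throws Exception {
        LoggingEvent event = createEvent();
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(event);
        out.close();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        LoggingEvent clone = (LoggingEvent) in.readObject();
        assertEquals(event, clone);
        assertEquals(event.hashCode(), clone.hashCode());
    }

    // hashCode should not change across deferred processing
    public void testHashCodeStableAfterPrepare() {
        LoggingEvent event = createEvent();
        int before = event.hashCode();
        event.prepareForDeferredProcessing();
        assertEquals(before, event.hashCode());
    }
}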

