At 05:43 PM 1/10/2005, Curt Arnold wrote:

I said Object.hashCode, the underlying implementation you would get if you had called super.hashCode() within LoggingEvent. It has similar characteristics to a memory address,

Yes, the Object.hashCode implementation is computed over the object's address. However, two equal objects *must* return the same hashcode. In particular, two successive deserializations of the same event *must* return the same hashcode. Object.hashCode does not respect this fundamental constraint.
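For concreteness, here is a minimal sketch of what I mean (not the actual log4j implementation; the field names are assumed for illustration): a value-based equals/hashCode pair computed purely from the event's state, so that a deserialized copy hashes and compares equal to the original.

import java.io.Serializable;

class Event implements Serializable {
  private final long timeStamp;
  private final String loggerName;
  private final String message;

  Event(long timeStamp, String loggerName, String message) {
    this.timeStamp = timeStamp;
    this.loggerName = loggerName;
    this.message = message;
  }

  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Event)) return false;
    Event other = (Event) o;
    return timeStamp == other.timeStamp
        && loggerName.equals(other.loggerName)
        && message.equals(other.message);
  }

  public int hashCode() {
    // Derived purely from state, never from identity, so a deserialized
    // copy returns exactly the same value as the original.
    int result = (int) (timeStamp ^ (timeStamp >>> 32));
    result = 31 * result + loggerName.hashCode();
    result = 31 * result + message.hashCode();
    return result;
  }
}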


it would be highly unlikely that two objects that exist at the same time would have the same Object.hashCode, so it could be used as an identifier.

Unless the objects are pooled. We don't do that currently but might in the future. In any case, identity-based equality (Object.equals()/Object.hashCode()) is not an acceptable implementation for us, as explained above.


However, since LoggingEvents are typically short-lived, you might see the same hashCode repeat for successive LoggingEvents; at least the probability of a collision would be much higher. However, I guess it would be unlikely that they would repeat between garbage collections.

There is no such guarantee (that hash codes will not repeat between garbage collections).

I don't think any of the performance tests attempt to dispatch logging requests from multiple threads. It might be good to add some so the cost of the sync lock could be determined.

+1
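Something along these lines might serve as a starting point. This is only a rough sketch of such a test (the class name and the thread/iteration counts are made up, not an existing log4j performance test):

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class MultiThreadedLoggingTest {
  static final Logger logger = Logger.getLogger(MultiThreadedLoggingTest.class);

  public static void main(String[] args) throws InterruptedException {
    BasicConfigurator.configure(); // simple console appender
    final int eventsPerThread = 100000;
    int threadCount = 8;
    Thread[] threads = new Thread[threadCount];
    long start = System.currentTimeMillis();
    for (int t = 0; t < threadCount; t++) {
      threads[t] = new Thread(new Runnable() {
        public void run() {
          for (int i = 0; i < eventsPerThread; i++) {
            logger.info("Hello, world");
          }
        }
      });
      threads[t].start();
    }
    for (int t = 0; t < threadCount; t++) {
      threads[t].join();
    }
    long elapsed = System.currentTimeMillis() - start;
    System.out.println((threadCount * eventsPerThread) + " events in " + elapsed + " ms");
  }
}

Comparing the elapsed time for one thread versus several would give a first estimate of the cost of the sync lock.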



The implementation would need to be smart enough so that an object cloned through serialization would compare equal to the original, and so that prepareForDeferredProcessing would not affect either equals or hashCode.

There would be at least two cases where the approaches give observably different results. If distinct logging events with the same content and timestamp were received, the current approach would report the messages as distinct while the new approach would report them as equal. This would only occur in tight loops like:

for (int i = 0; i < 20; i++) {
    logger.info("Hello, world");
}

The current approach would see that as twenty distinct messages while the new approach would see all messages within the same millisecond as identical.

Hence the necessity for a sequence counter.
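For concreteness, here is one way a per-event sequence number keeps two events with identical message and timestamp from comparing equal. This is only an illustrative sketch, not log4j's actual implementation:

class SequencedEvent {
  private static long nextSequenceNumber = 0;

  private final long sequenceNumber;
  private final long timeStamp = System.currentTimeMillis();
  private final String message;

  SequencedEvent(String message) {
    synchronized (SequencedEvent.class) {
      sequenceNumber = nextSequenceNumber++;
    }
    this.message = message;
  }

  public boolean equals(Object o) {
    if (!(o instanceof SequencedEvent)) return false;
    SequencedEvent other = (SequencedEvent) o;
    // Two events logged in the same millisecond with the same message still
    // differ, because each one received a distinct sequence number.
    return sequenceNumber == other.sequenceNumber
        && timeStamp == other.timeStamp
        && message.equals(other.message);
  }

  public int hashCode() {
    return (int) (sequenceNumber ^ (sequenceNumber >>> 32));
  }
}

A serialized-then-deserialized copy carries the same sequence number, so it still compares equal to the original.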

I didn't think this scenario would be a requirement. However, if it is, it might be addressed by something other than a sequence counter (Object.hashCode might be sufficient for this use).

Isn't this a self-defeating argument? If Object.hashCode can be used to check for equality, you might as well implement the equals() method as:


class LoggingEvent {

  public boolean equals(Object r) {
    return this == r;
  }
}

I'd love to hear your thoughts behind adding setMessage and other mutators to LoggingEvent. It is generally a very good thing for objects to be immutable, since that eliminates a score of potential synchronization problems. In log4j 1.3, LoggingEvents have become mutable, which means that we have to think about scenarios where a LoggingEvent is placed in an AsyncAppender and then modified by a filter. The AsyncAppender may or may not see the new message. It is usually much easier to prevent synchronization issues by designing immutable objects than to ensure that mutable but non-synchronized classes are used correctly.
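To make the hazard being described concrete, here is a deliberately simplified illustration (the class and names are invented, not log4j code) of an unsynchronized mutable event shared between a "filter" thread and an "appender" thread:

class MutableEvent {
  private String message; // not final, not volatile, no synchronization

  MutableEvent(String message) {
    this.message = message;
  }

  void setMessage(String message) {
    this.message = message;
  }

  String getMessage() {
    return message;
  }
}

class VisibilityDemo {
  public static void main(String[] args) {
    final MutableEvent event = new MutableEvent("original");

    // "Appender" thread: may observe either value, since nothing
    // establishes a happens-before ordering between the two threads.
    new Thread(new Runnable() {
      public void run() {
        System.out.println(event.getMessage());
      }
    }).start();

    // "Filter" on the main thread mutates the event concurrently.
    event.setMessage("rewritten");
  }
}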

In 1.3, LoggingEvents are still immutable. Just have a look at the setter methods. The only mutable part is the LoggingEvent 'properties' member field. (The sequence number is mutable, but that's just an omission. I'll correct it shortly.)



-- Ceki Gülcü

  The complete log4j manual: http://www.qos.ch/log4j/





