Thank you for the pointer, Ralph. Glad to see there's good progress on this.

In the meantime I guess I'll have to stick with my current approach, except
that I'll change it to (ab)use the NanoClock/nanoTime field instead.
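For anyone finding this thread later, the gist of the packing trick is below. All names here are mine (in Log4j the value would come from a custom org.apache.logging.log4j.core.util.NanoClock rather than a static method); this is just a standalone sketch of the arithmetic:

```java
import java.time.Instant;

// Sketch of packing "nanoseconds since the epoch" into a single long,
// the same trick described above. Class and method names are hypothetical.
public class NanoEpochClock {

    /** Nanoseconds since the epoch in one long (overflows in the year 2262). */
    public static long nanosSinceEpoch() {
        Instant now = Instant.now();
        return now.getEpochSecond() * 1_000_000_000L + now.getNano();
    }

    public static void main(String[] args) {
        long nanos = nanosSinceEpoch();
        // Dividing by 1_000_000 recovers ordinary epoch milliseconds, so the
        // packed value stays roughly usable where plain millis are expected.
        long millis = nanos / 1_000_000L;
        System.out.println(Math.abs(millis - System.currentTimeMillis()) < 1_000);
    }
}
```

The division-by-1_000_000 round trip is what keeps millisecond-based features from being wildly off, though anything comparing the raw long against real millis (e.g. rolling triggers) would still misbehave.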

Cheers,
Al.

Sent: Saturday, February 10, 2018 at 2:16 PM
From: "Ralph Goers" <ralph.go...@dslextreme.com>
To: "Log4J Users List" <log4j-user@logging.apache.org>
Subject: Re: Best approach for sub-millisecond timestamps?

> See https://issues.apache.org/jira/browse/LOG4J2-1883
> 
> Ralph
> 
> > On Feb 10, 2018, at 10:42 AM, alfred.eckm...@gmx.com wrote:
> >
> > Hello list,
> >
> > I need to get more granular timestamps in my logs.
> >
> > The question is: what is the best way to get Log4j to handle
> > sub-millisecond timestamps?
> >
> > A top search result provides this deceptively-simple approach:
> > http://blog.caplin.com/2017/10/13/microsecond-time-stamp-logging-for-mifid-ii/
> >
> > However, that will produce wrong results: the timestamp is generated
> > inside the Converter, which runs in the AsyncLogger thread, so it
> > effectively records when the event was processed rather than when it
> > actually occurred.
> >
> > The hack I came up with is to implement a custom Clock whose
> > currentTimeMillis() returns nanoseconds since the epoch, and a
> > corresponding Converter plugin that handles nanoseconds in the
> > millisecond field. (Hopefully it won't still be around by the year 2262. :)
> > It works pretty well for what I'm doing so long as both the custom
> > clock and a correct layout are used, but I imagine it could cause
> > unexpected consequences if some time-related functionality is used
> > (e.g. file rolling).
> >
> > Any suggestions for a cleaner but low-overhead approach?
> >
> > Cheers, Al.
> >
>

---------------------------------------------------------------------
To unsubscribe, e-mail: log4j-user-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-user-h...@logging.apache.org
