Date:        Sat, 14 Apr 2018 05:24:40 +1000
    From:        matthew green <>
    Message-ID:  <>

  |  i'd recommend going for at  least 100ns intervals, if not 10ns.

I have implemented it so that the precision can be controlled
by sysctl - without actually adding the sysctl magic to make that
happen (someone more familiar with that should be able to add it
easily; it is just an integer variable to be accessed/set).

This way, sites that need lots of precision in the timing of log
messages can get it, and those that prefer smaller log lines
can choose that instead (once someone implements the sysctl).

  | we could also make this only relevant on ports with cpus that
  | have faster cpus, leaving older systems with smaller logs.

The default (which I have currently set at units of 100nsec - ie: 7 digits)
could be made to depend upon the port, or even on the detected
clock rate of the CPU for ports that support a significantly wide range.

  | (the seconds part shouldn't change, i think, as there is no
  | reason to believe any port has longer uptime than another.)

The seconds field simply grows as needed and can handle systems
that boot now and are still running when the sun goes supernova
(I doubt any will still be running after that).

I have made dmesg dynamically adapt to what the kernel happens to
send (on a message by message basis) but still (as it does today) print
all timestamps (numeric format timestamps and deltas) in units of
microseconds.   Once this is checked in, if someone wants to add an
option to dmesg to control the precision it prints, go for it.

This is all being tested now - but as I don't have the sysctl code implemented,
and I am way too lazy to learn how to make gdb patch a running system,
testing different precisions for the log messages requires rebuilding and
rebooting, so it is kind of slow.

