Hey Robert,

I see that you're working on some performance graphs for the Log4cxx website
<https://github.com/apache/logging-log4cxx/commit/39eba952aeca92a3e9d69790fdb789cc864ce133>.
Judging from our experience with Log4j, I’d like to share a word of
caution: this can easily become a very slippery slope.

There are simply too many configuration variables involved (OS,
architecture, version, configuration, etc.), and the codebase itself is
constantly evolving. To give a concrete example: Log4j once had a
performance page with benchmark results collected by Remko, but no one was
ever able to reproduce the same numbers again. Over time, that page became
not only misleading but also a maintenance liability.

If I may, I'd suggest one of the following approaches, in order of personal
preference:

   1. Don't do it. Such a page is outdated before it is even published, and
   keeping it updated is a liability.
   2. Share your configuration in detail (OS, architecture, version, and
   configuration) and provide a script that lets users generate these reports
   for their particular setup (see the sketch after this list).
   3. Implement a continuous performance test bed, where the script from
   option #2 is run _automatically_ (through CI?) for every Log4cxx release.
   (I wanted to implement this for Log4j, but could not secure funding for
   it. Lucene has one: <https://benchmarks.mikemccandless.com/>)
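To make option #2 concrete, here is a rough sketch of what such a script
could look like. The `log4cxx-benchmark` binary name and its JSON output are
placeholders for whatever the actual benchmark produces; the point is simply
that every report carries the environment it was generated on:

    #!/usr/bin/env python3
    """Run the benchmark and bundle the results with the environment they
    were produced on. Binary name and output format are placeholders."""
    import json
    import os
    import platform
    import subprocess
    from datetime import datetime, timezone

    # Hypothetical benchmark binary; replace with the real one.
    BENCH_CMD = ["./log4cxx-benchmark", "--output=json"]

    def environment() -> dict:
        """Capture the configuration variables that make numbers incomparable."""
        version = subprocess.run(
            ["git", "describe", "--tags", "--always"],
            capture_output=True, text=True, check=False,
        ).stdout.strip()
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "os": platform.platform(),
            "architecture": platform.machine(),
            "compiler": os.environ.get("CXX", "unknown"),  # record the C++ compiler if set
            "log4cxx_version": version or "unknown",
        }

    def main() -> None:
        bench = subprocess.run(BENCH_CMD, capture_output=True, text=True, check=True)
        report = {"environment": environment(), "results": json.loads(bench.stdout)}
        with open("benchmark-report.json", "w") as fh:
            json.dump(report, fh, indent=2)
        print("wrote benchmark-report.json")

    if __name__ == "__main__":
        main()

Run from CI on every release tag, the same script essentially gives you
option #3 for free.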

Kind regards.
