Ceki Gulcu wrote:
As there are certainly pros and cons for each serialization approach,
instead of the debate eventually degenerating into a religious
argument, we are likely to be better served by basing comparisons on
the same logging event data collection, which in the compression world
is called a "corpus". At present, we do not have a logging event
corpus. Just as importantly, logback currently lacks a format for
storing said corpus. Given that this corpus will serve as a
yardstick for a long time, and performance is not an issue, a human
readable text format such as XML seems like a reasonable choice.
Is anyone interested in providing a corpus?
....
Having said that, defining a corpus seems to me to be the most
pressing issue at this time.
Once we settle on a corpus, we can more objectively debate the merits
of such and such serialization strategy.
If I understand you correctly, you are basically saying there is a
need for a standardized set of event data.
After thinking this over, it might be better to have code generate the
events instead of storing them statically on disk. This avoids
setting any API in stone except the slf4j interface, which by now
should be settled.
(What if the internal representation of a stack trace changes, or
something similar? That just happened, and it might happen again :) )
A test suite might then build all the events for a given test in
memory and then do the actual testing (as would have been done anyway
had the events been read from XML).
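Building the events in memory might look roughly like this. A minimal sketch: the LogEvent record, CorpusBuilder class, field names, and sizes are all invented here for illustration, not the actual slf4j/logback types. The point is that a seeded random generator gives every test run the exact same corpus without touching disk.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical minimal event holder -- NOT the slf4j/logback API,
// just a stand-in to illustrate generating a corpus in memory.
record LogEvent(long timestamp, String level, String logger,
                String message, Map<String, String> mdc) {}

public class CorpusBuilder {
    // Build n events deterministically from a seed, so every test run
    // sees the exact same corpus without reading anything from disk.
    static List<LogEvent> build(int n, long seed) {
        Random rnd = new Random(seed);
        List<LogEvent> events = new ArrayList<>(n);
        String[] levels = {"TRACE", "DEBUG", "INFO", "WARN", "ERROR"};
        for (int i = 0; i < n; i++) {
            events.add(new LogEvent(
                1_000_000L + i,                        // increasing timestamps
                levels[rnd.nextInt(levels.length)],
                "com.example.Logger" + rnd.nextInt(10),
                "message " + rnd.nextInt(1000),
                Map.of("requestId", Integer.toString(rnd.nextInt(100)))));
        }
        return events;
    }

    public static void main(String[] args) {
        List<LogEvent> corpus = build(1_000, 42L);
        System.out.println(corpus.size());
    }
}
```

Since java.util.Random is fully specified, the same seed yields the same event stream on any JVM, which keeps the corpus reproducible across machines.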
That said, what would reasonable test suites look like?
* A million events with almost no text?
* A million events with very large texts (using the full Unicode set)?
* Lots of exceptions?
* Large MDCs?
What do those with experience in large data sets say?
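For illustration, the bullet-point variants above could be produced by small helpers like these. All names and sizes are invented; the real profiles would be whatever the list agrees on.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of generators for the proposed suite variants:
// large Unicode message texts, large MDC maps, and deep exception chains.
public class CorpusProfiles {
    // A message body cycling through a wide slice of the BMP,
    // deliberately staying below the surrogate range (0xD800).
    static String largeUnicodeText(int codePoints) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < codePoints; i++) {
            sb.appendCodePoint(0x20 + (i % 0xD000));
        }
        return sb.toString();
    }

    // An MDC with many entries, to stress serialization of the map.
    static Map<String, String> largeMdc(int entries) {
        Map<String, String> mdc = new HashMap<>();
        for (int i = 0; i < entries; i++) {
            mdc.put("key-" + i, "value-" + i);
        }
        return mdc;
    }

    // A nested cause chain of the given depth, for the "lots of
    // exceptions" case.
    static Throwable deepException(int depth) {
        Throwable t = new RuntimeException("level 0");
        for (int i = 1; i < depth; i++) {
            t = new RuntimeException("level " + i, t);
        }
        return t;
    }
}
```

Each helper takes a size parameter, so the "almost no text" and "very large texts" suites differ only in the numbers passed in, not in code paths.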
--
Thorbjørn Ravn Andersen "...plus... Tubular Bells!"
_______________________________________________
logback-dev mailing list
[email protected]
http://qos.ch/mailman/listinfo/logback-dev