It would be weird IMO to provide a PatternConverter for an NDC stack that is 
supported by SLF4J but not Log4j.

Unfortunately, I think the only sane thing to do is option 1.

Ralph

> On Aug 22, 2022, at 8:45 AM, Piotr P. Karwasz <piotr.karw...@gmail.com> wrote:
> 
> Hi all,
> 
> On Sat, 16 Apr 2022 at 00:17, Carter Kozak <cko...@ckozak.net> wrote:
>> I can understand how the stack-based MDC might be convenient and useful, but 
>> I don't think it fits my use-cases. That said, I wonder if the API could be 
>> improved in such a way that it could leverage the application stack instead 
>> of maintaining its own -- this is an issue that I've encountered in tracing 
>> implementations as well, where asymmetric interactions cause the 
>> application stack and internal stack to get out of sync. Perhaps using 
>> something like putCloseable[1] would allow the data needed to reset to be 
>> stored in the closeable without maintaining a standalone stack (at the cost 
>> of the ability to support getCopyOfDequeByKey[2]).
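A minimal sketch of the putCloseable pattern Carter refers to (the key and class names below are made up for illustration): the try-with-resources block ties the MDC entry's lifetime to the application call stack, so there is no separate NDC stack to keep in sync.

    import org.slf4j.MDC;

    // Illustrative only: "orderId" is an arbitrary key, OrderHandler a made-up class.
    class OrderHandler {
        void processOrder(String orderId) {
            // The call stack scopes the MDC entry; no standalone NDC stack to maintain.
            try (MDC.MDCCloseable ignored = MDC.putCloseable("orderId", orderId)) {
                // work logged here sees orderId in the MDC
            } // the entry is removed automatically when the block exits, even on exceptions
        }
    }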
> 
> Since Ceki announced the release of SLF4J 2.0.0, this topic is back on the table. We
> need to decide whether:
> 
> 1. we extend the Log4j2 API to support the "enhanced" NDC introduced in SLF4J 2,
> 2. we use the default implementation provided by Ceki,
> 3. we hack around it (e.g. by encoding the list of values into a JSON-like
> structure), as in https://github.com/apache/logging-log4j2/pull/820.
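To make option 3 concrete, here is a rough illustration of the "JSON-like" idea (this is not the code from PR 820, and the naive string handling would need real escaping and pop support):

    import org.apache.logging.log4j.ThreadContext;

    // Illustrative only: keeps a pushed list of values as a JSON-like array string
    // under a single key in the ordinary ThreadContextMap.
    final class JsonLikeNdc {
        static void push(final String key, final String value) {
            final String current = ThreadContext.get(key);
            ThreadContext.put(key, current == null || current.isEmpty()
                    ? "[\"" + value + "\"]"
                    : current.substring(0, current.length() - 1) + ",\"" + value + "\"]");
        }
    }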
> 
> I would use Ceki's implementation and provide a `PatternConverter` for
> those hordes of users that will use it. Alternatively we could inject
> the top values from Ceki's NDC into the usual ThreadContextMap.
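As a rough sketch of the kind of `PatternConverter` this could be (none of this is existing Log4j code; the plugin name, converter key and option handling are assumptions), it might read SLF4J 2.0's per-key deque at format time:

    import java.util.Deque;
    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.core.config.plugins.Plugin;
    import org.apache.logging.log4j.core.pattern.ConverterKeys;
    import org.apache.logging.log4j.core.pattern.LogEventPatternConverter;
    import org.apache.logging.log4j.core.pattern.PatternConverter;
    import org.slf4j.MDC;

    // Illustrative only: formats the SLF4J 2.0 deque for one key, e.g. %slf4jDeque{requestId}.
    @Plugin(name = "Slf4jDeque", category = PatternConverter.CATEGORY)
    @ConverterKeys({ "slf4jDeque" })
    public final class Slf4jDequePatternConverter extends LogEventPatternConverter {

        private final String key;

        private Slf4jDequePatternConverter(final String key) {
            super("Slf4jDeque", "slf4jDeque");
            this.key = key;
        }

        // Standard converter factory: the first option is the deque key.
        public static Slf4jDequePatternConverter newInstance(final String[] options) {
            return new Slf4jDequePatternConverter(
                    options != null && options.length > 0 ? options[0] : "");
        }

        @Override
        public void format(final LogEvent event, final StringBuilder toAppendTo) {
            // Note: this reads the calling thread's MDC at format time, so it only
            // behaves as expected with synchronous logging.
            final Deque<String> deque = MDC.getMDCAdapter().getCopyOfDequeByKey(key);
            if (deque != null) {
                toAppendTo.append(String.join(" ", deque));
            }
        }
    }

The alternative of injecting the top values into the usual ThreadContextMap would avoid that synchronous-only caveat, since the context map is captured into each LogEvent.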
> 
> What do you think?
