I sent a reply mentioning the same Jira issue yesterday but for some reason it doesn’t appear to have made it to the mailing list.
Ralph

> On Jan 28, 2019, at 2:40 PM, Remko Popma <[email protected]> wrote:
>
> This reminds me of a JIRA ticket that Ralph raised.
> https://issues.apache.org/jira/browse/LOG4J2-1137
>
> That would be very nice to have!
>
> Remko.
>
> (Shameless plug) Every java main() method deserves http://picocli.info
>
>> On Jan 29, 2019, at 6:23, Matt Sicker <[email protected]> wrote:
>>
>> I like the idea in general, though I wonder if this is already doable
>> with an existing plugin?
>>
>>> On Mon, 28 Jan 2019 at 04:43, 于得水 <[email protected]> wrote:
>>>
>>> Hello, Log4j developers,
>>> We hit a problem while debugging an online production system. Our
>>> production system manages and distributes data across multiple worker
>>> machines. There's a bug that can cause unbalanced data placement, or even
>>> data unavailability, under heavy workload. In this scenario, DEBUG-level
>>> logs would help us a lot in diagnosing the issue. However, we cannot always
>>> set the logger's level to DEBUG, because that would store too many logs on
>>> disk and slow down the production service, especially since the bug only
>>> occurs occasionally.
>>>
>>> I wonder if we could add a new type of memory appender to Log4j. This
>>> appender would first store log entries in an in-memory queue, with a
>>> configurable maximum queue size and a policy (like FIFO) to roll out stale
>>> log entries once the queue is full. If a problem occurs, such as a type of
>>> exception we're interested in being thrown, the user could trigger a dump
>>> of this appender to flush the in-memory logs to a file for later diagnosis.
>>> That way it records only the 'useful' DEBUG logs and their context on disk,
>>> avoiding wasted disk space and a slowdown of the production service.
>>>
>>> If you think it's worth doing, I can create a JIRA and submit my
>>> prototype pull request for review.
>>>
>>> Thanks,
>>> Deshui
>>
>>
>>
>> --
>> Matt Sicker <[email protected]>
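For readers following the proposal: the core mechanism Deshui describes (a bounded in-memory queue with FIFO eviction of stale entries, plus an on-demand dump when a problem is detected) can be sketched independently of Log4j internals. The class and method names below are hypothetical illustrations, not part of any existing Log4j API; a real appender would extend Log4j's plugin machinery instead.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/**
 * Hypothetical sketch of the proposed "memory appender" idea:
 * buffer log lines in memory up to a fixed capacity, evicting the
 * oldest entry (FIFO) when full, and dump the buffer on demand,
 * e.g. when an exception of interest is thrown.
 */
class MemoryLogBuffer {
    private final int capacity;
    private final Deque<String> buffer = new ArrayDeque<>();

    MemoryLogBuffer(int capacity) {
        this.capacity = capacity;
    }

    /** Append a log line; roll out the stale head entry when full. */
    synchronized void append(String line) {
        if (buffer.size() == capacity) {
            buffer.removeFirst(); // FIFO eviction of the oldest entry
        }
        buffer.addLast(line);
    }

    /**
     * Drain the buffered entries for diagnosis. In a real appender this
     * is where the entries would be flushed to a file on disk.
     */
    synchronized List<String> dump() {
        List<String> snapshot = new ArrayList<>(buffer);
        buffer.clear();
        return snapshot;
    }
}
```

With a capacity of, say, 10,000 entries, steady-state DEBUG logging costs only memory, and disk is touched solely when `dump()` is triggered, which matches the proposal's goal of keeping only the "useful" DEBUG context.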
