I would suggest you look at http://flume.apache.org/FlumeUserGuide.html and 
specifically http://flume.apache.org/FlumeUserGuide.html#file-channel. Flume has 
an embedded agent (http://flume.apache.org/FlumeDeveloperGuide.html#embedded-agent) 
that you can use, and it handles all of this for you.
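
To give a feel for the embedded agent, here is a rough sketch modeled on the 
developer guide linked above. The hostnames, ports, and directories are 
placeholders, and the channel.* file-channel keys are my assumption about how 
the channel options pass through:

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.agent.embedded.EmbeddedAgent;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class EmbeddedAgentSketch {
    public static void main(String[] args) throws EventDeliveryException {
        Map<String, String> props = new HashMap<>();
        props.put("channel.type", "file"); // durable file channel instead of "memory"
        props.put("channel.checkpointDir", "/var/lib/flume/checkpoint");
        props.put("channel.dataDirs", "/var/lib/flume/data");
        props.put("sinks", "sink1 sink2");
        props.put("sink1.type", "avro");
        props.put("sink1.hostname", "collector1.example.com");
        props.put("sink1.port", "41414");
        props.put("sink2.type", "avro");
        props.put("sink2.hostname", "collector2.example.com");
        props.put("sink2.port", "41414");
        props.put("processor.type", "failover"); // fail over between the two collectors

        EmbeddedAgent agent = new EmbeddedAgent("myagent");
        agent.configure(props);
        agent.start();

        // put() returns once the event is committed to the (disk-backed) channel.
        Event event = EventBuilder.withBody("hello flume", StandardCharsets.UTF_8);
        agent.put(event);

        agent.stop();
    }
}

Because put() only returns once the event is committed to the channel, a file 
channel gives you the same write-locally-first guarantee discussed below.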
The Log4j FlumeAppender does this, but it also supports a lighter-weight mode 
that you could copy as well. It writes the event to BerkeleyDB before passing 
the event to another thread, which uses the Flume RPC client to send the event 
on. You could copy just the BerkeleyDB logic and tie it to another delivery 
mechanism if you want. The code for that is at 
https://github.com/apache/logging-log4j2/blob/master/log4j-flume-ng/src/main/java/org/apache/logging/log4j/flume/appender/FlumePersistentManager.java
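
If you only want that persist-then-forward pattern, here is a minimal sketch of 
the idea. It is not the FlumePersistentManager code itself; the class name and 
the sendDownstream stub are my own, but the com.sleepycat.je calls are the 
standard Berkeley DB Java Edition API:

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.Transaction;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: persist events locally, forward and delete in the background. */
public class PersistentForwarder implements Runnable {

    private final Environment env;
    private final Database db;
    // A real implementation would seed this from the highest key already on disk.
    private final AtomicLong seq = new AtomicLong();

    public PersistentForwarder(File dir) {
        dir.mkdirs();
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(true);
        env = new Environment(dir, envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        db = env.openDatabase(null, "events", dbConfig);
    }

    /** Application thread: returns once the event is safely in the local database. */
    public void append(byte[] serializedEvent) {
        // Zero-padded keys keep BerkeleyDB's lexicographic order equal to insert order.
        byte[] key = String.format("%019d", seq.incrementAndGet())
                .getBytes(StandardCharsets.UTF_8);
        db.put(null, new DatabaseEntry(key), new DatabaseEntry(serializedEvent));
    }

    /** Background thread: drain the database, deleting each event only after delivery. */
    @Override
    public void run() {
        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry();
        Transaction txn = env.beginTransaction(null, null);
        Cursor cursor = db.openCursor(txn, null);
        try {
            while (cursor.getNext(key, data, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                sendDownstream(data.getData()); // stand-in for the Flume RPC client call
                cursor.delete();                // the event survives on disk until here
            }
        } finally {
            cursor.close();
            txn.commit();
        }
    }

    // Hypothetical delivery stub; swap in RpcClient.append(Event) or any other transport.
    private void sendDownstream(byte[] event) { }
}

The real FlumePersistentManager linked above handles far more (batching, error 
handling, recovery), so treat this only as the shape of the idea.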
The main portion of the appender is at 
https://github.com/apache/logging-log4j2/blob/master/log4j-flume-ng/src/main/java/org/apache/logging/log4j/flume/appender/FlumeAppender.java
so you can see how the two tie together. The FlumeAppender supports any valid 
Layout, so the answer to how the stack trace is passed is “it depends” on the 
Layout you configure.
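
Since your end destination is JSON, note that a JSON encoder already escapes 
control characters and newlines, so MDC values and stack traces survive as 
ordinary string fields. A minimal sketch with Jackson; the field names here are 
my own choice, not any Log4j or Flume wire format:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.LinkedHashMap;
import java.util.Map;

public final class JsonEventWriter {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    /** Field names are illustrative only; pick whatever your destination expects. */
    public static String toJson(String level, String message,
                                Map<String, String> mdc,
                                Throwable thrown) throws JsonProcessingException {
        Map<String, Object> event = new LinkedHashMap<>();
        event.put("level", level);
        event.put("message", message); // Jackson escapes \n, \t, quotes, control chars
        event.put("mdc", mdc);
        if (thrown != null) {
            StringWriter sw = new StringWriter();
            thrown.printStackTrace(new PrintWriter(sw));
            event.put("stacktrace", sw.toString()); // multi-line text becomes one escaped string
        }
        return MAPPER.writeValueAsString(event);
    }
}

Each event then serializes to a single line, so newline-delimited JSON is a 
convenient record separator on disk: inside the JSON everything is escaped, so 
a raw newline can only mean "next record".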

Ralph

> On Jun 2, 2017, at 11:44 AM, Oleksandr Gavenko <[email protected]> wrote:
> 
> On Fri, Jun 2, 2017 at 9:02 PM, Ralph Goers <[email protected]> wrote:
>> I created the FlumeAppender for Log4j with just this purpose in mind. The 
>> FlumeAppender will write the log event to local disk and then returns 
>> control to the application.
>> At this point eventual delivery is guaranteed.
> Can you share how you serialize log events?
> https://logback.qos.ch/apidocs/ch/qos/logback/classic/spi/ILoggingEvent.html
> interface has simple fields along with complex, like:
> 
> * MDC (which I definitely will use)
> * StackTraceElement[]
> 
> Is that some CSV format? How do you handle control characters and newlines?
> 
> My end destination works with JSON, so I could settle on that serialization.
> 
>> A background thread reads the events that have been written to disk and 
>> forwards them on to another Apache Flume node.
>> When that node confirms it has accepted them the event is deleted from local 
>> disk.
>> The FlumeAppender has the ability to fail over to alternate Flume nodes.
>> If none are available the events will simply stay on disk until it is full.
> 
> Originally I thought about a more complex solution that sends events
> asynchronously over the network unless the remote host is down or the
> event buffer fills up under load.
> 
> In the latter case, it would write to disk and later try to deliver the
> saved data.
> 
> On application shutdown, saving to disk can be much faster than trying
> to deliver logging events to an external server.
> 
> What I wonder is how you manage the saved events. In a single file or
> several? How do you discover files for processing? How do you split
> logging events? How do you keep pointers to not-yet-processed events?

_______________________________________________
logback-user mailing list
[email protected]
http://mailman.qos.ch/mailman/listinfo/logback-user
