I found https://github.com/gilt/logback-flume-appender but haven’t looked at the 
code. Maybe it will meet your needs.

Ralph

> On Jun 2, 2017, at 11:02 AM, Ralph Goers <[email protected]> wrote:
> 
> I created the FlumeAppender for Log4j with just this purpose in mind. The 
> FlumeAppender writes the log event to local disk and then returns control 
> to the application. At that point eventual delivery is guaranteed. A 
> background thread reads the events that have been written to disk and 
> forwards them on to another Apache Flume node. When that node confirms it 
> has accepted them, the events are deleted from local disk. The FlumeAppender 
> has the ability to fail over to alternate Flume nodes. If none are 
> available, the events simply stay on disk until the disk is full.
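> 
> For illustration, a minimal configuration sketch of that persistent mode
> (untested, adapted from the Log4j FlumeAppender documentation; the
> hostnames are hypothetical):
> 
>   <Flume name="flume" type="persistent" dataDir="./flumeData">
>     <Agent host="flume1.example.com" port="8800"/>
>     <Agent host="flume2.example.com" port="8800"/>
>   </Flume>
> 
> Events are persisted under dataDir until one of the listed agents
> acknowledges them, with the second Agent acting as the failover.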
> 
> Ralph
> 
>> On Jun 2, 2017, at 10:01 AM, Oleksandr Gavenko <[email protected]> wrote:
>> 
>> I searched the list and found only one relevant message:
>> 
>> http://markmail.org/message/tdkr745eqf3vxqme
>> 
>> From Michael Reinhold, Mar 19, 2014 5:45:23 pm:
>>> The method described for a disk-cached AsyncAppender of course
>>> makes perfect sense. I think it would make a good alternative to
>>> the current AsyncAppender for some scenarios and would be a solid
>>> addition to the current Logback appenders.
>> 
>> I am going to implement log collection into ElasticSearch, and I would
>> like to achieve a high level of delivery reliability: not in the sense
>> of latency, but in the sense of completeness.
>> 
>> Business processes operate on the collected data, so nothing should be
>> lost if the remote log collector is down or my application
>> crashes or restarts.
>> 
>> Local disk is the safest place to write, thanks to its low latency,
>> persistence, and availability (except in rare cases of FS
>> corruption or disk-full errors).
>> 
>> Network services definitely have downtime.
>> 
>> =========================
>> 
>> Is there a persistence wrapper for log events that guarantees delivery
>> at a later time when the remote service is not available? Something like
>> AsyncAppender wrapping another Appender?
>> 
>> The docs at https://logback.qos.ch/manual/appenders.html don't show any
>> way to signal an error:
>> 
>> public interface Appender<E> extends LifeCycle, ContextAware, FilterAttachable {
>>     public String getName();
>>     public void setName(String name);
>>     void doAppend(E event);
>> }
>> 
>> but in the sources I see:
>> 
>>   void doAppend(E event) throws LogbackException;
>> 
>> Potentially it is possible to signal that the service is unavailable via
>> a LogbackException subclass, and to start collecting data into a memory
>> buffer up to some limit, then fall back to disk media.
>> 
>> The appender could then retry sending events to the server on some
>> defined schedule until the RecipientDownException is gone.
>> 
>> On application startup, the PersistentAppender should check for serialized
>> unpublished events and try to send them again. A rough sketch follows.
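>>
>> A minimal sketch of this idea (my code, untested; RecipientDownException
>> is the hypothetical subclass mentioned above, and the disk fallback is
>> stubbed out):
>>
>> import java.util.ArrayDeque;
>> import java.util.Queue;
>>
>> import ch.qos.logback.classic.spi.ILoggingEvent;
>> import ch.qos.logback.core.Appender;
>> import ch.qos.logback.core.LogbackException;
>> import ch.qos.logback.core.UnsynchronizedAppenderBase;
>>
>> // Hypothetical LogbackException subclass thrown by the delegate
>> // while the remote recipient is down.
>> class RecipientDownException extends LogbackException {
>>     RecipientDownException(String msg) { super(msg); }
>> }
>>
>> public class PersistentAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {
>>     private Appender<ILoggingEvent> delegate; // e.g. the ElasticSearch appender
>>     private final Queue<ILoggingEvent> buffer = new ArrayDeque<>();
>>     private int memoryLimit = 10_000;
>>
>>     public void setDelegate(Appender<ILoggingEvent> delegate) {
>>         this.delegate = delegate;
>>     }
>>
>>     @Override
>>     protected void append(ILoggingEvent event) {
>>         try {
>>             flushBuffer();           // retry events queued by earlier failures
>>             delegate.doAppend(event);
>>         } catch (RecipientDownException e) {
>>             if (buffer.size() < memoryLimit) {
>>                 buffer.add(event);   // collect into memory up to some limit
>>             } else {
>>                 spillToDisk(event);  // then fall back to disk media
>>             }
>>         }
>>     }
>>
>>     private void flushBuffer() {
>>         ILoggingEvent queued;
>>         while ((queued = buffer.peek()) != null) {
>>             delegate.doAppend(queued); // throws again if recipient is still down
>>             buffer.poll();
>>         }
>>     }
>>
>>     private void spillToDisk(ILoggingEvent event) {
>>         // Serialize the fields listed below to a local file; on startup,
>>         // read them back and resend (omitted here).
>>     }
>> }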
>> 
>> As I see in
>> https://logback.qos.ch/apidocs/ch/qos/logback/classic/spi/ILoggingEvent.html
>> the persister should store:
>> 
>> * timestamp
>> * level
>> * msg
>> * marker (not sure whether the full hierarchy is needed, as a marker is
>> essentially just a name)
>> * MDC
>> * StackTraceElement[] (this information can be useful but requires a lot
>> of CPU)
>> 
>> Some get* methods are strange, like getArgumentArray(); I am not sure that
>> information is useful once the formatted message is stored.
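>>
>> Pulling just that subset out of an event would look roughly like this
>> (a sketch; every accessor below is from the linked ILoggingEvent Javadoc):
>>
>> import java.util.Map;
>>
>> import org.slf4j.Marker;
>>
>> import ch.qos.logback.classic.Level;
>> import ch.qos.logback.classic.spi.ILoggingEvent;
>>
>> class EventSnapshot {
>>     static void capture(ILoggingEvent event) {
>>         long timestamp = event.getTimeStamp();
>>         Level level = event.getLevel();
>>         String msg = event.getFormattedMessage();  // arguments already applied
>>         Marker marker = event.getMarker();
>>         Map<String, String> mdc = event.getMDCPropertyMap();
>>         StackTraceElement[] callers = event.getCallerData(); // costly: walks the stack
>>     }
>> }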
>> 
>> ======================
>> 
>> An alternative approach is to write the log to a file and parse it later,
>> as is done with Filebeat (https://www.elastic.co/products/beats/filebeat),
>> which complements ElasticSearch.
>> 
>> The problem with such a solution is the necessity to invent or follow a
>> file format; formatting a log with MDC and Markers can be tricky.
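>>
>> One way around that (my suggestion, not something from this thread) is an
>> existing JSON encoder such as logstash-logback-encoder, which writes one
>> JSON object per line with MDC (and, as I understand it, markers) included,
>> so Filebeat can ship the file without a hand-rolled format:
>>
>> <appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
>>   <file>app.json.log</file>
>>   <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
>> </appender>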
>> 
>> ======================
>> 
>> The direct ElasticSearch writer at
>> https://github.com/internetitem/logback-elasticsearch-appender buffers
>> messages only in memory when ElasticSearch isn't available, so if the
>> application is restarted everything is lost.
> 
