> On Sep 24, 2020, at 11:48 AM, Volkan Yazıcı <[email protected]> wrote:
> 
> I think the endOfBatch flag still falls short of addressing the main
> issue: the appender is oblivious to the batching. Put another way, it
> cannot assume whether endOfBatch is employed or not. Consider an
> appender A that honors the endOfBatch flag while the preceding filter F
> never takes advantage of it; A will keep piling up events until an OOM
> error. This is my main motivation for introducing batching explicitly
> at the appender interface. For instance,

I’m not following this. A Filter analyzes a single event, much as a firewall 
analyzes a single packet. If it rejects the event, the Appender should discard 
it, so that wouldn’t generate an OOM. OTOH, batching events before they hit 
Appenders could waste memory, but that would primarily be because you are 
trying to create a batch of events for ALL Appenders to process: one Appender 
might reject some of the events and a different Appender might reject a 
different set.



> 
>    interface Appender {
>        // for backward-compatibility
>        default void append(LogEvent logEvent) {
>            append(Batch.of(logEvent));
>        }
>        void append(Batch<LogEvent> batch);
>    }
> 
>    interface Batch<E> {
>        void forEach(Consumer<E> consumer);
>    }
> 
> This way we can introduce reusable (and hence, mutable) Batch<E>
> implementations, while the appender is free to perform its own batching
> magic tuned for the underlying sink, e.g., JDBC connection, network socket,
> etc.
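For concreteness, here is a minimal sketch of what such a reusable, mutable batch and the backward-compatible single-event path could look like. This is purely illustrative, not actual Log4j API: `LogEvent` is stood in for by `String`, and all names (`ReusableBatch`, `BatchSketch`, etc.) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

interface Batch<E> {
    void forEach(Consumer<E> consumer);

    // Wrap a single event in a one-element batch.
    static <E> Batch<E> of(E single) {
        return consumer -> consumer.accept(single);
    }
}

interface Appender {
    // Backward-compatibility shim: route single events through the batch path.
    default void append(String logEvent) {
        append(Batch.of(logEvent));
    }
    void append(Batch<String> batch);
}

// Reusable, mutable batch: clear and refill it to avoid per-call allocation.
class ReusableBatch<E> implements Batch<E> {
    private final List<E> items = new ArrayList<>();
    void add(E item) { items.add(item); }
    void clear() { items.clear(); }
    @Override public void forEach(Consumer<E> consumer) { items.forEach(consumer); }
}

public class BatchSketch {
    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        Appender appender = batch -> batch.forEach(seen::add);
        appender.append("single");              // legacy one-event path
        ReusableBatch<String> batch = new ReusableBatch<>();
        batch.add("a");
        batch.add("b");
        appender.append(batch);                 // explicit batch path
        System.out.println(seen);               // prints [single, a, b]
    }
}
```

The point of the mutable implementation is that the caller can keep one `ReusableBatch` instance per thread and `clear()` it between deliveries, while the appender decides for itself how to flush to its sink.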
> 
> Regarding your remark about "being sympathetic to the ring buffer design",
> I did not fully understand this. Would you mind elaborating on this a
> little bit more and maybe even sharing a comparison between this and the
> aforementioned explicit batching at the interface level, please?
> 
> In conclusion, IMHO, making batching/aggregation explicit at the interface
> level will help us to avoid quite some code repetition and make the life
> easier for future appenders.


Synchronous loggers deliver events one by one. It makes no sense to “batch” 
them because they would then become asynchronous, which is what the async 
Loggers already do. That is why Ring Buffers were mentioned. But all the ring 
buffer really does is act as a FIFO queue to process the events on a different 
thread. It still follows (more or less) the same flow that synchronous events 
do. Processing events in batches would make things a bit more complicated, as 
each event needs to be evaluated by Filters on the LoggerConfig and 
Appender-Ref before being passed to Appenders. Does that mean Filters now have 
to accept batches? If so, how do they return the Filter.Result for each event?
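One conceivable answer to that question, sketched purely as a thought experiment (none of these names are actual Log4j API; `BatchFilter`, `Result`, and the sample events are all hypothetical), is to have a batch-aware filter return one result per event:

```java
import java.util.ArrayList;
import java.util.List;

// Per-event verdicts, standing in for Log4j's Filter.Result.
enum Result { ACCEPT, DENY }

// A hypothetical batch-aware filter: one Result per event, in batch order.
interface BatchFilter<E> {
    List<Result> filter(List<E> batch);
}

public class FilterSketch {
    public static void main(String[] args) {
        // Example filter that denies any event containing "debug".
        BatchFilter<String> filter = batch -> {
            List<Result> results = new ArrayList<>();
            for (String event : batch) {
                results.add(event.contains("debug") ? Result.DENY : Result.ACCEPT);
            }
            return results;
        };
        List<String> events = List.of("error: disk full", "debug: tick", "warn: slow");
        List<Result> results = filter.filter(events);
        System.out.println(results);   // prints [ACCEPT, DENY, ACCEPT]
    }
}
```

This does illustrate the added complexity: the caller must correlate the result list back to the events, and each Appender-Ref would need to derive its own per-event view of the batch.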

Ralph
