Neerja Khattar created FLUME-2669:
-------------------------------------

             Summary: ability to batch a number of logging events together in order to send them over the wire in one go
                 Key: FLUME-2669
                 URL: https://issues.apache.org/jira/browse/FLUME-2669
             Project: Flume
          Issue Type: New Feature
            Reporter: Neerja Khattar


Better support for Flume in the log4j 1.x appender would be beneficial. 
The appender already supports load balancing with back-off and transparent 
failover. What it lacks is the ability to batch a number of logging events 
together so they can be sent over the wire in one go. 
With a blocking queue from Java 5, it should be "fairly easy" to implement the 
following logic (a rough sketch follows the list): 
1) the appender is initialized with a blocking queue in 
AppenderSkeleton.activateOptions() and acts as its producer: every call to 
AppenderSkeleton.append() adds a logging event to the queue and returns. 
2) a consumer thread is also started in AppenderSkeleton.activateOptions() 
and is given the same queue. The thread continuously takes events from the 
queue and stores them in a buffer, blocking when the queue is empty. Once the 
buffer is full, it calls RpcClient.appendBatch() and then clears the buffer. 
3) if an error occurs while sending the batch to a remote Flume agent (and 
the RPC client fails to deliver it after trying all the agents), either an 
exception is thrown (safe mode) or the events are silently dropped (unsafe 
mode). 
4) the size of the in-memory buffer / batch, as well as the capacity of the 
blocking queue, should be configurable for greater flexibility.
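
A rough sketch of how this could look, assuming the Flume SDK's 
RpcClientFactory / RpcClient / EventBuilder API. The class name 
BatchingFlumeAppender and the batchSize, queueCapacity and unsafeMode options 
are illustrative only, not an existing implementation. Since appendBatch() 
runs on a background thread, the sketch reports delivery failures through 
log4j's ErrorHandler rather than throwing to the logging caller:

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class BatchingFlumeAppender extends AppenderSkeleton {

  // Point 4: all of these would be settable from log4j configuration.
  private String hostname = "localhost";
  private int port = 41414;
  private int batchSize = 100;
  private int queueCapacity = 10000;
  private boolean unsafeMode = false; // true = drop events on delivery failure

  private BlockingQueue<LoggingEvent> queue;
  private Thread consumer;
  private volatile boolean running = true;

  @Override
  public void activateOptions() {
    // Point 1: the queue the appender produces into.
    queue = new LinkedBlockingQueue<LoggingEvent>(queueCapacity);
    // Point 2: the consumer thread that drains the queue in batches.
    consumer = new Thread(new Runnable() {
      public void run() {
        RpcClient client = RpcClientFactory.getDefaultInstance(hostname, port);
        List<Event> buffer = new ArrayList<Event>(batchSize);
        try {
          while (running || !queue.isEmpty()) {
            // Blocks while the queue is empty.
            LoggingEvent logEvent = queue.take();
            String body = layout != null
                ? layout.format(logEvent) : logEvent.getRenderedMessage();
            buffer.add(EventBuilder.withBody(body, Charset.forName("UTF-8")));
            if (buffer.size() >= batchSize) {
              try {
                client.appendBatch(buffer); // one RPC call for the whole batch
              } catch (EventDeliveryException e) {
                // Point 3: safe mode surfaces the error, unsafe mode drops it.
                if (!unsafeMode) {
                  errorHandler.error("Failed to deliver batch to Flume", e, 0);
                }
              }
              buffer.clear();
            }
          }
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        } finally {
          client.close();
        }
      }
    }, "flume-batching-appender");
    consumer.setDaemon(true);
    consumer.start();
  }

  @Override
  protected void append(LoggingEvent event) {
    // Point 1: producer side - enqueue and return immediately.
    // offer() avoids blocking the logging thread when the queue is full.
    queue.offer(event);
  }

  @Override
  public void close() {
    running = false;
    consumer.interrupt();
  }

  @Override
  public boolean requiresLayout() {
    return true;
  }

  // setters for hostname, port, batchSize, queueCapacity, unsafeMode omitted
}

A real implementation would also need to flush any partially filled buffer on 
close() (or after a timeout), and could use put() instead of offer() in 
append() if blocking the logging thread is preferable to dropping events when 
the queue is full.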


