[ https://issues.apache.org/jira/browse/CHUKWA-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12756172#action_12756172 ]

Ari Rabkin commented on CHUKWA-389:
-----------------------------------

It's certainly doable.  A few things worth considering:

1) What happens if the application is logging faster than the collector is 
writing?  Where does the appender buffer?  Can the buffer fill up?
2) How does the appender find a collector?  Can it accept a list?  Can it be 
told to reread that list?
3) How do we name the chunks so that different runs of the program produce 
either distinct streams, or else successive portions of the same stream? 
(A rough sketch addressing all three points follows.)
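
Here's a rough sketch of what such an appender could look like. The class name 
DirectCollectorAppender, the Collectors setter, the X-stream-name header, and 
the bare HTTP POST per record are made up for illustration; this is not an 
existing Chukwa API, just one way to answer the three questions: a bounded 
in-memory queue for (1), a comma-separated collector list with rotate-on-failure 
for (2), and a per-run stream name built from pid@host plus a start timestamp 
for (3).

import java.io.IOException;
import java.io.OutputStream;
import java.lang.management.ManagementFactory;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class DirectCollectorAppender extends AppenderSkeleton {

  // (1) Bounded buffer: if the application logs faster than the collector
  // accepts, offer() fails and the event is dropped, instead of blocking
  // the logging thread or growing memory without limit.
  private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(10000);

  // (2) Collector discovery: a comma-separated list set from log4j
  // configuration; rotate to the next entry when a POST fails.
  private final List<String> collectors = new ArrayList<>();
  private int current = 0;

  // (3) Stream naming: pid@host plus a start timestamp, so different runs
  // of the same program produce distinct stream names.
  private String streamName;
  private volatile boolean running = true;

  public void setCollectors(String commaSeparated) {
    for (String u : commaSeparated.split(",")) {
      collectors.add(u.trim());
    }
  }

  @Override
  public void activateOptions() {
    // e.g. "12345@host/ChukwaDirect/1253500000000", distinct per JVM run
    streamName = ManagementFactory.getRuntimeMXBean().getName() + "/"
        + getName() + "/" + System.currentTimeMillis();
    Thread sender = new Thread(() -> {
      while (running || !buffer.isEmpty()) {
        try {
          postChunk(buffer.take());
        } catch (InterruptedException e) {
          return;
        }
      }
    }, "direct-collector-sender");
    sender.setDaemon(true); // do not keep the JVM alive at shutdown
    sender.start();
  }

  @Override
  protected void append(LoggingEvent event) {
    String line = layout != null ? layout.format(event) : event.getRenderedMessage();
    if (!buffer.offer(line)) {
      errorHandler.error("buffer full, dropping log event");
    }
  }

  // POST one formatted record, trying each collector in turn on failure.
  private void postChunk(String line) {
    for (int attempt = 0; attempt < collectors.size(); attempt++) {
      try {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(collectors.get(current)).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("X-stream-name", streamName); // hypothetical header
        try (OutputStream out = conn.getOutputStream()) {
          out.write(line.getBytes("UTF-8"));
        }
        if (conn.getResponseCode() < 400) {
          return; // accepted by this collector
        }
      } catch (IOException e) {
        // fall through and try the next collector in the list
      }
      current = (current + 1) % collectors.size();
    }
  }

  @Override
  public void close() {
    running = false;
  }

  @Override
  public boolean requiresLayout() {
    return true;
  }
}

A real implementation would still have to decide how the collector frames these 
POSTs into chunks, and what (if anything) to do about events dropped when the 
buffer is full.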

> Send chunks directly from a log4j appender to the collector without writing 
> to local drive
> ------------------------------------------------------------------------------------------
>
>                 Key: CHUKWA-389
>                 URL: https://issues.apache.org/jira/browse/CHUKWA-389
>             Project: Hadoop Chukwa
>          Issue Type: New Feature
>          Components: data collection
>            Reporter: Jerome Boulon
>            Assignee: Jerome Boulon
>
> Currently Chukwa requires the data to first be written to the local drive, 
> and an agent to run on every single box, in order to collect logs.
> This is a good solution if you cannot afford to lose any data, but sometimes 
> you don't want to run an agent on every single box.
> In that case, a new log4j appender could be used to send data directly from 
> the application to a collector.
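
For what it's worth, wiring such an appender up could look roughly like the 
log4j.properties fragment below. The class name, the Collectors property, and 
the collector URLs are placeholders for illustration, not an existing Chukwa 
configuration:

log4j.rootLogger=INFO, chukwaDirect

log4j.appender.chukwaDirect=org.example.chukwa.DirectCollectorAppender
log4j.appender.chukwaDirect.Collectors=http://collector1:8080/chukwa,http://collector2:8080/chukwa
log4j.appender.chukwaDirect.layout=org.apache.log4j.PatternLayout
log4j.appender.chukwaDirect.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

If a collector rejects or drops a POST, that data is simply gone, which is the 
trade-off the description calls out against writing to the local drive first.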

