[
https://issues.apache.org/jira/browse/CHUKWA-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12756178#action_12756178
]
Jerome Boulon commented on CHUKWA-389:
--------------------------------------
>> if the application is logging faster than the collector is writing
The log4j appender could start another writer that connects to another
collector, but this will not be done in the first iteration. For the first
iteration, the data will be dropped.
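A rough sketch of that drop behavior (class and field names are illustrative,
not the actual patch): the appender hands formatted events to a bounded
in-memory queue that a background sender thread would drain and post to a
collector; when the queue is full, the event is dropped instead of blocking
the application.

    // Illustrative only: a log4j appender that never blocks the application.
    // A background sender thread (not shown) drains the buffer and posts
    // chunks to a collector; if it falls behind, events are dropped.
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.spi.LoggingEvent;

    public class NonBlockingCollectorAppender extends AppenderSkeleton {
      private final BlockingQueue<String> buffer =
          new ArrayBlockingQueue<String>(10000);
      private long dropped = 0;

      protected void append(LoggingEvent event) {
        String line = (layout != null) ? layout.format(event)
                                       : event.getRenderedMessage();
        // offer() returns false instead of blocking when the queue is full
        if (!buffer.offer(line)) {
          dropped++;
        }
      }

      public void close() { closed = true; }

      public boolean requiresLayout() { return false; }
    }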
>> How does the appender find a collector?
The same way as the ChukwaAgent: from a list of collector host/port entries.
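For example, the collector list could be loaded the same way the agent loads
its conf/collectors file, one host:port entry per line (the file name and
format here mirror the agent side; the appender could just as well take the
same list as a comma-separated log4j property):

    // Illustrative: load a ChukwaAgent-style list of collectors.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class Collectors {
      public static List<String> load(String path) throws IOException {
        List<String> hosts = new ArrayList<String>();
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
          String line;
          while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.length() > 0 && !line.startsWith("#")) {
              hosts.add(line); // e.g. "collector1.example.com:8080"
            }
          }
        } finally {
          in.close();
        }
        return hosts;
      }
    }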
>> How do we name the chunks?
One recordType per Log4j appender, so we can keep the Chunk metadata. If an
application needs to send more than one recordType, it will have to do that
through the log4j configuration and start another appender for the second
recordType; see the configuration sketch below.
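With a hypothetical appender class and a recordType appender property (the
class and property names below are placeholders, not the final API), sending
two recordTypes from one application would mean defining two appenders in
log4j.properties:

    # Illustrative log4j configuration: one appender per recordType
    log4j.logger.com.example.access=INFO, chukwaAccess
    log4j.logger.com.example.audit=INFO, chukwaAudit

    log4j.appender.chukwaAccess=org.apache.hadoop.chukwa.inputtools.log4j.ChukwaCollectorAppender
    log4j.appender.chukwaAccess.recordType=AccessLog
    log4j.appender.chukwaAccess.collectors=collector1:8080,collector2:8080

    log4j.appender.chukwaAudit=org.apache.hadoop.chukwa.inputtools.log4j.ChukwaCollectorAppender
    log4j.appender.chukwaAudit.recordType=AuditLog
    log4j.appender.chukwaAudit.collectors=collector1:8080,collector2:8080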
> Send chunks directly from a log4j appender to the collector without writing
> to local drive
> ------------------------------------------------------------------------------------------
>
> Key: CHUKWA-389
> URL: https://issues.apache.org/jira/browse/CHUKWA-389
> Project: Hadoop Chukwa
> Issue Type: New Feature
> Components: data collection
> Reporter: Jerome Boulon
> Assignee: Jerome Boulon
>
> Currently, Chukwa requires the data to first be written to the local drive
> and an agent to run on every single box in order to collect logs.
> This is a good solution if you cannot afford to lose any data, but sometimes
> you don't want to run an agent on every single box.
> In that case, a new log4j appender could be used to send data directly from
> the application to a collector.
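To make the direct path concrete, a minimal sketch of the sender side (the
URL path, class names, and the plain-text body are placeholders; a real
implementation would use the collector's actual chunk serialization):

    // Illustrative: post a batch of log data straight to a collector over
    // HTTP, rotating to the next collector in the list on failure.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CollectorSender {
      private final String[] collectors; // e.g. {"host1:8080", "host2:8080"}
      private int current = 0;

      public CollectorSender(String[] collectors) {
        this.collectors = collectors;
      }

      // Returns false when every collector failed; in the first iteration
      // the caller would simply drop the batch at that point.
      public boolean post(String batch) {
        for (int attempt = 0; attempt < collectors.length; attempt++) {
          try {
            URL url = new URL("http://" + collectors[current] + "/chukwa");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(batch.getBytes("UTF-8"));
            out.close();
            if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
              return true;
            }
          } catch (Exception e) {
            // fall through and try the next collector
          }
          current = (current + 1) % collectors.length;
        }
        return false;
      }
    }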
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.