We have used the fan-out sink and it seems to work pretty well, writing out both to HDFS and the local file system.
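The fan-out itself is just the sink-list syntax: when the collector's sink is written as '[sinkA, sinkB]', every event is delivered to both sinks. Broken out into the dataflow-spec form it would look roughly like this (the <namenode>, <location> and local_file_path parts are placeholders for our own values):

    collector : CollectorSource(35853) | [
        collectorSink("hdfs://<namenode>:54310/<location>/%Y/%m/%d/", "%{host}-%{tailSrcFile}.log"),
        collectorSink("file:///local_file_path", "%{host}-%{tailSrcFile}.log")
    ] ;

The %Y/%m/%d and %{host}/%{tailSrcFile} escapes are expanded per event, so each host/source file pair ends up under its own dated directory.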
Here is the configuration command:

exec config 'collector' 'CollectorSource(35853)' '[collectorSink("hdfs://<namenode>:54310/<location>/%Y/%m/%d/","%{host}-%{tailSrcFile}.log"),collectorSink("file:///local_file_path","%{host}-%{tailSrcFile}.log")]'

~Subbu

On Mon, Nov 14, 2011 at 3:57 AM, Meyer, Dennis <dennis.me...@adtech.com> wrote:

> Hi,
>
> As far as I understood, a Flume agent can send to multiple sinks. Can't you
> just configure two sinks, one as HDFS and one as the normal file system? I'm
> interested in that functionality as well. Does anybody know?
>
> Thanks,
> Dennis
>
>
> On 11.11.11 at 13:50, "Torsten Curdt" <tcu...@vafer.org> wrote:
>
> >No, we have the ASF paperwork on file, but we never got around to releasing
> >it yet.
> >
> >We also added a disk-based ring buffer in case of HDFS cluster
> >downtime, fixed some syslog bugs, and worked on the Ganglia integration
> >(not working as it should, and we ended up going a simpler route).
> >
> >I bet it will need some love to turn into a patch for trunk. I can
> >bring it up and see if we can open source it over on GitHub without
> >spending the additional time, and then see if there is interest in
> >porting some things over.
> >
> >cheers,
> >Torsten

--
Thanks!!
~Subbu