It seems pretty relevant. If you can log directly via NFS, that is a viable alternative.
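For context, here is a minimal sketch of what the NFS approach looks like in practice. The hostnames, export path, and mount point below are hypothetical placeholders, not vendor-specific guarantees; check your cluster's documentation for the actual export name and supported mount options.

```shell
# Mount the cluster's NFS export (host and paths are hypothetical).
sudo mkdir -p /mnt/cluster
sudo mount -t nfs -o nolock clusternode:/exports/cluster /mnt/cluster

# Applications can then write logs straight into the distributed
# filesystem with ordinary file I/O -- no per-host agent required:
myapp 2>> /mnt/cluster/logs/web01/myapp.log

# Or periodically ship an existing log file:
rsync -a /var/log/nginx/access.log /mnt/cluster/logs/web01/
```

The appeal is that legacy systems only need a standard NFS client, so nothing has to be installed or maintained on the web-tier machines themselves.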
On Sat, Apr 21, 2012 at 11:42 AM, alo alt <wget.n...@googlemail.com> wrote:
> We decided NO product and vendor advertising on apache mailing lists!
> I do not understand why you'll put that closed-source stuff from your employer
> in the room. It has nothing to do with Flume or the use cases!
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Apr 21, 2012, at 4:06 PM, M. C. Srivas wrote:
>
>> Karl,
>>
>> Since you did ask for alternatives: people using MapR prefer to use the
>> NFS access to deposit data directly (or access it). It works seamlessly from
>> all Linuxes, Solaris, Windows, AIX, and a myriad of other legacy systems
>> without having to load any agents on those machines, and it is fully
>> HA automatically.
>>
>> Since compression is built into MapR, data coming in over NFS gets
>> compressed automatically without much fuss.
>>
>> With regard to performance, you can get about 870 MB/s per node if you have
>> 10GigE attached (with compression, of course, the effective throughput will
>> surpass that, depending on how well the data compresses).
>>
>>
>> On Fri, Apr 20, 2012 at 3:14 PM, Karl Hennig <khen...@baynote.com> wrote:
>>
>>> I am investigating automated methods of moving our data from the web tier
>>> into HDFS for processing, a process that's performed periodically.
>>>
>>> I am looking for feedback from anyone who has actually used Flume
>>> successfully in a production (redundant, failover) setup. I understand it is
>>> now being largely rearchitected during its incubation as Apache Flume-NG,
>>> so I don't have full confidence in the old, stable releases.
>>>
>>> The other option would be to write our own tools. What methods are you
>>> using for these kinds of tasks? Did you write your own, or does Flume (or
>>> something else) work for you?
>>>
>>> I'm also on the Flume mailing list, but I wanted to ask these questions
>>> here because I'm interested in Flume _and_ alternatives.
>>>
>>> Thank you!