An AppScale appender might be interesting:
http://appscale.cs.ucsb.edu/datastores.html
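
For illustration only, a rough sketch of what such an appender could
look like against the Log4j 1.x AppenderSkeleton API; the AppScaleClient
type here is a made-up placeholder, not a real client library:

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

// Sketch: push each formatted event into a datastore behind AppScale.
// AppScaleClient is hypothetical; substitute whatever client the
// datastore actually exposes.
public class AppScaleAppender extends AppenderSkeleton {

    private final AppScaleClient client = new AppScaleClient();

    @Override
    protected void append(LoggingEvent event) {
        // Format with the configured layout and hand off to the store.
        client.put(getName(), getLayout().format(event));
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }

    @Override
    public void close() {
        client.shutdown(); // release datastore connections
    }
}

A log4j.properties entry pointing an appender at this class would be all
the wiring it needs.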

Gary


On Fri, Mar 22, 2013 at 4:34 PM, Christian Grobmeier <[email protected]> wrote:

> +1
>
> I would love to support a GSoC student, and if your more concrete
> proposal meets some interest here I am willing to actually help.
> That said, while Apache Flume is great, it's maybe a bit "too much". I
> have already had some thoughts on some kind of server which utilizes
> receivers to send data to $x. Fewer features than Flume, but easy to
> set up. Not sure if that has some value.
>
> Pranav, please let us hear more of your ideas.
>
>
>
> On Fri, Mar 22, 2013 at 4:40 PM, Ralph Goers <[email protected]> wrote:
> > The Flume Appender leverages Apache Flume to route data into various
> > places. The primary sponsor of Flume is Cloudera, so naturally Flume
> > supports writing data into Hadoop. In addition, my employer is using
> > Flume to write data into Cassandra. That said, we would welcome
> > contributions, and if you can provide more details on how you would
> > implement your idea I'd love to see them. Perhaps you can create a
> > page on the logging wiki with your proposal.
> >
> > Ralph
> >
> > On Mar 22, 2013, at 2:04 AM, Pranav Bhole wrote:
> >
> > Hello to all,
> >         This is Pranav Bhole; I am a Master's student at The
> > University of Texas at Dallas. My research interest is Big Data. I
> > have been using Log4j extensively at the core of my academic and
> > professional work for 5-6 years. Recently, while facing some of the
> > difficulties of managing terabytes of log files, an idea came to my
> > mind. I would like to implement this idea as a plug-in or new
> > functionality in the existing Log4j appender module as a student of
> > Google Summer of Code 2013.
> >
> > Short description of the idea:
> > A server accumulates a large bulk of log files; in most cases the
> > server lacks the storage space for these log files, and computing
> > over such a bulk of files is costly for the server. With this problem
> > in mind, the idea proposes a module able to move these files into a
> > public cloud (Amazon S3, Azure) or a private cloud (Hadoop) on a
> > rolling basis, driven by the configuration file. To address the
> > computing-layer objective, the idea proposes a Big Data query
> > generator based on the logging format in use. Such Big Data queries
> > would include MapReduce, Pig, etc. An administrator would be able to
> > run these Big Data queries generated by Log4j to track keywords in
> > the logs such as an error number, a timestamp, or any other arbitrary
> > string.
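> >
> > As a rough sketch of the storage-layer half (assuming the AWS SDK for
> > Java; the bucket name, key prefix, and credentials below are only
> > placeholders), a rollover hook could hand each closed log file to S3
> > like this:
> >
> > import java.io.File;
> > import com.amazonaws.auth.BasicAWSCredentials;
> > import com.amazonaws.services.s3.AmazonS3Client;
> >
> > // Sketch only: upload a rolled-over log file to S3, then reclaim the
> > // local disk space. Bucket, key prefix, and credentials are
> > // placeholders.
> > public class S3LogArchiver {
> >
> >     private final AmazonS3Client s3 =
> >             new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
> >
> >     // Called after the appender rolls over to a new file.
> >     public void archive(File rolledFile) {
> >         s3.putObject("my-log-bucket", "logs/" + rolledFile.getName(), rolledFile);
> >         rolledFile.delete(); // free local storage once the upload succeeds
> >     }
> > }
> >
> > The query-generator half could then emit, for example, a Pig or
> > MapReduce job whose filter expression is derived from the configured
> > PatternLayout, so the administrator can grep the archived logs for an
> > error number or timestamp.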
> >
> > I would like to thank all of you for reading this idea. I would
> > really love to get involved in the Log4j development team with your
> > support and suggestions on this idea.
> >
> > Thank you very much.
> >
> > --
> > Pranav Bhole
> > MS in Computer Science student, Fall 2012,
> > University of Texas at Dallas
> > http://www.linkedin.com/in/pranavbhole
> > Cell Phone No: 972-978-6108.
> >
> >
>
>
>
> --
> http://www.grobmeier.de
> https://www.timeandbill.de
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>
>


-- 
E-Mail: [email protected] | [email protected]
JUnit in Action, 2nd Ed: http://bit.ly/ECvg0
Spring Batch in Action: http://bit.ly/bqpbCK
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory
