Hello all,
        This is Pranav Bhole. I am a Master's student at The University of
Texas at Dallas, and my research interest is Big Data. I have been using Log4j
extensively for the past 5-6 years in my academic and professional work.
Recently, while facing difficulties in managing terabytes of log files, an
idea came to mind. I would like to implement this idea as a plugin or an
extension of the existing Log4j appender module as a Google Summer of Code
2013 student.

Short description of the idea:
Servers accumulate large volumes of log files. In most cases the server
lacks the storage space for these files, and computing over such bulk data
on the server itself is costly. With this problem in mind, the idea is to
write a module that can move these files to a public cloud (AWS S3, Azure)
or a private cluster (Hadoop) on a rolling basis, driven by the
configuration file. To address the computation side, the idea also proposes
a Big Data query generator based on the logging format in use; the
generated queries would target MapReduce, Pig, etc. An administrator could
then run these queries, generated by Log4j, to track keywords in the logs
such as an error number, a timestamp, or any other arbitrary string.
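To make the rolling-upload part concrete, here is a minimal sketch in plain Java. The `CloudRollingSketch` class name, the size threshold, and the uploader callback are all hypothetical stand-ins (the callback represents an S3/Azure/HDFS client); this is not an existing Log4j appender API, just an illustration of the roll-then-upload behavior the module would implement.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;
import java.util.function.Consumer;

// Hypothetical sketch: append to a local log file and, once it exceeds a
// size threshold, roll it aside and hand it to an uploader callback.
// The Consumer<Path> stands in for a cloud client (e.g. S3 put + local delete).
class CloudRollingSketch {
    private final Path logFile;
    private final long maxBytes;
    private final Consumer<Path> uploader; // stand-in for the cloud client

    CloudRollingSketch(Path logFile, long maxBytes, Consumer<Path> uploader) {
        this.logFile = logFile;
        this.maxBytes = maxBytes;
        this.uploader = uploader;
    }

    void append(String line) throws IOException {
        Files.write(logFile, (line + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        if (Files.size(logFile) > maxBytes) {
            rollOver();
        }
    }

    private void rollOver() throws IOException {
        // Move the full file aside under a timestamped name, then let the
        // uploader ship it off; the next append recreates a fresh log file.
        Path rolled = logFile.resolveSibling(
                logFile.getFileName() + "." + System.currentTimeMillis());
        Files.move(logFile, rolled, StandardCopyOption.REPLACE_EXISTING);
        uploader.accept(rolled);
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("app", ".log");
        // Tiny threshold so the demo rolls after a couple of lines.
        CloudRollingSketch sketch = new CloudRollingSketch(
                log, 64, p -> System.out.println("UPLOAD " + p.getFileName()));
        for (int i = 0; i < 10; i++) {
            sketch.append("2013-04-01 12:00:0" + i + " INFO request handled");
        }
    }
}
```

In the real module the threshold, destination, and credentials would come from the Log4j configuration file rather than constructor arguments.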

Thank you all for taking the time to read this idea. I would really love to
get involved with the Log4j development team, with your support and
suggestions on this idea.

Thank you very much.

-- 
Pranav Bhole
Student of MS in Computer Science for Fall 2012,
University of Texas at Dallas
http://www.linkedin.com/in/pranavbhole
Cell Phone No: 972-978-6108.
