[ https://issues.apache.org/jira/browse/HADOOP-53?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HADOOP-53:
--------------------------------

    Attachment: mapredDFSLog_v1.patch

Attaching a patch to store the tasks' logs in the FileSystem in use. This is 
useful, for example, for keeping the logs permanently or for accessing them in 
a centralized way. 

The patch adds a new log4j appender, FsLogAppender (referred to as FSLA in the 
logger configuration), which appends logs to files in the FileSystem. The old 
appender (TLA) continues to work. The user can select which appenders and 
which log level to use via the JobConf, e.g.: 
job.setTaskLogRootLogger("INFO,TLA,FSLA"); 
The user can also specify the location where the logs are saved: 
job.setTaskLogDir(Path);
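
A minimal sketch of how a job might enable the new appender, assuming the 
JobConf methods added by this patch (setTaskLogRootLogger and setTaskLogDir); 
the log directory "logs/job-tasks" is only an illustrative choice:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.JobConf;

  public class TaskLogToDfsExample {
    public static void main(String[] args) {
      JobConf job = new JobConf(TaskLogToDfsExample.class);
      job.setJobName("task-log-to-dfs-example");

      // Keep the existing TaskLogAppender (TLA) and also write task logs
      // to the FileSystem through the new FsLogAppender (FSLA), at INFO level.
      job.setTaskLogRootLogger("INFO,TLA,FSLA");

      // Directory on the job's FileSystem where FSLA stores the task logs
      // (hypothetical path, chosen for illustration).
      job.setTaskLogDir(new Path("logs/job-tasks"));

      // ... set mapper/reducer, input/output paths, and submit as usual.
    }
  }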

Note that at DEBUG level the logs from org.apache.hadoop will still pollute the 
logs of the user's program, but we will defer this to a separate issue. 

The appender could also be used to store the logs of the framework itself (for 
debugging, etc.), but again that is a separate issue. 

> MapReduce log files should be storable in dfs.
> ----------------------------------------------
>
>                 Key: HADOOP-53
>                 URL: https://issues.apache.org/jira/browse/HADOOP-53
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Enis Soztutar
>         Attachments: mapredDFSLog_v1.patch
>
>
> It should be possible to cause a job's log output to be stored in dfs.  The 
> jobtracker's log output and (optionally) all tasktracker log output related 
> to a job should be storable in a job-specified dfs directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
