[
https://issues.apache.org/jira/browse/HADOOP-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12470639
]
Andrew McNabb commented on HADOOP-953:
--------------------------------------
It seems like half of the configuration is done in environment variables, and half
is done in the xml config files. I wouldn't mind if settings were available in
both places, but it's really hard when some have to be set in one place and
others in the other. I'm especially confused in this case because there is a
logging option in the xml file: dfs.namenode.logging.level.
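For anyone else who stumbles on this, that option appears to go in
conf/hadoop-site.xml like any other property; a sketch, where "info" is just
the documented default rather than a value I've tuned:

    <property>
      <name>dfs.namenode.logging.level</name>
      <value>info</value>
    </property>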
Anyway, thank you very much for alerting me to the HADOOP_ROOT_LOGGER
environment variable. That will definitely help in the short term. Thanks.
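For the record, the workaround amounts to something like the following, set in
conf/hadoop-env.sh or exported in the shell before starting a daemon. The value
takes the form <level>,<appender>, and the appender name ("console" below) has
to match one defined in conf/log4j.properties, so adjust it if your setup uses
a different appender:

    export HADOOP_ROOT_LOGGER="WARN,console"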
> huge log files
> --------------
>
> Key: HADOOP-953
> URL: https://issues.apache.org/jira/browse/HADOOP-953
> Project: Hadoop
> Issue Type: Improvement
> Affects Versions: 0.10.1
> Environment: N/A
> Reporter: Andrew McNabb
>
> On our system, it's not uncommon to get 20 MB of logs with each MapReduce
> job. It would be very helpful if it were possible to configure Hadoop
> daemons to write logs only when major things happen, but the only conf
> options I could find are for increasing the amount of output. The disk is
> really a bottleneck for us, and I believe that short jobs would run much more
> quickly with less disk usage. We also believe that the high disk usage might
> be triggering a kernel bug on some of our machines, causing them to crash.
> If the 20 MB of logs went down to 20 KB, we would probably still have all of
> the information we needed.
> Thanks!
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.