Hi folks,

I have Hadoop installed on an NFS mount that has a quota of 20000 files. HDFS itself uses the /tmp directory on several network machines.

Everything works fine, but I regularly exceed the file quota because of the number of log files that Hadoop creates in $HADOOP_INSTALL/hadoop-0.13.0/logs/.

Is there a way to redirect the history and userlogs to another location (i.e. the /tmp directory, where I don't have a file limit)? I have tried setting the hadoop.log.dir variable in conf/log4j.properties, but this just seems to introduce errors when running a MapReduce job, without actually changing the location the log files are written to.
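For reference, the change I attempted was along these lines (the target path is just an example):

```
# conf/log4j.properties -- attempted override; the target path is illustrative
hadoop.log.dir=/tmp/hadoop-logs
```

I suspect this gets overridden because the startup scripts pass -Dhadoop.log.dir on the JVM command line, but I'd appreciate confirmation of the right place to set it.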

Thanks for your time,

Ollie
