Thanks Neeraj. Your second suggestion (exporting HADOOP_LOG_DIR before starting Hadoop) worked perfectly.
Thanks again,
Ollie

Quoting "Mahajan, Neeraj" <[EMAIL PROTECTED]>:

Would creating a symlink from $HADOOP_INSTALL/hadoop-0.13.0/logs to
another location solve your problem?
Alternatively, exporting the env var "HADOOP_LOG_DIR=<your new log location>"
before starting Hadoop should also work.
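For example, assuming a Bourne-style shell and /tmp/hadoop-logs as the
target directory (the path is just an illustration), the two options
might look like:

    # Option 1: replace the default logs directory with a symlink
    mv $HADOOP_INSTALL/hadoop-0.13.0/logs /tmp/hadoop-logs
    ln -s /tmp/hadoop-logs $HADOOP_INSTALL/hadoop-0.13.0/logs

    # Option 2: point the daemons at a new log directory before startup
    export HADOOP_LOG_DIR=/tmp/hadoop-logs
    $HADOOP_INSTALL/hadoop-0.13.0/bin/start-all.sh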

~ Neeraj

-----Original Message-----
From: Oliver Haggarty [mailto:[EMAIL PROTECTED]
Sent: Friday, July 06, 2007 4:36 AM
To: [email protected]
Subject: Redirecting logs directory

Hi folks,

I have Hadoop installed on an NFS mount, on which I have a file limit of
20000. HDFS is configured to use the /tmp directory on several networked
machines.

Everything works fine, but I find that I regularly exceed my file
limit due to the number of log files that Hadoop creates in
$HADOOP_INSTALL/hadoop-0.13.0/logs/.

I was wondering if there is a way to redirect the history and userlogs
to another location (i.e. the /tmp directory, where I don't have a file
limit)? I have tried setting the hadoop.log.dir variable in the
conf/log4j.properties file, but this just seems to introduce errors when
running a MapReduce job, without actually changing the location to which
the log files are written.
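For concreteness, the change I tried was roughly this line in
conf/log4j.properties (the path is just an example):

    hadoop.log.dir=/tmp/hadoop-logs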

Thanks for your time,

Ollie
