Re: HADOOP_ROOT_LOGGER

2014-05-22 Thread Robert Rati
In my experience, the default HADOOP_ROOT_LOGGER definition overrides any root logger defined in log4j.properties, which is where the problems have arisen. If the HADOOP_ROOT_LOGGER definition in hadoop-config.sh were removed, wouldn't the root logger defined in the log4j.properties file take effect instead?
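The override Robert describes happens through log4j's variable substitution: the stock log4j.properties sets the root logger indirectly via ${hadoop.root.logger}, and log4j resolves that placeholder from JVM system properties before consulting the file's own definition, so the -D flag injected by hadoop-config.sh always wins. An illustrative excerpt (exact appender names vary by Hadoop version):

```properties
# Excerpt in the style of Hadoop's stock log4j.properties (illustrative):
hadoop.root.logger=INFO,console
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# If the JVM was started with -Dhadoop.root.logger=DEBUG,console, that
# system property shadows the hadoop.root.logger line above during
# substitution, so the file's default is never used for the root logger.
```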

Re: HADOOP_ROOT_LOGGER

2014-05-22 Thread Colin McCabe
[...you may not have] permission to do this on a production cluster. Doing something like HADOOP_ROOT_LOGGER=DEBUG,console hadoop fs -cat /foo has helped me diagnose problems in the past. best, Colin

Re: HADOOP_ROOT_LOGGER

2014-05-22 Thread Robert Rati
Ah, that makes sense. Would it make sense, then, to default the root logger to the one defined in the log4j.properties file instead of the static value in the script? That way an admin can set all desired logging properties in log4j.properties, but can still override them with HADOOP_ROOT_LOGGER when needed.
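A hypothetical sketch of Robert's suggestion: only inject -Dhadoop.root.logger when the admin actually exported HADOOP_ROOT_LOGGER, leaving the log4j.properties default in force otherwise. The function name and structure are mine for illustration, not a real patch:

```shell
# Build the JVM options string. $1 is the existing HADOOP_OPTS value;
# HADOOP_ROOT_LOGGER is read from the environment.
build_hadoop_opts() {
  if [ -n "${HADOOP_ROOT_LOGGER}" ]; then
    # Admin explicitly asked for an override: pass it through to the JVM.
    printf '%s -Dhadoop.root.logger=%s\n' "$1" "${HADOOP_ROOT_LOGGER}"
  else
    # No override: emit HADOOP_OPTS unchanged, so the root logger
    # defined in log4j.properties takes effect.
    printf '%s\n' "$1"
  fi
}
```

Compared with the current script, the only behavioral change is the missing hard-coded INFO,console fallback.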

HADOOP_ROOT_LOGGER

2014-05-21 Thread Robert Rati
I noticed this line in hadoop-config.sh: HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}" which sets a root logger if HADOOP_ROOT_LOGGER isn't set. Why is this needed? There is a log4j.properties file provided that defines a default root logger.
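The fallback in that line is ordinary POSIX parameter expansion. A minimal sketch of just that behavior (the function name is mine, not Hadoop's):

```shell
# Mimics ${HADOOP_ROOT_LOGGER:-INFO,console} from hadoop-config.sh:
# use the environment variable if it is set and non-empty, otherwise
# fall back to the static default baked into the script.
effective_root_logger() {
  printf '%s\n' "${HADOOP_ROOT_LOGGER:-INFO,console}"
}
```

Note that because some value is always produced, the JVM always receives a -Dhadoop.root.logger flag, which is exactly why the log4j.properties default never gets a chance to apply.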

Re: HADOOP_ROOT_LOGGER

2014-05-21 Thread Vinayakumar B
Hi Robert, I understand your confusion. HADOOP_ROOT_LOGGER defaults to INFO,console if it hasn't been set to anything, so logs are displayed on the console itself. This is true for any client command you run, for example: hdfs dfs -ls /. But the server scripts (hadoop-daemon.sh and friends) export a file-based default for HADOOP_ROOT_LOGGER before starting the daemon, so daemon logs go to the log files instead of the console.
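The client/daemon split Vinayakumar describes can be sketched like this (assuming stock Hadoop 2.x scripts, where hadoop-daemon.sh exports INFO,RFA, the rolling-file appender, as its default; the function names here are illustrative, not real script functions):

```shell
# Default used for client commands (hdfs dfs -ls /, hadoop fs -cat, ...):
# logs go to the console unless the user overrides HADOOP_ROOT_LOGGER.
client_default_logger() {
  printf '%s\n' "${HADOOP_ROOT_LOGGER:-INFO,console}"
}

# Default exported by the daemon start scripts before they reach the
# shared hadoop-config.sh logic: logs go to rolling files on disk.
daemon_default_logger() {
  printf '%s\n' "${HADOOP_ROOT_LOGGER:-INFO,RFA}"
}
```

Both paths funnel through the same ${HADOOP_ROOT_LOGGER:-...} expansion; only the fallback value differs, which is why client output appears on the console while daemon output lands in the log directory.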