First, you really don't want to launch the job from the cluster itself but
from an edge node.
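
For example, from an edge node that has the Hadoop client installed, a job
submission might look like this (the jar and class names are placeholders):

    hadoop jar my-job.jar com.example.MyJob /input /output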

To answer your question, in a word, yes. You should keep as consistent a set
of configuration files across the nodes as possible, noting that over time
this may not be achievable as hardware configs may change.
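
One common way to keep the conf files in sync is to push them from one node
to the rest, e.g. with rsync. A minimal sketch, assuming passwordless ssh and
the hostnames listed in the standard slaves file:

    # push the local Hadoop conf to every node listed in the slaves file
    for host in $(cat /hadoop/conf/slaves); do
      rsync -av /hadoop/conf/ "$host":/hadoop/conf/
    done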


Sent from a remote device. Please excuse any typos...

Mike Segel

On Mar 27, 2012, at 8:42 PM, Jane Wayne <jane.wayne2...@gmail.com> wrote:

> If I have a Hadoop cluster of 10 nodes, do I have to modify the
> /hadoop/conf/log4j.properties files on ALL 10 nodes to be the same?
> 
> Currently, I ssh into the master node to execute a job. This node is the
> only place where I have modified the log4j.properties file. I notice that
> although my log files are being created, nothing is being written to them.
> When I test on Cygwin, the logging works; however, when I go to a live
> cluster (i.e. Amazon Elastic MapReduce), the logging output on the master
> node no longer works. I wonder if logging is happening at each slave/task
> node?
> 
> Could someone explain logging or point me to the documentation discussing
> this issue?
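
For what it's worth, each task JVM reads the log4j.properties on its own
node, so an appender defined only on the master won't take effect on the
slaves; that's why the config needs to be consistent cluster-wide. A minimal
sketch of such a properties file (the logger, appender name, and file path
here are illustrative, not from your setup):

    # illustrative custom file appender; names and path are placeholders
    log4j.logger.com.example=INFO, myAppender
    log4j.appender.myAppender=org.apache.log4j.FileAppender
    log4j.appender.myAppender.File=/tmp/myjob.log
    log4j.appender.myAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.myAppender.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n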
