For weeks I've been using the following trick to disable the log4j INFO messages in
the spark-shell when running a cluster on EC2 that was started by the Spark-provided
EC2 scripts.

cp ./conf/log4j.properties.template ./conf/log4j.properties


I then change log4j.rootCategory=INFO to log4j.rootCategory=WARN.
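In case it's useful, and assuming the template's root logger line reads
"log4j.rootCategory=INFO, console" (which I believe is what the 1.1.0 template
ships with), the edit after the cp above boils down to:

# drop the root logger threshold from INFO to WARN (GNU sed; on OS X use sed -i '')
sed -i 's/^log4j.rootCategory=INFO/log4j.rootCategory=WARN/' ./conf/log4j.properties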

This all stopped working on Wednesday, when I could no longer start a cluster on
EC2 (using the Spark-provided EC2 scripts).  That failure turned out to be caused
by a change to a script referenced by the EC2 scripts, and the referenced script
has since been reverted.  I mention this because I don't know whether it is
connected to my current problem, but it is interesting that the two problems
started at the same time.

Now, when I start a cluster on EC2 and then launch the spark-shell, I can no
longer disable the log4j messages using the above trick.  I'm using Apache
Spark 1.1.0.

What's interesting is that when I start the cluster locally on my laptop (also
using Spark 1.1.0), the above trick for disabling log4j in the spark-shell still
works.  So the issue appears to be specific to EC2, possibly to something
referenced by the Spark-provided EC2 startup scripts, but that is purely a guess
on my part.

I'm wondering if anyone else has noticed this issue and, if so, whether they have a workaround.

Thanks.

Darin.
