Can anyone help me here?

I've got a small Spark cluster running on three machines - hadoop-s1,
hadoop-s2, and hadoop-s3 - with s1 acting as master, and all three acting as
workers.  It works fine - I can connect with spark-shell, I can run jobs, and
I can see the web UI.

The web UI says:
Spark Master at spark://hadoop-s1.oculus.local:7077
URL: spark://hadoop-s1.oculus.local:7077

I've connected to it fine using both a Scala SparkContext and a JavaSparkContext.
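
For what it's worth, the stand-alone connection looks roughly like this
(simplified; the app name and the trivial count job are placeholders, not my
exact code):

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class StandaloneConnectionTest {
    public static void main(String[] args) {
        // Point the driver at the standalone master shown in the web UI.
        SparkConf conf = new SparkConf()
                .setMaster("spark://hadoop-s1.oculus.local:7077")
                .setAppName("standalone-connection-test");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Trivial job just to prove the cluster responds.
        long n = sc.parallelize(Arrays.asList(1, 2, 3)).count();
        System.out.println("count = " + n);

        sc.stop();
    }
}

Something like that runs fine outside Tomcat.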

But when I try connecting from within a Tomcat service, I get the following
messages:
[INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class - Connecting to master spark://hadoop-s1.oculus.local:7077...
[INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class - Connecting to master spark://hadoop-s1.oculus.local:7077...
[ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All masters are unresponsive! Giving up.
[ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark cluster looks dead, giving up.
[ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting due to error from cluster scheduler: Spark cluster looks down

When I look at the Spark server logs, there isn't even a sign of an
attempted connection.

I'm using a JavaSparkContext, and I've printed out the parameters I pass in;
the exact same parameters work fine in a stand-alone program.
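
Inside Tomcat I create the context in essentially the same way - roughly like
this (again simplified; the servlet and app name are placeholders):

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkConnectServlet extends HttpServlet {
    private JavaSparkContext sc;

    @Override
    public void init() throws ServletException {
        // Same master URL and parameters that work in the stand-alone test,
        // just constructed from within Tomcat's servlet lifecycle.
        SparkConf conf = new SparkConf()
                .setMaster("spark://hadoop-s1.oculus.local:7077")
                .setAppName("tomcat-spark-test");
        sc = new JavaSparkContext(conf);
    }

    @Override
    public void destroy() {
        if (sc != null) {
            sc.stop();
        }
    }
}

It's the JavaSparkContext construction in init() that produces the "All
masters are unresponsive" errors above.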

Anyone have a clue why this fails? Or even how to find out why it fails?


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenf...@oculusinfo.com
