That was it, thanks!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Use-mvn-run-Spark-program-occur-problem-tp1751p6512.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I finally got it working. Main points:
- I had to add the hadoop-client dependency to avoid a strange EOFException.
- I had to set SPARK_MASTER_IP in conf/start-master.sh to hostname -f
instead of hostname, since Akka does not seem to work properly with plain
host names / IPs; it requires fully qualified domain names.
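For reference, the dependency part can be expressed in build.sbt roughly like this (a sketch only; the version numbers are illustrative and should match your cluster):

```scala
// build.sbt (sketch) -- versions are illustrative, match them to your cluster
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % "0.9.1",
  // hadoop-client must match the Hadoop version your cluster runs;
  // a mismatch is one known cause of the EOFException mentioned above
  "org.apache.hadoop" %  "hadoop-client" % "1.2.1"
)
```

For the address, something like export SPARK_MASTER_IP=$(hostname -f) pins the master to the fully qualified name.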
Hi Andrei,
I think the preferred way to deploy Spark jobs is by using the sbt package
task instead of using the sbt assembly plugin. In any case, as you comment,
the mergeStrategy in combination with some dependency exclusions should fix
your problems. Have a look at this gist
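For anyone reading along, a merge strategy along these lines is what usually resolves the duplicate-file errors when assembling (a sketch only; the exact sbt-assembly syntax varies between plugin versions):

```scala
// build.sbt (sketch) -- sbt-assembly merge strategy, old-style sbt syntax
import sbtassembly.Plugin._
import AssemblyKeys._

mergeStrategy in assembly <<= (mergeStrategy in assembly) { old =>
  {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard // drop signature files
    case "reference.conf"              => MergeStrategy.concat  // merge Akka configs
    case x                             => old(x)                // default for the rest
  }
}
```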
Same here, got stuck at this point. Any hints on what might be going on?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Akka-Connection-refused-standalone-cluster-using-spark-0-9-0-tp1297p6463.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
During the last few days I've been trying to deploy a Scala job to a
standalone cluster (master + 4 workers) without much success, although it
worked perfectly when launched from the Spark shell, that is, from the
Scala REPL (pretty strange, this would mean my cluster config was actually
I am experiencing the same issue (I tried both using Kryo as the serializer
and increasing the buffer size up to 256M; my objects are much smaller,
though). I'm sharing my registrator class just in case:
https://gist.github.com/JordiAranda/5cc16cf102290c413c82
Any hints would be highly appreciated.
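In case it helps others hitting the same buffer error, the relevant settings look roughly like this (a sketch with Spark 0.9-era property names; MyRegistrator stands in for your own registrator class):

```scala
// Sketch: enabling Kryo and enlarging the serializer buffer (Spark 0.9-era names).
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "MyRegistrator") // hypothetical registrator class
  // buffer size in MB; must fit the largest single object you serialize
  .set("spark.kryoserializer.buffer.mb", "256")
```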
Thanks for the heads up, I also experienced this issue.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/file-not-found-tp1854p6438.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.