I am having the same problems. Did you find a fix?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-3-build-with-hive-support-fails-tp22215p22309.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Did you ever find a solution to this problem? I'm having similar issues.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-IllegalStateException-unread-block-data-while-running-the-sampe-WordCount-program-from-Ecle-tp8388p11412.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks AL!
That's what I thought. I've set up Nexus to maintain the Spark libs and
download them when needed.
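For anyone setting up the same thing, a hedged build.sbt sketch — the Nexus URL is a placeholder, not one from this thread — pointing sbt at an internal mirror and declaring Spark as a provided dependency:

```scala
// build.sbt -- resolve Spark artifacts from an internal Nexus mirror.
// The repository URL below is a placeholder; substitute your own Nexus group URL.
resolvers += "internal-nexus" at "http://nexus.example.local/nexus/content/groups/public"

// "provided" keeps Spark out of the assembly jar, since the cluster supplies it.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1" % "provided"
```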
For development purposes, suppose we have a dev cluster. Is it possible to
run the driver program locally (on a developer's machine)?
I.e. just run the driver from the IDE and have it
I'm trying to run a local driver (on a development machine) and have this
driver communicate with the Spark master and workers; however, I'm having a
few problems getting the driver to connect and run a simple job from within
an IDE.
It all looks like it works, but when I try to do something simple
The code for this example is very simple:

object SparkMain extends App with Serializable {
  val conf = new SparkConf(false)
    //.setAppName("cc-test")
    //.setMaster("spark://hadoop-001:7077")
    //.setSparkHome("/tmp")
    .set("spark.driver.host", "192.168.23.108")  // SparkConf.set takes String values
    .set("spark.cores.max", "10")
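For what it's worth, here is a hedged completion of a driver like the one above. The SparkContext, the setJars call, and the trivial job are assumptions on my part (the jar path is a placeholder), not code from this thread — but shipping the IDE-built jar to the executors with setJars is usually the missing piece when a locally run driver can connect yet fails on the first real job, since the workers otherwise lack the driver's classes:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkMain extends App with Serializable {
  val conf = new SparkConf(false)
    .setAppName("cc-test")
    .setMaster("spark://hadoop-001:7077")
    // Workers must be able to connect back to the developer machine.
    .set("spark.driver.host", "192.168.23.108")
    .set("spark.cores.max", "10")
    // Placeholder path: the project jar built by sbt/the IDE, so executors
    // can deserialize the driver's classes (closures, App object, etc.).
    .setJars(Seq("target/scala-2.10/cc-test.jar"))

  val sc = new SparkContext(conf)
  // A minimal action: if this count succeeds, driver <-> cluster wiring works.
  println(sc.parallelize(1 to 10).count())
  sc.stop()
}
```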
Hi all,
I'm trying to get the jobserver working with Spark 1.0.1. I've got it
building, tests passing and it connects to my Spark master (e.g.
spark://hadoop-001:7077).
I can also pre-create contexts; these show up in the Spark master console,
i.e. on hadoop-001:8080.
The problem is that after I
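For context, jobs submitted against a pre-created jobserver context implement the jobserver's SparkJob trait. A minimal sketch follows — the trait and method names match the spark-jobserver project of the Spark 1.0.x era as I recall them, so treat this as an illustration rather than a verified API reference, and the job itself (a trivial count) is my own example:

```scala
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

// A trivial job the jobserver can run in a pre-created (or ad-hoc) context.
object CountJob extends SparkJob {
  // Cheap sanity check run before the job is scheduled.
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  // The actual work; the return value is serialized back in the job result.
  override def runJob(sc: SparkContext, config: Config): Any =
    sc.parallelize(1 to 100).count()
}
```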