Thanks Tim,
There's a little more to it in fact - if I use the
pre-built-with-hadoop-2.6 binaries, all is good (with correctly named
tarballs in HDFS). Using the build pre-built with user-provided Hadoop
(including setting SPARK_DIST_CLASSPATH in spark-env.sh), I get the
JNI exception.
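For context, the "with user-provided Hadoop" builds pick up their Hadoop jars via SPARK_DIST_CLASSPATH rather than bundling them; a minimal conf/spark-env.sh sketch (the Hadoop install path here is an assumption, not from the thread):

```shell
# conf/spark-env.sh -- sketch for a "user-provided Hadoop" Spark build
# /opt/hadoop is an illustrative path; point it at your actual Hadoop install.
# `hadoop classpath` prints the jars/dirs Spark needs on its classpath.
export SPARK_DIST_CLASSPATH=$(/opt/hadoop/bin/hadoop classpath)
```

Note this only affects the driver-side launch; the executors on the Mesos slaves unpack whatever tarball the executor URI points at, which is where the naming issue below comes in.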
Aha
Hi Adrian,
Spark expects a specific naming of the tgz and of the folder inside
it, matching what running make-distribution.sh --tgz in the Spark
source folder generates.
If you use a Spark 1.4 tgz generated with that script, keep the same name
when you upload it to HDFS again, and fix the URI, then it
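To illustrate the naming constraint: the Mesos executor fetches the tgz, untars it, and expects the top-level folder name to line up with the tarball name. A sketch that builds a dummy tarball the way make-distribution.sh lays one out (all names here are illustrative, not taken from the thread):

```shell
# Sketch: the top-level folder inside the tgz must match the tarball's name,
# because the Mesos executor untars the fetched file and cds into that folder.
set -e
dir=$(mktemp -d)

# A make-distribution.sh --tgz --name os1 build would produce something like
# spark-1.5.0-bin-os1.tgz containing a spark-1.5.0-bin-os1/ folder.
mkdir -p "$dir/spark-1.5.0-bin-os1"
touch "$dir/spark-1.5.0-bin-os1/RELEASE"
tar -czf "$dir/spark-1.5.0-bin-os1.tgz" -C "$dir" spark-1.5.0-bin-os1

# Listing the archive shows the folder name Spark will derive from the tgz name:
tar -tzf "$dir/spark-1.5.0-bin-os1.tgz" | head -1
```

Renaming the tgz in HDFS without renaming the folder inside (or vice versa) breaks this correspondence, which is one way to end up with executors that fetch the file but fail to start.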
5mins later...
Trying 1.5 with a fairly plain build:
./make-distribution.sh --tgz --name os1 -Phadoop-2.6
and on my first attempt stderr showed:
I0909 15:16:49.392144 1619 fetcher.cpp:441] Fetched
'hdfs:///apps/spark/spark15.tgz' to
'/tmp/mesos/slaves/20150826-133446-3217621258-5050-4064-S1/f
I'm trying to run Spark (1.4.1) on top of Mesos (0.23). I've followed
the instructions (uploaded the Spark tarball to HDFS, set the executor
URI in both places, etc.) and yet on the slaves it fails to launch even
the SparkPi example, with a JNI error. It does run with a local master. A
day of debugging
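For anyone following along, the "both places" for the executor URI usually means the Spark config and the environment; a sketch of the two settings (the HDFS path and tarball name are assumptions based on the paths mentioned earlier in the thread, not confirmed values):

```shell
# conf/spark-defaults.conf (one line, not shell):
#   spark.executor.uri  hdfs:///apps/spark/spark-1.4.1-bin-hadoop2.6.tgz

# conf/spark-env.sh -- the environment-variable counterpart:
export SPARK_EXECUTOR_URI=hdfs:///apps/spark/spark-1.4.1-bin-hadoop2.6.tgz
```

Both should point at the same tarball, and per the advice above, the tgz name should match the folder name inside it.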