Or you can call sc.addJar("/path/to/the/jar"). I haven't tested it with an HDFS path,
though it works fine with a local path.
Thanks
Best Regards
On Wed, Jun 10, 2015 at 10:17 AM, Jörn Franke jornfra...@gmail.com wrote:
I am not sure they work with HDFS paths. You may want to look at the
source code.
You can put a long Thread.sleep in the code (e.g. Thread.sleep(100000)) to keep the UI
available for quite some time. (Put it just before starting any of your transformations.)
Or you can enable the Spark history server
(https://spark.apache.org/docs/latest/monitoring.html) too.
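For reference, enabling the history server usually means turning on event logging first. A minimal sketch of the relevant spark-defaults.conf settings (the log directory below is a placeholder; adjust it to your cluster):

```
# conf/spark-defaults.conf -- the hdfs:///spark-events directory is a placeholder
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-events
spark.history.fs.logDirectory    hdfs:///spark-events
```

Then start the server with sbin/start-history-server.sh; completed applications' UIs, including the Environment tab, stay browsable after the driver exits.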
Thanks Akhil:
The driver fails too fast for me to get a look at port 4040. Is there any other way to see
the download-and-ship process of the files?
Is the driver supposed to download these jars from HDFS to some location, and then ship
them to the executors?
I can see from the log that the driver downloaded the
Once you submit the application, you can check the Environment tab in the driver UI
(running on port 4040) to see whether the jars you added got shipped or not. If they were
shipped and you are still getting NoClassDefFoundError exceptions, then it means you have
a jar conflict, which you can resolve.
Thanks so much!
I did put a sleep in my code to keep the UI available.
Now, from the UI, I can see:
· In the “Spark Properties” section, spark.jars and spark.files are
set as I want.
· In the “Classpath Entries” section, my jar and file paths are
there (with an HDFS path).
I am not sure they work with HDFS paths. You may want to look at the
source code. Alternatively, you can create a fat jar containing all jars
(have your build tool set up META-INF correctly). This always works.
On Wed, Jun 10, 2015 at 6:22 AM, Dong Lei dong...@microsoft.com wrote:
Hi Jörn:
I started to check the code, and sadly it seems it does not work with an HDFS path:
In HttpFileServer.scala:
def addFileToDir:
...
Files.copy
...
It looks like it only copies the file from local to
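To illustrate why a copy built on the local filesystem API cannot consume HDFS URIs, here is a small standalone Java sketch. It uses java.nio rather than Spark's actual helper, so treat it as an approximation of the behavior, not the real implementation: the JVM has no java.nio provider registered for the "hdfs" scheme, so an HDFS URI cannot even be turned into a local Path, let alone copied.

```java
import java.net.URI;
import java.nio.file.FileSystemNotFoundException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalOnlyCopyDemo {
    public static void main(String[] args) {
        // A plain local path resolves fine against the default "file" provider.
        Path local = Paths.get("/tmp/1.jar");
        System.out.println("local path resolved: " + local);

        // An HDFS URI fails immediately: no installed provider handles "hdfs",
        // so the local file API throws before any copy could happen.
        try {
            Paths.get(URI.create("hdfs://localhost/1.jar"));
            System.out.println("hdfs path resolved");
        } catch (FileSystemNotFoundException e) {
            System.out.println("hdfs scheme not supported by the local file API");
        }
    }
}
```

This matches the symptom in the thread: a copy routine written in terms of local files silently assumes the source is on the local filesystem.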
Hi, spark-users:
I'm using spark-submit to submit multiple jars and files (all in HDFS) to run a
job, with the following command:
spark-submit
--class myClass
--master spark://localhost:7077
--deploy-mode cluster
--jars hdfs://localhost/1.jar,hdfs://localhost/2.jar
--files
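For readers hitting the same issue, the general shape of such an invocation looks like the sketch below. This is not the original poster's exact command: the jar URLs, the --files placeholder, and the application jar are all stand-ins. Note that --jars takes a comma-separated list with no spaces between entries.

```
spark-submit \
  --class myClass \
  --master spark://localhost:7077 \
  --deploy-mode cluster \
  --jars hdfs://localhost/1.jar,hdfs://localhost/2.jar \
  --files <files> \
  <application-jar>
```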