at java.io.FileInputStream.&lt;init&gt;(FileInputStream.java:146)
at java.io.FileInputStream.&lt;init&gt;(FileInputStream.java:101)
at com.test.batch.modeltrainer.ModelTrainerMain$.deSerializeMapFromFile(ModelTrainerMain.scala:96)
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/spark-submit-command-line-with-files-tp14645p14719.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
the file on hdfs ?
-C
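For context on the HDFS suggestion: `--files` also accepts an `hdfs://` URI, in which case executors fetch the file from HDFS instead of having the driver ship a local copy. A minimal illustrative invocation (the paths, class name, and jar name are placeholders, not taken from this thread):

```shell
# Illustrative only: file path, master, main class, and jar are placeholders.
spark-submit \
  --master yarn \
  --files hdfs:///user/me/my.file \
  --class com.test.batch.modeltrainer.ModelTrainerMain \
  modeltrainer.jar
```

Inside tasks, the shipped file is then resolved with `SparkFiles.get("my.file")`.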
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Looking at the Scala code at SparkFiles.scala:37, it looks like SparkEnv.get is
returning null.
Thanks
Hey, just a minor clarification: you _can_ use SparkFiles.get in your
application, but only if the call runs on the executors, e.g. in the following way:
sc.parallelize(1 to 100).map { i => SparkFiles.get("my.file") }.collect()
But not in general (otherwise NPE, as in your case). Perhaps this should be
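To make the driver/executor distinction concrete, here is a minimal sketch. The app skeleton and the file name `my.file` are illustrative; it assumes the job was launched with `spark-submit --files my.file ...`:

```scala
// Sketch: where SparkFiles.get is safe to call.
// Assumes launch via: spark-submit --files my.file ... (file name illustrative)
import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

object FilesDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("files-demo"))

    // OK: this closure runs inside tasks on the executors, where SparkEnv
    // is initialized, so SparkFiles.get resolves the local copy of the
    // file that --files shipped to each executor.
    val firstLines = sc.parallelize(1 to 100).map { _ =>
      scala.io.Source.fromFile(SparkFiles.get("my.file")).getLines().next()
    }.collect()

    // Risky: calling SparkFiles.get on the driver before/outside an
    // initialized SparkContext is what produces the NPE in this thread
    // (SparkEnv.get returns null at SparkFiles.scala:37).
    sc.stop()
  }
}
```

The point is simply that the resolved path is only guaranteed to exist where Spark has distributed the file, i.e. inside task code on the executors.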