See SPARK-4160. Long story short: you need to upload the files and
jars to some shared storage (like HDFS) manually.
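A minimal sketch of that approach, assuming an HDFS layout under /apps/myapp, a standalone master at spark://<master-host>:7077, and an application jar named myapp.jar (all placeholders, not taken from your command):

  # Upload the dependencies once to storage every node can read:
  hdfs dfs -mkdir -p /apps/myapp/lib /apps/myapp/conf
  hdfs dfs -put lib/spark-streaming-kafka-0-10_2.11-2.0.2.jar /apps/myapp/lib/
  hdfs dfs -put conf/myLogback.xml /apps/myapp/conf/

  # Then reference hdfs:// URIs instead of local paths in spark-submit:
  spark-submit --verbose --deploy-mode cluster \
    --master spark://<master-host>:7077 \
    --jars hdfs:///apps/myapp/lib/spark-streaming-kafka-0-10_2.11-2.0.2.jar \
    --files hdfs:///apps/myapp/conf/myLogback.xml \
    --driver-java-options "-Dlogback.configurationFile=myLogback.xml" \
    --class com.example.Launcher \
    hdfs:///apps/myapp/myapp.jar

  # Note: whether --files lands the logback file in the driver's working
  # directory in standalone cluster mode may depend on your Spark version;
  # verify that, or keep that one file on each node's local disk.

The point is just that every file and jar the driver needs has to be reachable from whichever worker ends up running it.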
On Wed, Sep 5, 2018 at 2:17 AM Guillermo Ortiz Fernández wrote:
I'm using standalone cluster and the final command I'm trying is:

spark-submit --verbose --deploy-mode cluster \
  --driver-java-options "-Dlogback.configurationFile=conf/i${1}Logback.xml" \
  --class com.example.Launcher \
  --driver-class-path lib/spark-streaming-kafka-0-10_2.11-2.0.2.jar:lib/kafka-clients
I want to execute my processes in cluster mode. As I don't know where the
driver will be executed, I have to make available all the files it needs. I
understand that there are two options: copy all the files to all nodes or
copy them to HDFS.
My doubt is, if I want to put all the files in HDFS, isn't