> Just to be clear: in client mode things work right? (Although I'm not really familiar with how client mode works in k8s - never tried it.)
If the driver runs on the submission client machine, yes, it should just work. If the driver runs in a pod, however, it faces the same problem as in cluster mode.

Yinan

On Fri, Oct 5, 2018 at 11:06 AM Stavros Kontopoulos <stavros.kontopou...@lightbend.com> wrote:

> @Marcelo is correct. Mesos does not have something similar. Only Yarn does,
> due to the distributed cache thing.
> I have described most of the above in the jira; there are also some other
> options.
>
> Best,
> Stavros
>
> On Fri, Oct 5, 2018 at 8:28 PM, Marcelo Vanzin <van...@cloudera.com.invalid> wrote:
>
>> On Fri, Oct 5, 2018 at 7:54 AM Rob Vesse <rve...@dotnetrdf.org> wrote:
>> > Ideally this would all just be handled automatically for users in the
>> > way that all other resource managers do
>>
>> I think you're giving other resource managers too much credit. In
>> cluster mode, only YARN really distributes local dependencies, because
>> YARN has that feature (its distributed cache) and Spark just uses it.
>>
>> Standalone doesn't do it (see SPARK-4160), and I don't remember seeing
>> anything similar on the Mesos side.
>>
>> There are things that could be done; e.g. if you have HDFS you could
>> do a restricted version of what YARN does (upload files to HDFS, and
>> change the "spark.jars" and "spark.files" URLs to point to HDFS
>> instead). Or you could turn the submission client into a file server
>> that the cluster-mode driver downloads files from - although that
>> requires connectivity from the driver back to the client.
>>
>> Neither is great, but better than not having that feature.
>>
>> Just to be clear: in client mode things work right? (Although I'm not
>> really familiar with how client mode works in k8s - never tried it.)
>>
>> --
>> Marcelo
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
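[Editor's note: to make the first option Marcelo sketches above concrete (upload local dependencies to HDFS and rewrite the "spark.jars"/"spark.files" entries to point at HDFS), here is a minimal pre-submission sketch in Scala using the Hadoop FileSystem API. This is not existing Spark functionality; the namenode address and staging directory are assumptions chosen purely for illustration.]

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object StageDepsToHdfs {
  // Hypothetical staging location; not a Spark default, just an example.
  val stagingDir = "hdfs://namenode:8020/user/spark/staging"

  /** Copy each local file to HDFS and return its new hdfs:// URI. */
  def stage(localPaths: Seq[String]): Seq[String] = {
    val fs = FileSystem.get(new URI(stagingDir), new Configuration())
    localPaths.map { p =>
      val src = new Path(p)
      val dst = new Path(stagingDir, src.getName)
      // delSrc = false keeps the local copy; overwrite = true allows re-staging.
      fs.copyFromLocalFile(false, true, src, dst)
      dst.toUri.toString
    }
  }

  def main(args: Array[String]): Unit = {
    // Rewrite the local entries of spark.jars / spark.files to HDFS URIs
    // before invoking spark-submit; the cluster-mode driver and executors
    // can then fetch them without any connectivity back to the client.
    val jars  = stage(Seq("/local/path/dep1.jar", "/local/path/dep2.jar"))
    val files = stage(Seq("/local/path/app.properties"))
    println(s"--conf spark.jars=${jars.mkString(",")}")
    println(s"--conf spark.files=${files.mkString(",")}")
  }
}
```

The appeal of this approach is that it only restages dependencies once, at submission time, rather than requiring the submission client to stay reachable as a file server for the lifetime of the driver.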