Okay, I see what's going on here. The way Spark is coded, the driver
container image (specified by --conf spark.kubernetes.driver.container.image)
and the executor container image (specified by --conf
spark.kubernetes.executor.container.image) are required. If they're not
specified, Spark falls back to --conf spark.kubernetes.container.image.
The pod template feature was coded such that even if the image is
specified in the YAML, those conf properties take priority and override
the value set in the YAML file.
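
For example, a submit command with the image set via conf might look
like this (just a sketch; the registry, image name, template paths, and
app details are placeholders for your setup):

  spark-submit \
    --master k8s://https://<k8s-apiserver>:6443 \
    --deploy-mode cluster \
    --conf spark.kubernetes.container.image=myregistry/spark:3.0.0 \
    --conf spark.kubernetes.driver.podTemplateFile=driver-template.yaml \
    --conf spark.kubernetes.executor.podTemplateFile=executor-template.yaml \
    --class com.example.MyApp \
    local:///opt/spark/jars/my-app.jar

Setting spark.kubernetes.container.image once covers both driver and
executor, since the driver- and executor-specific confs fall back to it.

And to be clear about the override: even if executor-template.yaml sets
an image, e.g. (the container name here is illustrative):

  apiVersion: v1
  kind: Pod
  spec:
    containers:
    - name: executor
      image: myregistry/spark:3.0.0

the value from the conf is what ends up on the pod, and with no image
conf set at all you hit the "Must specify the driver container image"
error before anything is launched.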
So basically, although you have it in the YAML file, you still need to
specify those conf properties. If, as you said, the goal is to not
specify them in the spark-submit command, you'll likely need to submit
an Improvement ticket to JIRA.

On Tue, Jun 30, 2020 at 5:26 AM Michel Sumbul <michelsum...@gmail.com> wrote:

> Hi Edeesis,
>
> The goal is to not have these settings in the spark-submit command. If I
> specify the same things in a pod template for the executor, I still get
> the message:
>
> "Exception in thread "main" org.apache.spark.SparkException: Must specify
> the driver container image"
>
> It doesn't even try to start an executor container, since the driver is
> not started yet.
> Any idea?
>
> Thanks,
> Michel
>
> On Tue, Jun 30, 2020 at 12:06 AM edeesis <edee...@gmail.com> wrote:
>
>> If I could muster a guess, you still need to specify the executor image.
>> As is, this will only specify the driver image.
>>
>> You can specify it as --conf spark.kubernetes.container.image or --conf
>> spark.kubernetes.executor.container.image