If I had to guess, it's likely because the Spark code would have to read
the YAML to make sure the required parameters are set, and the way it's
done was just easier to build on without a lot of refactoring.
On Mon, Jul 6, 2020 at 5:06 PM Michel Sumbul wrote:
> Thanks Edward for the reply!
>
> I
Okay, I see what's going on here.
Looks like the way Spark is coded, the driver container image
(specified by --conf spark.kubernetes.driver.container.image) and the
executor container image
(specified by --conf spark.kubernetes.executor.container.image) are required.
If they're not specified it'
Hi Edeesis,
The goal is to not have these settings in the spark-submit command. If I
specify the same things in a pod template for the executor, I still get the
message:
"Exception in thread "main" org.apache.spark.SparkException: Must specify
the driver container image"
It doesn't even try to start
If I had to hazard a guess, you still need to specify the executor image. As
is, this will only specify the driver image.
You can specify it as --conf spark.kubernetes.container.image or --conf
spark.kubernetes.executor.container.image
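A hedged sketch of what that submit command might look like (the master URL, namespace, and image name below are placeholders, not taken from this thread; spark.kubernetes.container.image sets a default image used for both the driver and the executors):

```
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name my-spark-app \
  --conf spark.kubernetes.namespace=default \
  --conf spark.kubernetes.container.image=myrepo/spark:3.0.0 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar
```

Setting the single spark.kubernetes.container.image property avoids having to set the driver and executor images separately, unless you actually need them to differ.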
Hello,
Adding the dev mailing list; maybe there is someone here who can help by
showing a valid/accepted pod template for Spark 3?
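For reference, a minimal executor pod template might look like the sketch below (the file name, container name, and image are placeholder assumptions, not confirmed in this thread). Note that, as reported above, Spark 3.0 still appears to require the container image to be set via --conf; an image in the template alone does not satisfy the "Must specify the driver container image" check:

```yaml
# Hypothetical executor-template.yaml, passed via
# --conf spark.kubernetes.executor.podTemplateFile=/path/to/executor-template.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-executor        # placeholder; Spark patches the template's container
      image: myrepo/spark:3.0.0   # placeholder image
      resources:
        requests:
          memory: "2Gi"
          cpu: "1"
```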
Thanks in advance,
Michel
On Fri, Jun 26, 2020 at 14:03, Michel Sumbul
wrote:
> Hi Jorge,
> If I set that in the spark submit command it works but I want it o