Hi,

In the Spark on Kubernetes configuration documentation
<https://spark.apache.org/docs/latest/running-on-kubernetes.html#configuration>,
it states:


spark.kubernetes.container.image, default None, meaning: Container image to
use for the Spark application. This is usually of the form
example.com/repo/spark:v1.0.0. *This configuration is required and must be
provided by the user, unless explicit images are provided for each
different container type.*


I read this as: if you specify both the driver and executor container
images, then you don't need to specify the base container image itself.
However, when both the driver and executor images are provided and NO base
container image is set, the job fails.
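
For reference, the per-container image properties listed on that same page
are spark.kubernetes.driver.container.image and
spark.kubernetes.executor.container.image. A minimal PySpark sketch of that
reading (per-container images set, no base image) could look like the
following; the API server address is a placeholder and the image URI is just
the one from my run:

from pyspark.sql import SparkSession

# Sketch only: set the per-container images and leave
# spark.kubernetes.container.image unset, as the documentation wording
# suggests should be allowed.
image = "eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-container"

spark = (SparkSession.builder
         .appName("k8s-image-test")                    # arbitrary app name
         .master("k8s://https://<k8s-api-server>:443") # placeholder API server
         .config("spark.kubernetes.driver.container.image", image)
         .config("spark.kubernetes.executor.container.image", image)
         .getOrCreate())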


Spark config:

(*spark.kubernetes.driver.docker.image*,
eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-container)

(*spark.kubernetes.executor.docker.image*,
eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-container)

Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
21/12/10 08:24:03 INFO SparkKubernetesClientFactory: Auto-configuring K8S
client using current context from users K8S config file
*Exception in thread "main" org.apache.spark.SparkException: Must specify
the driver container image*

It sounds like, regardless, you still have to specify the container image
explicitly.
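
For completeness, the variant that the exception appears to demand, i.e.
setting the base image explicitly, would be something like this sketch (same
placeholders as above):

from pyspark.sql import SparkSession

# Sketch only: specify the base container image explicitly, which the
# "Must specify the driver container image" error appears to require.
image = "eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-container"

spark = (SparkSession.builder
         .appName("k8s-image-test")                    # arbitrary app name
         .master("k8s://https://<k8s-api-server>:443") # placeholder API server
         .config("spark.kubernetes.container.image", image)
         .getOrCreate())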


HTH


   View my LinkedIn profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.
