Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Prasad Paravatha
I agree with Khalid and Rob. We absolutely need different properties for
Driver and Executor images for ML use-cases.

Here is a real-world example of the setup at our company:

   - Default setup via configmaps: When our Data scientists request Spark
   on k8s clusters (they are not familiar with Docker or k8s), we inject
   default Spark Driver/Executor images (and a whole lot of other default
   properties).
   - Our ML Engineers frequently build new Driver and Executor images to
   include new experimental ML libraries/packages, then test and release them
   to the wider Data scientist community (a hedged example of this kind of
   setup is sketched below).
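For illustration only, a minimal spark-submit sketch of how separate driver
and executor images could be supplied; the image names, registry and
application path are hypothetical placeholders, not our actual setup:

spark-submit --verbose \
  --master k8s://https://<k8s-api-server>:443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.driver.container.image=example.com/repo/spark-driver:v1.0.0 \
  --conf spark.kubernetes.executor.container.image=example.com/repo/spark-executor-ml:v1.0.0 \
  local:///opt/spark/examples/src/main/python/pi.py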

Regards,
Prasad

Hadoop profile change to hadoop-2 and hadoop-3 since Spark 3.3

2021-12-08 Thread angers zhu
Hi all,

Since Spark 3.2, we have supported Hadoop 3.3.1, but the corresponding profile
names are *hadoop-3.2* (and *hadoop-2.7*), which no longer match the versions
they build against.
So we made a change in https://github.com/apache/spark/pull/34715
Starting from Spark 3.3, we use the Hadoop profiles *hadoop-2* and *hadoop-3*,
and the default Hadoop profile is hadoop-3.
Profile changes

*hadoop-2.7* changed to *hadoop-2*
*hadoop-3.2* changed to *hadoop-3*
Release tar file

Spark-3.3.0 with profile hadoop-3: *spark-3.3.0-bin-hadoop3.tgz*
Spark-3.3.0 with profile hadoop-2: *spark-3.3.0-bin-hadoop2.tgz*

For Spark 3.2.0, the release tar file was, for example,
*spark-3.2.0-bin-hadoop3.2.tgz*.
Pip install option changes

To install PySpark with (or without) a specific Hadoop version, use the
PYSPARK_HADOOP_VERSION environment variable as below. For Hadoop 3:

PYSPARK_HADOOP_VERSION=3 pip install pyspark

For Hadoop 2:

PYSPARK_HADOOP_VERSION=2 pip install pyspark

Supported values in PYSPARK_HADOOP_VERSION are now:

   - without: Spark pre-built with user-provided Apache Hadoop
   - 2: Spark pre-built for Apache Hadoop 2.
   - 3: Spark pre-built for Apache Hadoop 3.3 and later (default)
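For example, to install the build that expects a user-provided Hadoop, based
on the values listed above:

PYSPARK_HADOOP_VERSION=without pip install pyspark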

Building Spark and Specifying the Hadoop Version


You can specify the exact version of Hadoop to compile against through the
hadoop.version property.
Example:

./build/mvn -Pyarn -Dhadoop.version=3.3.0 -DskipTests clean package

or you can specify the *hadoop-3* profile explicitly:

./build/mvn -Pyarn -Phadoop-3 -Dhadoop.version=3.3.0 -DskipTests clean package

If you want to build with Hadoop 2.x, enable the *hadoop-2* profile:

./build/mvn -Phadoop-2 -Pyarn -Dhadoop.version=2.8.5 -DskipTests clean package

Notes

On the current master, the build will fall back to the default Hadoop 3 if you
continue to use -Phadoop-2.7 or -Phadoop-3.2 to build Spark,
because Maven and SBT simply warn about and then ignore these now non-existent
profiles. Please change your profiles to -Phadoop-2 or -Phadoop-3.
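For reference, the SBT build is expected to accept the same Maven-style
profile flags as above; the following is a sketch rather than part of the
original change, so treat it as an assumption:

./build/sbt -Pyarn -Phadoop-3 -Dhadoop.version=3.3.0 clean package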


Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Mich Talebzadeh
Fine. If I go back to the list itself:


spark.kubernetes.container.image
  Default: (none)
  Meaning: Container image to use for the Spark application. This is usually
  of the form example.com/repo/spark:v1.0.0. This configuration is required
  and must be provided by the user, unless explicit images are provided for
  each different container type.
  Since: 2.3.0

spark.kubernetes.driver.container.image
  Default: (value of spark.kubernetes.container.image)
  Meaning: Custom container image to use for the driver.
  Since: 2.3.0

spark.kubernetes.executor.container.image
  Default: (value of spark.kubernetes.container.image)
  Meaning: Custom container image to use for executors.
  Since: 2.3.0

If I specify *both* the driver and executor images, then there is no need
for the generic container image; it will be ignored. So one either
specifies the driver AND executor images explicitly and leaves out the
container image, or

specifies only one of the driver *or* executor images explicitly and then
has to set the container image as well, so the default can fill in the other
one. A bit convoluted. A rough sketch of this second style follows.
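For illustration, a minimal sketch of that second style, where only the
executor image is overridden and the driver falls back to the generic
container image (the image names and application path are hypothetical):

spark-submit \
  --master k8s://https://<k8s-api-server>:443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=example.com/repo/spark:v1.0.0 \
  --conf spark.kubernetes.executor.container.image=example.com/repo/spark-executor:v1.0.0 \
  local:///opt/spark/examples/src/main/python/pi.py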


cheers


   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Rob Vesse
So the point Khalid was trying to make is that there are legitimate reasons you 
might use different container images for the driver pod vs the executor pod.  
It has nothing to do with Docker versions.

 

Since the bulk of the actual work happens on the executors, you may want 
additional libraries, tools or software in that image that your job code can 
call.  This same software may be entirely unnecessary on the driver, allowing 
you to use a smaller image for the driver than for the executors.

 

As a practical example, for an ML use case you might want to have the optional 
Intel MKL or OpenBLAS dependencies, which can significantly bloat the size of 
your container image (by hundreds of megabytes) and would only be needed by the 
executor pods. A hedged sketch of such an executor image is given below.
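Purely as an illustration of that idea, a hypothetical executor Dockerfile
that layers OpenBLAS on top of a shared Spark base image; the base image name
and package choice are assumptions, not something taken from this thread:

# Hypothetical executor image; the driver keeps using the slim base image
FROM example.com/repo/spark-py:v1.0.0

# Install OpenBLAS only in the executor image, since only executors need it
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends libopenblas-dev && \
    rm -rf /var/lib/apt/lists/*

# Switch back to the non-root Spark user (UID 185 in the stock Spark images)
USER 185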

 

Rob

 


Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Mich Talebzadeh
Thanks Khalid for your notes


I have not come across a use case where the docker version on the driver
and executors needs to be different.


My thinking is that spark.kubernetes.executor.container.image is the
correct reference, since in Kubernetes "container" is the correct
terminology, and both the driver and executors are Spark specific.


cheers



   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.






Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Khalid Mammadov
Hi Mitch

IMO, it's done to provide the most flexibility. So, some users can have a
limited/restricted version of the image, or one with some additional software
that they use on the executors during processing.

So, in your case you only need to provide the first one, since the other two
configs will be copied from it (see the sketch below).
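For illustration, a minimal sketch with only the generic image set (the image
name and application path are hypothetical); the driver and executor images
then default to it:

spark-submit \
  --master k8s://https://<k8s-api-server>:443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=example.com/repo/spark:v1.0.0 \
  local:///opt/spark/examples/src/main/python/pi.py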

Regards
Khalid



Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Mich Talebzadeh
Just a correction: the Spark 3.2 documentation states that

spark.kubernetes.container.image
  Default: (none)
  Meaning: Container image to use for the Spark application. This is usually
  of the form example.com/repo/spark:v1.0.0. This configuration is required
  and must be provided by the user, unless explicit images are provided for
  each different container type.
  Since: 2.3.0

spark.kubernetes.driver.container.image
  Default: (value of spark.kubernetes.container.image)
  Meaning: Custom container image to use for the driver.
  Since: 2.3.0

spark.kubernetes.executor.container.image
  Default: (value of spark.kubernetes.container.image)
  Meaning: Custom container image to use for executors.
  Since: 2.3.0

So both the driver and executor images default to the container image. In my
opinion, they are redundant and will potentially add confusion, so should
they be removed?


   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 8 Dec 2021 at 10:15, Mich Talebzadeh 
wrote:

> Hi,
>
> We have three conf parameters to distribute the docker image with
> spark-submit in a Kubernetes cluster.
>
> These are
>
> spark-submit --verbose \
>   --conf spark.kubernetes.driver.docker.image=${IMAGEGCP} \
>   --conf spark.kubernetes.executor.docker.image=${IMAGEGCP} \
>   --conf spark.kubernetes.container.image=${IMAGEGCP} \
>
> when the above is run, it shows
>
> (spark.kubernetes.driver.docker.image,eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-addedpackages)
> (spark.kubernetes.executor.docker.image,eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-addedpackages)
> (spark.kubernetes.container.image,eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-addedpackages)
>
> You will notice that I am using the same docker image for the driver,
> executor and container. In the Spark 3.2 documentation (actually in recent
> Spark versions), I cannot see any reference to the driver or executor
> properties. Are these deprecated? It appears that Spark still accepts them?
>
> Thanks
>
>
>
>view my Linkedin profile
> 
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
>
>
>
>