Re: Conda Python Env in K8S

2021-12-24 Thread Hyukjin Kwon
Can you share the logs, settings, environment, etc., and file a JIRA? There
are integration test cases for the K8S support, and I have also tested it
myself before.
It would be helpful if you try what I did at
https://databricks.com/blog/2020/12/22/how-to-manage-python-dependencies-in-pyspark.html
and see if it works.
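
For reference, the workflow that blog post walks through is roughly the
following (a sketch only; the environment name, the package list and the
application file are placeholders, and it assumes Spark 3.1+ with conda-pack
available):

    # build a relocatable conda environment and pack it into an archive
    conda create -y -n pyspark_conda_env -c conda-forge numpy conda-pack
    conda activate pyspark_conda_env
    conda pack -f -o pyspark_conda_env.tar.gz

    # ship the archive and point the Python workers at it
    export PYSPARK_DRIVER_PYTHON=python   # do not set this in cluster mode
    export PYSPARK_PYTHON=./environment/bin/python
    spark-submit --archives pyspark_conda_env.tar.gz#environment app.py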


Re: Conda Python Env in K8S

2021-12-06 Thread Mich Talebzadeh
Hi Meikel,

Well, the short answer is it is what it is, for one reason or another. If
someone else has managed to make it work, I will no doubt be delighted to
hear it. Until then I prefer the built-in docker image.

Also, by centralising this in the docker image, it will be available if a
node fails and k8s has to create the executors again.

HTH





   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.





RE: Conda Python Env in K8S

2021-12-06 Thread Bode, Meikel, NMA-CFD
Hi Mich,

Thanks for your response. Yes, the --py-files option works; I also tested it.
The question is why the --archives option doesn't.

From Jira I can see that it should be available since 3.1.0:

https://issues.apache.org/jira/browse/SPARK-33530
https://issues.apache.org/jira/browse/SPARK-33615
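
For reference, a rough sketch of how those tickets are meant to be used on
K8s (the master URL, image name and archive path below are placeholders, not
tested settings):

    spark-submit \
      --master k8s://https://<k8s-apiserver>:443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.container.image=<your spark-py image> \
      --conf spark.pyspark.python=./environment/bin/python \
      --archives hdfs:///path/pyspark_venv.tar.gz#environment \
      local:///path/script.py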

Best,
Meikel



Re: Conda Python Env in K8S

2021-12-04 Thread Mich Talebzadeh
Hi Meikel


In the past I tried with


   --py-files hdfs://$HDFS_HOST:$HDFS_PORT/minikube/codes/DSBQ.zip \
   --archives hdfs://$HDFS_HOST:$HDFS_PORT/minikube/codes/pyspark_venv.zip#pyspark_venv \


which is basically what you are doing. The first line (--py-files) works but
the second one (--archives) fails.


It tries to unpack the archive:


Unpacking an archive hdfs://50.140.197.220:9000/minikube/codes/pyspark_venv.zip#pyspark_venv
from /tmp/spark-502a5b57-0fe6-45bd-867d-9738e678e9a3/pyspark_venv.zip
to /opt/spark/work-dir/./pyspark_venv


But it failed.


This could be due to the virtual environment being created inside the Docker
container in the work-dir, or to there not being enough available memory to
gunzip and untar the file, especially if your executors run on
cluster nodes with less memory than the driver node.
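
If memory during the unpack is the suspect, one knob to experiment with (just
a guess, not a verified fix) is the executor memory overhead, for example:

    --conf spark.executor.memoryOverhead=1g \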


However, the most convenient way to add additional packages is to add them
directly to the docker image at the time the image is created. So external
packages are bundled as part of my docker image, because the set is fixed
and, if an application requires those dependencies every time, they are
already there. Also note that every RUN statement creates an intermediate
layer and hence increases build time, so reduce the number of RUN statements
as follows:

RUN pip install pyyaml numpy cx_Oracle --no-cache-dir

The --no-cache-dir option tells pip not to keep the downloaded packages in
its cache, reducing the image size. It is also advisable to install all
packages in a single line: as noted above, every RUN statement creates an
intermediate layer and increases the build time, so putting all packages in
one RUN reduces it.
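
Putting that together, a minimal Dockerfile would look roughly like this (a
sketch only; the base image tag is hypothetical and the non-root UID 185 of
the stock Spark images is an assumption, so adjust both to your own build):

    # hypothetical tag for an image built with bin/docker-image-tool.sh
    FROM spark-py:3.1.1
    USER root
    # one RUN, one layer; --no-cache-dir keeps pip's cache out of the image
    RUN pip install --no-cache-dir pyyaml numpy cx_Oracle
    # drop back to the non-root user the stock image runs as (assumed)
    USER 185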

Start a container from the image and check the Python packages installed:

docker run -u 0 -it spark/spark-py:3.1.1-scala_2.12-8-jre-slim-buster_java8PlusPackages bash

root@5bc049af7278:/opt/spark/work-dir# pip list
Package    Version
---------- -------
cx-Oracle  8.3.0
numpy      1.21.4
pip        21.3.1
PyYAML     6.0
setuptools 59.4.0
wheel      0.34.2

HTH

   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.





RE: Conda Python Env in K8S

2021-12-03 Thread Bode, Meikel, NMA-CFD
Hi Mich,

Sure, that's possible. But distributing the complete env would be more practical.
A workaround at the moment is that we build the different environments, store
them in a PV, mount it into the pods, and refer to the desired env from the
SparkApplication resource.
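
For the record, that PV workaround looks roughly like this in the
SparkApplication spec (a sketch with made-up names; volumes plus per-role
volumeMounts are the operator's usual way of mounting a PVC into the driver
and executor pods):

  volumes:
    - name: python-envs
      persistentVolumeClaim:
        claimName: python-envs-pvc      # hypothetical PVC name
  driver:
    volumeMounts:
      - name: python-envs
        mountPath: /envs
  executor:
    volumeMounts:
      - name: python-envs
        mountPath: /envs
  sparkConf:
    "spark.pyspark.python": "/envs/myenv/bin/python"   # assumed env layout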

But these options do actually exist, and I want to understand what the issue is...
Any hints on that?
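
One way to narrow it down (pod names and namespace below are placeholders) is
to look at an executor pod directly and check whether the archive ever arrives
and gets unpacked there:

    # executor pods of the running application
    kubectl get pods -n <namespace> -l spark-role=executor

    # look for the "Unpacking an archive ..." message in an executor log
    kubectl logs <executor-pod> -n <namespace> | grep -i archive

    # check whether the environment exists in the executor's work dir
    kubectl exec -it <executor-pod> -n <namespace> -- ls -la environment/bin/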

Best,
Meikel


Re: Conda Python Env in K8S

2021-12-03 Thread Mich Talebzadeh
Build python packages into the docker image itself first with pip install

RUN pip install pandas ... --no-cache-dir

HTH

On Fri, 3 Dec 2021 at 11:58, Bode, Meikel, NMA-CFD <
meikel.b...@bertelsmann.de> wrote:

> Hello,
>
>
>
> I am trying to run spark jobs using Spark Kubernetes Operator.
>
> But when I try to bundle a conda python environment using the following
> resource description, the python interpreter is only unpacked on the driver
> and not on the executors.
>
>
>
> apiVersion: "sparkoperator.k8s.io/v1beta2"
>
> kind: SparkApplication
>
> metadata:
>
>   name: …
>
> spec:
>
>   type: Python
>
>   pythonVersion: "3"
>
>   mode: cluster
>
>   mainApplicationFile: local:///path/script.py
>
> ..
>
>   sparkConf:
>
> "spark.archives": "local:///path/conda-env.tar.gz#environment"
>
> "spark.pyspark.python": "./environment/bin/python"
>
> "spark.pyspark.driver.python": "./environment/bin/python"
>
>
>
>
>
> The driver unpacks the archive and the python script gets executed.
>
> On executors there is no log message indicating that the archive gets
> unpacked.
>
> Executors then fail as they can't find the python executable at the given
> location "./environment/bin/python".
>
>
>
> Any hint?
>
>
>
> Best,
>
> Meikel
>
-- 



   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.