Re: start-history-server.sh doesn't survive system reboot. Recommendation?

2021-12-08 Thread James Yu
The Ops guy would probably be fired if he didn't make sure the container runtime is up 24/7.
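A common way to make the history server itself survive reboots is a systemd unit. This is a sketch only: the unit name, the /opt/spark install path, and the spark service user are assumptions to adapt to your environment.

    # /etc/systemd/system/spark-history-server.service  (hypothetical unit)
    [Unit]
    Description=Spark History Server
    After=network.target

    [Service]
    Type=forking
    User=spark
    ExecStart=/opt/spark/sbin/start-history-server.sh
    ExecStop=/opt/spark/sbin/stop-history-server.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then enable it with: sudo systemctl enable --now spark-history-server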

Re: start-history-server.sh doesn't survive system reboot. Recommendation?

2021-12-08 Thread Mich Talebzadeh
Well, that is just relying on the docker daemon to start after reboot. It may not:

    docker ps
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

    systemctl start docker
    docker ps
    CONTAINER ID   IMAGE   COMMAND   CREATED
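On systemd-based distributions, the usual fix for the daemon not coming up after a reboot is to enable it at boot (a sketch; command names are standard systemctl usage):

    sudo systemctl enable --now docker    # start now, and on every boot
    systemctl is-enabled docker           # verify: should print "enabled"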

Re: start-history-server.sh doesn't survive system reboot. Recommendation?

2021-12-08 Thread James Yu
Just thought about another possibility, which is to containerize the history server and run the container with a proper restart policy. This may be the approach we will be taking, because the deployment of such a history server would be more flexible. Thanks! From: Sean Owen
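A containerized history server with a restart policy might look like the following sketch. The image name spark-history, the host port, and the event-log volume path are all assumptions, not a published image:

    docker run -d \
      --name spark-history \
      --restart unless-stopped \
      -p 18080:18080 \
      -v /var/log/spark-events:/tmp/spark-events \
      spark-history
    # --restart unless-stopped brings the container back after the host
    # reboots, provided the docker daemon itself starts at boot.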

Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Mich Talebzadeh
Thanks Khalid for your notes. I have not come across a use case where the docker version on the driver and executors needs to be different. My thinking is that spark.kubernetes.executor.container.image is the correct reference, as in Kubernetes "container" is the correct terminology, and

Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Khalid Mammadov
Hi Mitch, IMO it's done to provide the most flexibility. Some users can have a limited/restricted version of the image, or one with additional software that is used on the executors during processing. So, in your case you only need to provide the first one, since the other two configs
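In other words, spark.kubernetes.container.image is the fallback for both driver and executor images, and the per-role properties are only needed when the images differ. A minimal sketch, assuming ${IMAGEGCP} is already set as in the original command:

    # Same image for driver and executors: the fallback property is enough.
    spark-submit --verbose \
      --conf spark.kubernetes.container.image=${IMAGEGCP} \
      ...
    # Only if the images must differ:
    #   --conf spark.kubernetes.driver.container.image=...
    #   --conf spark.kubernetes.executor.container.image=...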

Re: docker image distribution in Kubernetes cluster

2021-12-08 Thread Mich Talebzadeh
Just a correction: the Spark 3.2 documentation states:

    Property Name: spark.kubernetes.container.image
    Default: (none)
    Meaning: Container image to use for the Spark application. This is usually of the form

docker image distribution in Kubernetes cluster

2021-12-08 Thread Mich Talebzadeh
Hi, We have three conf parameters to distribute the docker image with spark-submit in a Kubernetes cluster. These are:

    spark-submit --verbose \
      --conf spark.kubernetes.driver.docker.image=${IMAGEGCP} \
      --conf spark.kubernetes.executor.docker.image=${IMAGEGCP} \