samarthkansal opened a new issue #8736:
URL: https://github.com/apache/airflow/issues/8736


   Hello,
   
   When I try to submit my application through `spark-submit`, I get the error below.
   Please help me resolve the problem.
   
   **_Error:_**
   
   ```
   pod name: newdriver
            namespace: default
            labels: spark-app-selector -> spark-a17960c79886423383797eaa77f9f706, spark-role -> driver
            pod uid: 0afa41ae-4e4c-47be-86a3-1ef77739506c
            creation time: 2020-05-06T14:11:29Z
            service account name: spark
            volumes: spark-local-dir-1, spark-conf-volume, spark-token-tks2g
            node name: minikube
            start time: 2020-05-06T14:11:29Z
            phase: Running
            container status:
                    container name: spark-kubernetes-driver
                    container image: spark-py:v3.0
                    container state: running
                    container started at: 2020-05-06T14:11:31Z
   Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.17.0.2:8443/api/v1/namespaces/default/pods. Message: pods "newtrydriver" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=pods, name=newtrydriver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods "newtrydriver" already exists, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
           at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:510)
           at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:449)
           at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:413)
           at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:372)
           at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:241)
           at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:819)
           at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
           at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:330)
           at org.apache.spark.deploy.k8s.submit.Client.$anonfun$run$2(KubernetesClientApplication.scala:130)
           at org.apache.spark.deploy.k8s.submit.Client.$anonfun$run$2$adapted(KubernetesClientApplication.scala:129)
           at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2539)
           at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:129)
           at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4(KubernetesClientApplication.scala:221)
           at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4$adapted(KubernetesClientApplication.scala:215)
           at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2539)
           at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:215)
           at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:188)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   20/05/06 14:11:34 INFO ShutdownHookManager: Shutdown hook called
   20/05/06 14:11:34 INFO ShutdownHookManager: Deleting directory /tmp/spark-b7ea9c80-6040-460a-ba43-5c6e656d3039
   
   ```
   - Configuration used for submitting the job to Kubernetes:
   
   ```
   ./spark-submit \
       --master k8s://https://172.17.0.2:8443 \
       --deploy-mode cluster \
       --conf spark.executor.instances=3 \
       --conf spark.kubernetes.container.image=spark-py:v3.0 \
       --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
       --name newtry \
       --conf spark.kubernetes.driver.pod.name=newdriver \
       local:///opt/spark/examples/src/main/python/spark-submit-old.py
   ```
   
   - Running Spark on Kubernetes in cluster mode
   - No other pod with the name `newdriver` is running on my minikube
   - I deleted all the pods and restarted minikube, but nothing worked
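   
   For reference, these are the kinds of checks one can run against the cluster before resubmitting. This is only a sketch, assuming `kubectl` is pointed at the same minikube cluster; the pod name `newtrydriver` is taken from the 409 error above, and the `default` namespace is a placeholder for wherever the stale pod is actually found:
   
   ```shell
   # Sketch only -- assumes kubectl targets the same minikube cluster as spark-submit.
   # The 409 AlreadyExists names "newtrydriver", so search every namespace for it,
   # not only "default"; a leftover pod elsewhere still causes the name conflict:
   kubectl get pods --all-namespaces --field-selector metadata.name=newtrydriver
   
   # If a stale driver pod shows up (even in Completed or Error state), delete it.
   # "default" here is a placeholder for the namespace reported by the command above:
   kubectl delete pod newtrydriver --namespace default
   ```
   
   Alternatively, omitting `spark.kubernetes.driver.pod.name` lets Spark generate a unique driver pod name per submission, which avoids this class of conflict.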
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

