ScrapCodes commented on pull request #33257:
URL: https://github.com/apache/spark/pull/33257#issuecomment-888201573


   While testing your PR, I am getting the same error again.
   
   ```
   export HADOOP_CONF_DIR=`pwd`/conf
   
   
   ./bin/spark-submit \
       --master k8s://<IP>:<port> \
       --deploy-mode cluster \
       --name spark-pi \
       --class org.apache.spark.examples.SparkPi \
       --conf spark.executor.instances=2 \
       --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
       --conf spark.kubernetes.container.image=scrapcodes/spark:3.3.0-SNAPSHOT \
       local:///opt/spark/examples/jars/spark-examples_2.12-3.3.0-SNAPSHOT.jar
   ```
   
   Executors are crash looping with:
   
   ```
     Warning  FailedMount  23s (x7 over 55s)  kubelet, 10.240.128.22  MountVolume.SetUp failed for volume "hadoop-properties" : configmap "spark-pi-4c4e757aeca6de9b-hadoop-config" not found
   ```
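   The missing configmap can be confirmed directly (a sketch; the namespace `default` is an assumption, and the configmap name is the one from the event above):
   
   ```shell
   # List the executor pod's events (this is where the FailedMount warning above comes from)
   kubectl describe pod -n default -l spark-role=executor
   
   # Check whether the configmap the volume mount expects actually exists
   kubectl get configmap spark-pi-4c4e757aeca6de9b-hadoop-config -n default
   ```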
   
   This happens because the configmap does not get created in the executor step, and the current code is designed that way. It will work if we use a user-provided configmap.
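   
   For reference, the user-provided-configmap path could look roughly like this (a sketch, assuming the `spark.kubernetes.hadoop.configMapName` conf and an illustrative configmap name `hadoop-conf`):
   
   ```shell
   # Pre-create a configmap from the local Hadoop conf dir,
   # instead of relying on Spark to generate one per submission
   kubectl create configmap hadoop-conf --from-file="$HADOOP_CONF_DIR"
   
   # Then point the submission at it:
   #   --conf spark.kubernetes.hadoop.configMapName=hadoop-conf
   ```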
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


