> but the spark-submit log is still running

Set the "spark.kubernetes.submission.waitAppCompletion" config to false to
change that. As the doc says:

"spark.kubernetes.submission.waitAppCompletion" : In cluster mode, whether
to wait for the application to finish before exiting the launcher process.
When changed to false, the launcher has a "fire-and-forget" behavior when
launching the Spark job.
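
For example, added to the spark-submit invocation from the original mail
(all other options unchanged):

bin/spark-submit \
--master k8s://https://localhost:9443 \
--deploy-mode cluster \
--conf spark.kubernetes.submission.waitAppCompletion=false \
... (remaining options as before) \
/data/spark-example-1.0.0.jar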

On Thu, Mar 11, 2021 at 10:05 PM Attila Zsolt Piros <
piros.attila.zs...@gmail.com> wrote:

>
> For getting the logs, please read the Accessing Logs
> <https://spark.apache.org/docs/3.1.1/running-on-kubernetes.html#accessing-logs>
> part of the *Running Spark on Kubernetes* page.
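>
> For example, the driver log (and with it the exception) can be fetched with
> kubectl; a sketch, substituting your own namespace and driver pod name:
>
> $ kubectl -n <namespace> logs -f <driver-pod-name>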
>
> For stopping and general management of the Spark application, please read
> the Spark Application Management
> <https://spark.apache.org/docs/3.1.1/running-on-kubernetes.html#spark-application-management>
> section, where you will find the example:
>
> $ spark-submit --kill spark:spark-pi* --master k8s://https://192.168.2.8:8443
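>
> Similarly, the status of the application can be listed with the --status
> flag (same glob-style app name and master URL as in the example above):
>
> $ spark-submit --status spark:spark-pi* --master k8s://https://192.168.2.8:8443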
>
>
>
> On Thu, Mar 11, 2021 at 1:07 PM yxl040840219 <yxl040840...@126.com> wrote:
>
>>
>> When running the code in k8s, the driver pod throws an AnalysisException,
>> but the spark-submit log still shows it as running. How can I get the
>> exception and stop the pods?
>>
>> import org.apache.spark.sql.SparkSession
>> import org.apache.spark.sql.functions.count
>>
>> val spark = SparkSession.builder().getOrCreate()
>> import spark.implicits._
>> val df = (0 until 100000).toDF("id")
>>   .selectExpr("id % 5 as key", "id % 10 as value")
>>   // Note: the column is named "value", not "value1"; this mismatch is
>>   // what makes the driver throw the AnalysisException.
>>   .groupBy("key").agg(count("value1").as("cnt"))
>> df.show()
>> spark.stop()
>>
>> bin/spark-submit \
>> --master k8s://https://localhost:9443 \
>> --deploy-mode cluster \
>> --name wordcount \
>> --class k8s.WordCount \
>> --conf spark.kubernetes.container.image=rspark:v3.1.1 \
>> --conf spark.kubernetes.container.image.pullPolicy=IfNotPresent \
>> --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
>> --conf spark.kubernetes.file.upload.path=hdfs://localhost:8020/data/spark \
>> /data/spark-example-1.0.0.jar
>>
>
