This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/spark-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
     new 736d256  [SPARK-52292] Use `super-linter` for markdown files
736d256 is described below

commit 736d256a86d79d6b087f03ef2ff76250b47efe10
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Sat May 24 09:30:21 2025 -0700

    [SPARK-52292] Use `super-linter` for markdown files
    
    ### What changes were proposed in this pull request?
    
    This PR aims to apply `super-linter` for markdown files.
    
    ### Why are the changes needed?
    
    For consistency.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Pass the CIs with the newly added `super-linter` test.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #224 from dongjoon-hyun/SPARK-52292.
    
    Authored-by: Dongjoon Hyun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 .github/workflows/build_and_test.yml |  8 ++++++
 .markdownlint.yaml                   | 18 ++++++++++++++
 .markdownlintignore                  | 18 ++++++++++++++
 README.md                            | 37 +++++++++++++--------------
 docs/architecture.md                 | 38 ++++++++++++++--------------
 docs/configuration.md                | 34 ++++++++++++-------------
 docs/operations.md                   | 12 +++++----
 docs/spark_custom_resources.md       | 48 +++++++++++++++---------------------
 8 files changed, 126 insertions(+), 87 deletions(-)
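
To try the new check locally before pushing, the markdown validation can roughly be reproduced with super-linter's documented local-run mode (a minimal sketch; the image tag and mount path below are assumptions, not part of this commit):

    # Run only the markdown validation over the current checkout.
    docker run --rm \
      -e RUN_LOCAL=true \
      -e VALIDATE_MARKDOWN=true \
      -e DEFAULT_BRANCH=main \
      -v "$(pwd)":/tmp/lint \
      ghcr.io/super-linter/super-linter:latest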

diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml
index ee9f474..de95b82 100644
--- a/.github/workflows/build_and_test.yml
+++ b/.github/workflows/build_and_test.yml
@@ -147,6 +147,14 @@ jobs:
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Super-Linter
+        uses: super-linter/super-linter@12150456a73e248bdc94d0794898f94e23127c88
+        env:
+          DEFAULT_BRANCH: main
+          VALIDATE_MARKDOWN: true
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       - name: Set up JDK 17
         uses: actions/setup-java@v4
         with:
diff --git a/.markdownlint.yaml b/.markdownlint.yaml
new file mode 100644
index 0000000..11c7a48
--- /dev/null
+++ b/.markdownlint.yaml
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+MD013: false
diff --git a/.markdownlintignore b/.markdownlintignore
new file mode 100644
index 0000000..8169fbb
--- /dev/null
+++ b/.markdownlintignore
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+docs/config_properties.md
diff --git a/README.md b/README.md
index d2af819..b998b38 100644
--- a/README.md
+++ b/README.md
@@ -12,13 +12,14 @@ aims to extend K8s resource manager to manage Apache Spark applications via
 ## Install Helm Chart
 
 Apache Spark provides a Helm Chart.
+
 - <https://apache.github.io/spark-kubernetes-operator/>
- <https://artifacthub.io/packages/helm/spark-kubernetes-operator/spark-kubernetes-operator/>
 
-```
-$ helm repo add spark-kubernetes-operator https://apache.github.io/spark-kubernetes-operator
-$ helm repo update
-$ helm install spark-kubernetes-operator spark-kubernetes-operator/spark-kubernetes-operator
+```bash
+helm repo add spark-kubernetes-operator https://apache.github.io/spark-kubernetes-operator
+helm repo update
+helm install spark-kubernetes-operator spark-kubernetes-operator/spark-kubernetes-operator
 ```
 
 ## Building Spark K8s Operator
@@ -27,25 +28,25 @@ Spark K8s Operator is built using Gradle.
 To build, run:
 
 ```bash
-$ ./gradlew build -x test
+./gradlew build -x test
 ```
 
 ## Running Tests
 
 ```bash
-$ ./gradlew build
+./gradlew build
 ```
 
 ## Build Docker Image
 
 ```bash
-$ ./gradlew buildDockerImage
+./gradlew buildDockerImage
 ```
 
-## Install Helm Chart
+## Install Helm Chart from the source code
 
 ```bash
-$ helm install spark -f build-tools/helm/spark-kubernetes-operator/values.yaml build-tools/helm/spark-kubernetes-operator/
+helm install spark -f build-tools/helm/spark-kubernetes-operator/values.yaml build-tools/helm/spark-kubernetes-operator/
 ```
 
 ## Run Spark Pi App
@@ -97,14 +98,14 @@ sparkcluster.spark.apache.org "prod" deleted
 
 ## Run Spark Pi App on Apache YuniKorn scheduler
 
-If you have not yet done so, follow [YuniKorn docs](https://yunikorn.apache.org/docs/#install) to install the latest version: 
+If you have not yet done so, follow [YuniKorn docs](https://yunikorn.apache.org/docs/#install) to install the latest version:
 
 ```bash
-$ helm repo add yunikorn https://apache.github.io/yunikorn-release
+helm repo add yunikorn https://apache.github.io/yunikorn-release
 
-$ helm repo update
+helm repo update
 
-$ helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.6.3 --create-namespace --set embedAdmissionController=false
+helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.6.3 --create-namespace --set embedAdmissionController=false
 ```
 
 Submit a Spark app to YuniKorn enabled cluster:
@@ -134,7 +135,7 @@ sparkapplication.spark.apache.org "pi-on-yunikorn" deleted
 
 Check the existing Spark applications and clusters. If exists, delete them.
 
-```
+```bash
 $ kubectl get sparkapp
 No resources found in default namespace.
 
@@ -144,12 +145,12 @@ No resources found in default namespace.
 
 Remove HelmChart and CRDs.
 
-```
-$ helm uninstall spark-kubernetes-operator
+```bash
+helm uninstall spark-kubernetes-operator
 
-$ kubectl delete crd sparkapplications.spark.apache.org
+kubectl delete crd sparkapplications.spark.apache.org
 
-$ kubectl delete crd sparkclusters.spark.apache.org
+kubectl delete crd sparkclusters.spark.apache.org
 ```
 
 ## Contributing
diff --git a/docs/architecture.md b/docs/architecture.md
index 0539355..040db9f 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -23,10 +23,10 @@ under the License.
 deployment lifecycle of Spark applications and clusters. The Operator can be installed on Kubernetes
 cluster(s) using Helm. In most production environments it is typically deployed in a designated
 namespace and controls Spark workload in one or more managed namespaces.
-Spark Operator enables user to describe Spark application(s) or cluster(s) as 
-[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
 
+Spark Operator enables user to describe Spark application(s) or cluster(s) as
+[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
 
-The Operator continuously tracks events related to the Spark custom resources in its reconciliation 
+The Operator continuously tracks events related to the Spark custom resources in its reconciliation
 loops:
 
 For SparkApplications:
@@ -43,39 +43,39 @@ For SparkClusters:
 * Operator releases all Spark-cluster owned resources to cluster upon failure
 
 The Operator is built with the [Java Operator SDK](https://javaoperatorsdk.io/) for
-launching Spark deployments and submitting jobs under the hood. It also uses 
+launching Spark deployments and submitting jobs under the hood. It also uses
 [fabric8](https://fabric8.io/) client to interact with Kubernetes API Server.
 
 ## Application State Transition
 
-[<img src="resources/application_state_machine.png">](resources/application_state_machine.png)
+[![Application State Transition](resources/application_state_machine.png)](resources/application_state_machine.png)
 
 * Spark applications are expected to run from submitted to succeeded before releasing resources
 * User may configure the app CR to time-out after given threshold of time if it cannot reach healthy
-  state after given threshold. The timeout can be configured for different lifecycle stages, 
+  state after given threshold. The timeout can be configured for different lifecycle stages,
   when driver starting and when requesting executor pods. To update the default threshold,  
-  configure `.spec.applicationTolerations.applicationTimeoutConfig` for the application.        
+  configure `.spec.applicationTolerations.applicationTimeoutConfig` for the application.
-* K8s resources created for an application would be deleted as the final stage of the application 
+* K8s resources created for an application would be deleted as the final stage of the application
   lifecycle by default. This is to ensure resource quota release for completed applications.  
-* It is also possible to retain the created k8s resources for debug or audit purpose. To do so,   
-  user may set `.spec.applicationTolerations.resourceRetainPolicy` to `OnFailure` to retain 
-  resources upon application failure, or set to `Always` to retain resources regardless of 
+* It is also possible to retain the created k8s resources for debug or audit purpose. To do so,
+  user may set `.spec.applicationTolerations.resourceRetainPolicy` to `OnFailure` to retain
+  resources upon application failure, or set to `Always` to retain resources regardless of
   application final state.
-    - This controls the behavior of k8s resources created by Operator for the application, including
-      driver pod, config map, service, and PVC(if enabled). This does not apply to resources created 
+  * This controls the behavior of k8s resources created by Operator for the application, including
+      driver pod, config map, service, and PVC(if enabled). This does not apply to resources created
       by driver (for example, executor pods). User may configure SparkConf to
-      include `spark.kubernetes.executor.deleteOnTermination` for executor retention. Please refer 
+      include `spark.kubernetes.executor.deleteOnTermination` for executor retention. Please refer
       [Spark docs](https://spark.apache.org/docs/latest/running-on-kubernetes.html) for details.
-    - The created k8s resources have `ownerReference` to their related `SparkApplication` custom
+  * The created k8s resources have `ownerReference` to their related `SparkApplication` custom
       resource, such that they could be garbage collected when the `SparkApplication` is deleted.
-    - Please be advised that k8s resources would not be retained if the application is configured to
+  * Please be advised that k8s resources would not be retained if the application is configured to
-      restart. This is to avoid resource quota usage increase unexpectedly or resource conflicts 
+      restart. This is to avoid resource quota usage increase unexpectedly or resource conflicts
       among multiple attempts.
 
 ## Cluster State Transition
 
-[<img src="resources/cluster_state_machine.png">](resources/application_state_machine.png)
+[![Cluster State Transition](resources/application_state_machine.png)](resources/application_state_machine.png)
 
 * Spark clusters are expected to be always running after submitted.
-* Similar to Spark applications, K8s resources created for a cluster would be deleted as the final 
+* Similar to Spark applications, K8s resources created for a cluster would be deleted as the final
   stage of the cluster lifecycle by default.
diff --git a/docs/configuration.md b/docs/configuration.md
index bafd3c5..d13a82c 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -29,20 +29,20 @@ Spark Operator supports different ways to configure the behavior:
   files](../build-tools/helm/spark-kubernetes-operator/values.yaml).
 * **System Properties** : when provided as system properties (e.g. via -D options to the
   operator JVM), it overrides the values provided in property file.
-* **Hot property loading** : when enabled, a 
-  [configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) would be created with 
-  the operator in the same namespace. Operator can monitor updates performed on the configmap. Hot 
+* **Hot property loading** : when enabled, a
+  [configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) would be created with
+  the operator in the same namespace. Operator can monitor updates performed on the configmap. Hot
   properties reloading takes higher precedence comparing with default properties override.
-    - An example use case: operator use hot properties to figure the list of namespace(s) to
+  * An example use case: operator use hot properties to figure the list of namespace(s) to
       operate Spark applications. The hot properties config map can be updated and
       maintained by user or additional microservice to tune the operator behavior without
       rebooting it.
-    - Please be advised that not all properties can be hot-loaded and honored at runtime.
+  * Please be advised that not all properties can be hot-loaded and honored at runtime.
       Refer the list of [supported properties](./config_properties.md) for more details.
 
 To enable hot properties loading, update the **helm chart values file** with
 
-```
+```yaml
 operatorConfiguration:
   spark-operator.properties: |+
     spark.operator.dynamic.config.enabled=true
@@ -60,18 +60,18 @@ the [Dropwizard Metrics Library](https://metrics.dropwizard.io/4.2.25/). Note th
 does not have Spark UI, MetricsServlet
 and PrometheusServlet from org.apache.spark.metrics.sink package are not supported. If you are
 interested in Prometheus metrics exporting, please take a look at below
-section [Forward Metrics to Prometheus](#Forward-Metrics-to-Prometheus)
+section [Forward Metrics to Prometheus](#forward-metrics-to-prometheus)
 
 ### JVM Metrics
 
 Spark Operator collects JVM metrics
 via [Codahale JVM Metrics](https://javadoc.io/doc/com.codahale.metrics/metrics-jvm/latest/index.html)
 
-- BufferPoolMetricSet
-- FileDescriptorRatioGauge
-- GarbageCollectorMetricSet
-- MemoryUsageGaugeSet
-- ThreadStatesGaugeSet
+* BufferPoolMetricSet
+* FileDescriptorRatioGauge
+* GarbageCollectorMetricSet
+* MemoryUsageGaugeSet
+* ThreadStatesGaugeSet
 
 ### Kubernetes Client Metrics
 
@@ -81,15 +81,15 @@ via [Codahale JVM Metrics](https://javadoc.io/doc/com.codahale.metrics/metrics-j
 | kubernetes.client.http.response                           | Meter      | Tracking the rates of HTTP response from the Kubernetes API Server                                                       |
 | kubernetes.client.http.response.failed                    | Meter      | Tracking the rates of HTTP requests which have no response from the Kubernetes API Server                                |
 | kubernetes.client.http.response.latency.nanos             | Histograms | Measures the statistical distribution of HTTP response latency from the Kubernetes API Server                            |
-| kubernetes.client.http.response.<ResponseCode>            | Meter      | Tracking the rates of HTTP response based on response code from the Kubernetes API Server                                |
-| kubernetes.client.http.request.<RequestMethod>            | Meter      | Tracking the rates of HTTP request based type of method to the Kubernetes API Server                                     |
+| kubernetes.client.http.response.`ResponseCode`            | Meter      | Tracking the rates of HTTP response based on response code from the Kubernetes API Server                                |
+| kubernetes.client.http.request.`RequestMethod`            | Meter      | Tracking the rates of HTTP request based type of method to the Kubernetes API Server                                     |
 | kubernetes.client.http.response.1xx                       | Meter      | Tracking the rates of HTTP Code 1xx responses (informational) received from the Kubernetes API Server per response code. |
 | kubernetes.client.http.response.2xx                       | Meter      | Tracking the rates of HTTP Code 2xx responses (success) received from the Kubernetes API Server per response code.       |
 | kubernetes.client.http.response.3xx                       | Meter      | Tracking the rates of HTTP Code 3xx responses (redirection) received from the Kubernetes API Server per response code.   |
 | kubernetes.client.http.response.4xx                       | Meter      | Tracking the rates of HTTP Code 4xx responses (client error) received from the Kubernetes API Server per response code.  |
 | kubernetes.client.http.response.5xx                       | Meter      | Tracking the rates of HTTP Code 5xx responses (server error) received from the Kubernetes API Server per response code.  |
-| kubernetes.client.<ResourceName>.<Method>                 | Meter      | Tracking the rates of HTTP request for a combination of one Kubernetes resource and one http method                      |
-| kubernetes.client.<NamespaceName>.<ResourceName>.<Method> | Meter      | Tracking the rates of HTTP request for a combination of one namespace-scoped Kubernetes resource and one http method     |
+| kubernetes.client.`ResourceName`.`Method`                 | Meter      | Tracking the rates of HTTP request for a combination of one Kubernetes resource and one http method                      |
+| kubernetes.client.`NamespaceName`.`ResourceName`.`Method` | Meter      | Tracking the rates of HTTP request for a combination of one namespace-scoped Kubernetes resource and one http method     |
 
 ### Forward Metrics to Prometheus
 
@@ -141,4 +141,4 @@ kubectl port-forward --address 0.0.0.0 pod/prometheus-server-654bc74fc9-8hgkb  8
 
 open your browser with address `localhost:8080`. Click on Status Targets tab, you should be able
 to find target as below.
-[<img src="resources/prometheus.png">](resources/prometheus.png)
+[![Prometheus](resources/prometheus.png)](resources/prometheus.png)
diff --git a/docs/operations.md b/docs/operations.md
index 80fba14..f16fa4a 100644
--- a/docs/operations.md
+++ b/docs/operations.md
@@ -17,15 +17,17 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-### Compatibility
+# Operations
+
+## Compatibility
 
 - Java 17, 21 and 24
 - Kubernetes version compatibility:
-    + k8s version >= 1.30 is recommended. Operator attempts to be API compatible as possible, but
+  - k8s version >= 1.30 is recommended. Operator attempts to be API compatible as possible, but
       patch support will not be performed on k8s versions that reached EOL.
 - Spark versions 3.5 or above.
 
-### Spark Application Namespaces
+## Spark Application Namespaces
 
 By default, Spark applications are created in the same namespace as the operator deployment.
 You many also configure the chart deployment to add necessary RBAC resources for
@@ -38,7 +40,7 @@ in `values.yaml`) for the Helm chart.
 
 To override single parameters you can use `--set`, for example:
 
-```
+```bash
 helm install --set image.repository=<my_registory>/spark-kubernetes-operator \
    -f build-tools/helm/spark-kubernetes-operator/values.yaml \
   build-tools/helm/spark-kubernetes-operator/
@@ -47,7 +49,7 @@ helm install --set image.repository=<my_registory>/spark-kubernetes-operator \
 You can also provide multiple custom values file by using the `-f` flag, the latest takes
 higher precedence:
 
-```
+```bash
 helm install spark-kubernetes-operator \
    -f build-tools/helm/spark-kubernetes-operator/values.yaml \
    -f my_values.yaml \
diff --git a/docs/spark_custom_resources.md b/docs/spark_custom_resources.md
index 62f1ff6..9e34dbe 100644
--- a/docs/spark_custom_resources.md
+++ b/docs/spark_custom_resources.md
@@ -17,21 +17,21 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-## Spark Operator API
+# Spark Operator API
 
-The core user facing API of the Spark Kubernetes Operator is the `SparkApplication` and 
-`SparkCluster` Custom Resources Definition (CRD). Spark custom resource extends 
+The core user facing API of the Spark Kubernetes Operator is the `SparkApplication` and
+`SparkCluster` Custom Resources Definition (CRD). Spark custom resource extends
 standard k8s API, defines Spark Application spec and tracks status.
 
 Once the Spark Operator is installed and running in your Kubernetes environment, it will
-continuously watch SparkApplication(s) and SparkCluster(s) submitted, via k8s API client or 
+continuously watch SparkApplication(s) and SparkCluster(s) submitted, via k8s API client or
 kubectl by the user, orchestrate secondary resources (pods, configmaps .etc).
 
 Please check out the [quickstart](../README.md) as well for installing operator.
 
 ## SparkApplication
 
-SparkApplication can be defined in YAML format. User may configure the application entrypoint 
+SparkApplication can be defined in YAML format. User may configure the application entrypoint
 and configurations. Let's start with the [Spark-Pi example](../examples/pi.yaml):
 
 ```yaml
@@ -59,7 +59,7 @@ spec:
 After application is submitted, Operator will add status information to your application based on
 the observed state:
 
-```
+```bash
 kubectl get sparkapp pi -o yaml
 ```
 
@@ -101,8 +101,8 @@ refer [Spark doc](https://spark.apache.org/docs/latest/running-on-kubernetes.htm
 ## Enable Additional Ingress for Driver
 
 Operator may create [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for
-Spark driver of running applications on demand. For example, to expose Spark UI - which is by 
-default enabled on driver port 4040, you may configure 
+Spark driver of running applications on demand. For example, to expose Spark UI - which is by
+default enabled on driver port 4040, you may configure
 
 ```yaml
 spec:
@@ -132,16 +132,16 @@ spec:
                         number: 80
 ```
 
-Spark Operator by default would populate the `.spec.selector` field of the created Service to match 
+Spark Operator by default would populate the `.spec.selector` field of the created Service to match
 the driver labels. If `.ingressSpec.rules` is not provided, Spark Operator would also populate one
-default rule backed by the associated Service. It's recommended to always provide the ingress spec 
+default rule backed by the associated Service. It's recommended to always provide the ingress spec
-to make sure it's compatible with your 
+to make sure it's compatible with your
 [IngressController](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
 
 ## Create and Mount ConfigMap
 
-It is possible to ask operator to create configmap so they can be used by driver and/or executor 
-pods on the fly. `configMapSpecs` allows you to specify the desired metadata and data as string 
+It is possible to ask operator to create configmap so they can be used by driver and/or executor
+pods on the fly. `configMapSpecs` allows you to specify the desired metadata and data as string
 literals for the configmap(s) to be created.
 
 ```yaml
@@ -155,9 +155,9 @@ spec:
 Like other app-specific resources, the created configmap has owner reference to Spark driver and
 therefore shares the same lifecycle and garbage collection mechanism with the associated app.  
 
-This feature can be used to create lightweight override config files for given Spark app. For 
+This feature can be used to create lightweight override config files for given Spark app. For
 example, below snippet would create and mount a configmap with metrics property file, then use it
-in SparkConf:   
+in SparkConf:
 
 ```yaml
 spec:
@@ -201,17 +201,11 @@ with non-zero code), Spark Operator introduces a few different failure state for
 app status monitoring at high level, and for ease of setting up different handlers if users
 are creating / managing SparkApplications with external microservices or workflow engines.
 
-
 Spark Operator recognizes "infrastructure failure" in the best effort way. It is possible to
 configure different restart policy on general failure(s) vs. on potential infrastructure
 failure(s). For example, you may configure the app to restart only upon infrastructure
-failures. If Spark application fails as a result of
-
-```
-DriverStartTimedOut
-ExecutorsStartTimedOut
-SchedulingFailure
-```
+failures. If Spark application fails as a result of `DriverStartTimedOut`,
+`ExecutorsStartTimedOut`, `SchedulingFailure`.
 
 It is more likely that the app failed as a result of infrastructure reason(s), including
 scenarios like driver or executors cannot be scheduled or cannot initialize in configured
@@ -242,9 +236,8 @@ restartConfig:
 
 ### Timeouts
 
-It's possible to configure applications to be proactively terminated and resubmitted in particular 
-cases to avoid resource deadlock. 
-
+It's possible to configure applications to be proactively terminated and resubmitted in particular
+cases to avoid resource deadlock.
 
 | Field                                                                                    | Type    | Default Value | Descritpion                                                                                                          |
 |-----------------------------------------------------------------------------------------|---------|---------------|--------------------------------------------------------------------------------------------------------------------|
@@ -254,7 +247,6 @@ cases to avoid resource deadlock.
 | .spec.applicationTolerations.applicationTimeoutConfig.driverReadyTimeoutMillis           | integer | 300000        | Time to wait for driver reaches ready state.                                                                         |
 | .spec.applicationTolerations.applicationTimeoutConfig.terminationRequeuePeriodMillis     | integer | 2000          | Back-off time when releasing resource need to be re-attempted for application.                                      |
 
-
 ### Instance Config
 
 Instance Config helps operator to decide whether an application is running healthy. When
@@ -318,5 +310,5 @@ worker instances would be deployed as [StatefulSets](https://kubernetes.io/docs/
 and exposed via k8s [service(s)](https://kubernetes.io/docs/concepts/services-networking/service/).
 
 Like Pod Template Support for Applications, it's also possible to submit template(s) for the Spark
-instances for `SparkCluster` to configure spec that's not supported via SparkConf. It's worth notice 
+instances for `SparkCluster` to configure spec that's not supported via SparkConf. It's worth notice
 that Spark may overwrite certain fields.
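
The two new config files above can also be used outside CI: `.markdownlint.yaml` only disables the MD013 line-length rule, and `.markdownlintignore` excludes `docs/config_properties.md`. A hypothetical direct invocation with markdownlint-cli (assumed to be available; inside the workflow, super-linter consumes these files automatically):

    # Lint all markdown files using the repository's new config and ignore list.
    npx markdownlint-cli --config .markdownlint.yaml --ignore-path .markdownlintignore "**/*.md"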


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
