dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925368



##########
File path: docs/running-on-kubernetes.md
##########
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
   </td>
   <td>3.3.0</td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.configMap.maxSize</code></td>
+  <td>1572864</td>
+  <td>
+    Max size limit for a config map. This is configurable as per https://etcd.io/docs/v3.4.0/dev-guide/limit/ on k8s server end.
+  </td>
+  <td>3.1.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.allocation.executor.timeout</code></td>
+  <td>600s</td>
+  <td>
+    Time to wait before a newly created executor POD request, which has not
+    reached the POD pending state yet, is considered timed out and will be deleted.
+  </td>
+  <td>3.1.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.executor.missingPodDetectDelta</code></td>
+  <td>30s</td>
+  <td>
+    When a registered executor's POD is missing from the Kubernetes API server's polled
+    list of PODs, then this delta time is taken as the accepted time difference between the
+    registration time and the time of the polling. After this time the POD is considered
+    missing from the cluster and the executor will be removed.
+  </td>
+  <td>3.1.1</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.allocation.driver.readinessTimeout</code></td>
+  <td>1s</td>
+  <td>
+    Time to wait for driver pod to get ready before creating executor pods. This wait
+    only happens on application start. If timeout happens, executor pods will still be
+    created.
+  </td>
+  <td>3.1.3</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.decommission.script</code></td>
+  <td>/opt/decom.sh</td>
+  <td>
+    The location of the script to use for graceful decommissioning.
+  </td>
+  <td>3.2.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.driver.service.deleteOnTermination</code></td>
+  <td>true</td>

Review comment:
       ditto

##########
File path: docs/running-on-kubernetes.md
##########
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
   </td>
   <td>3.3.0</td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.configMap.maxSize</code></td>
+  <td>1572864</td>
+  <td>
+    Max size limit for a config map. This is configurable as per https://etcd.io/docs/v3.4.0/dev-guide/limit/ on k8s server end.
+  </td>
+  <td>3.1.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.allocation.executor.timeout</code></td>
+  <td>600s</td>
+  <td>
+    Time to wait before a newly created executor POD request, which has not
+    reached the POD pending state yet, is considered timed out and will be deleted.
+  </td>
+  <td>3.1.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.executor.missingPodDetectDelta</code></td>
+  <td>30s</td>
+  <td>
+    When a registered executor's POD is missing from the Kubernetes API server's polled
+    list of PODs, then this delta time is taken as the accepted time difference between the
+    registration time and the time of the polling. After this time the POD is considered
+    missing from the cluster and the executor will be removed.
+  </td>
+  <td>3.1.1</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.allocation.driver.readinessTimeout</code></td>
+  <td>1s</td>
+  <td>
+    Time to wait for driver pod to get ready before creating executor pods. This wait
+    only happens on application start. If timeout happens, executor pods will still be
+    created.
+  </td>
+  <td>3.1.3</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.decommission.script</code></td>
+  <td>/opt/decom.sh</td>
+  <td>
+    The location of the script to use for graceful decommissioning.
+  </td>
+  <td>3.2.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.driver.service.deleteOnTermination</code></td>
+  <td>true</td>
+  <td>
+    If true, driver service will be deleted on Spark application termination. If false,
+    it will be cleaned up when the driver pod is deleted.
+  </td>
+  <td>3.2.0</td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.driver.ownPersistentVolumeClaim</code></td>
+  <td>false</td>

Review comment:
       ditto
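
For context on how the properties quoted above are typically applied, here is a minimal `spark-defaults.conf` sketch using only the property names and default values shown in this hunk (illustrative only, not part of the PR):

```
spark.kubernetes.configMap.maxSize                     1572864
spark.kubernetes.allocation.executor.timeout           600s
spark.kubernetes.executor.missingPodDetectDelta        30s
spark.kubernetes.allocation.driver.readinessTimeout    1s
spark.kubernetes.decommission.script                   /opt/decom.sh
spark.kubernetes.driver.service.deleteOnTermination    true
spark.kubernetes.driver.ownPersistentVolumeClaim       false
```

Equivalently, each property can be set at submit time with `--conf key=value` on `spark-submit`.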




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


