[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758926832



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
   
   3.3.0
 
+
+  spark.kubernetes.configMap.maxSize
+  1572864
+  
+Max size limit for a config map. This is configurable as per
+https://etcd.io/docs/v3.4.0/dev-guide/limit/ on k8s server end.
+  
+  3.1.0
+
+
+  spark.kubernetes.allocation.executor.timeout
+  600s
+  
+Time to wait before a newly created executor POD request, which has not reached
+the POD pending state yet, is considered timed out and will be deleted.
+  
+  3.1.0
+
+
+  spark.kubernetes.executor.missingPodDetectDelta
+  30s
+  
+When a registered executor's POD is missing from the Kubernetes API server's polled
+list of PODs, this delta time is taken as the accepted time difference between the
+registration time and the time of the polling. After this time the POD is considered
+missing from the cluster and the executor will be removed.
+  
+  3.1.1
+
+
+  spark.kubernetes.allocation.driver.readinessTimeout
+  1s
+  
+Time to wait for the driver pod to get ready before creating executor pods. This wait
+only happens on application start. If the timeout happens, executor pods will still be
+created.
+  
+  3.1.3
+
+
+  spark.kubernetes.decommission.script
+  /opt/decom.sh
+  
+The location of the script to use for graceful decommissioning.
+  
+  3.2.0
+
+
+  spark.kubernetes.driver.service.deleteOnTermination
+  true
+  
+If true, driver service will be deleted on Spark application termination.
+If false, it will be cleaned up when the driver pod is deleted.
+  
+  3.2.0
+
+
+  spark.kubernetes.driver.ownPersistentVolumeClaim
+  false
+  
+If true, driver pod becomes the owner of on-demand persistent volume claims
+instead of the executor pods.
+  
+  3.2.0
+
+
+  spark.kubernetes.driver.reusePersistentVolumeClaim
+  false
+  
+If true, driver pod tries to reuse driver-owned on-demand persistent volume claims
+of the deleted executor pods if they exist. This can be useful to reduce executor pod
+creation delay by skipping persistent volume creations. Note that a pod in
+`Terminating` pod status is not a deleted pod by definition and its resources
+including persistent volume claims are not reusable yet. Spark will create new
+persistent volume claims when there exists no reusable one. In other words, the total
+number of persistent volume claims can sometimes be larger than the number of running
+executors. This config requires spark.kubernetes.driver.ownPersistentVolumeClaim=true.
+  
+  3.2.0
+
+
+  spark.kubernetes.executor.disableConfigMap
+  false
+  
+If true, disable ConfigMap creation for executors.
+  
+  3.2.0
+
+
+  spark.kubernetes.driver.pod.featureSteps
+  (none)
+  
+Class names of extra driver pod feature steps implementing
+KubernetesFeatureConfigStep. This is a developer API. Comma separated.
+Runs after all of Spark internal feature steps.
+  
+  3.2.0
+
+
+  spark.kubernetes.executor.pod.featureSteps
+  (none)
+  
+Class names of extra executor pod feature steps implementing
+KubernetesFeatureConfigStep. This is a developer API. Comma separated.
+Runs after all of Spark internal feature steps.
+  
+  3.2.0
+
+
+  spark.kubernetes.allocation.maxPendingPods

Review comment:
   Is there a reason why you put `spark.kubernetes.allocation` after `spark.kubernetes.executor`?
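For readers skimming this hunk: the two persistent-volume-claim entries above work as a pair. A minimal Scala sketch of combining them, based only on the quoted descriptions (the app name is illustrative, and this is not code from the PR):

```scala
import org.apache.spark.sql.SparkSession

// Sketch, assuming a plain SparkSession entry point. Only the two config
// keys documented in the hunk above are used.
val spark = SparkSession.builder()
  .appName("pvc-reuse-sketch") // illustrative name
  // Make the driver (not the executors) own on-demand PVCs...
  .config("spark.kubernetes.driver.ownPersistentVolumeClaim", "true")
  // ...which the reuse flag requires, per the description above.
  .config("spark.kubernetes.driver.reusePersistentVolumeClaim", "true")
  .getOrCreate()
```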







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758926274



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.allocation.pods.allocator
+  direct
+  
+Allocator to use for pods. Possible values are direct (the default) and statefulset,

Review comment:
   codify `direct` and `statefulset`?
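A sketch of selecting the non-default allocator named above (Scala, illustrative only; the truncated description suggests `statefulset` allocates executors via StatefulSets rather than individual pod requests):

```scala
import org.apache.spark.SparkConf

// Sketch: pick the pod allocator. "direct" is the default per the entry
// above; "statefulset" is the only other documented value.
val conf = new SparkConf()
  .set("spark.kubernetes.allocation.pods.allocator", "statefulset")
```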







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758926255



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.allocation.pods.allocator
+  direct
Review comment:
   codify?


[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758926045



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.executor.pod.featureSteps
+  (none)
+  
+Class name of an extra executor pod feature step implementing
Review comment:
   `Class name` -> `Class names`?
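For context, a feature step referenced by these comma-separated lists is roughly shaped as below. This is a hedged Scala sketch against the Spark 3.2-era developer API (`KubernetesFeatureConfigStep` with one required method, `configurePod`); the class name and label are made up, and visibility of these developer-API types can differ between Spark versions:

```scala
import io.fabric8.kubernetes.api.model.PodBuilder
import org.apache.spark.deploy.k8s.SparkPod
import org.apache.spark.deploy.k8s.features.KubernetesFeatureConfigStep

// Illustrative feature step: stamps a made-up label onto each pod it is
// asked to configure. Classes like this are what
// spark.kubernetes.{driver,executor}.pod.featureSteps point at, and they
// run after all of Spark's internal feature steps.
class AddTeamLabelFeatureStep extends KubernetesFeatureConfigStep {
  override def configurePod(pod: SparkPod): SparkPod = {
    val labeled = new PodBuilder(pod.pod)
      .editOrNewMetadata()
        .addToLabels("example.com/team", "data-platform") // hypothetical label
      .endMetadata()
      .build()
    SparkPod(labeled, pod.container)
  }
}
```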







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925884



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.driver.pod.featureSteps
+  (none)
+  
+Class names of an extra driver pod feature step implementing
+KubernetesFeatureConfigStep. This is a developer API. Comma separated.
Review comment:
   Codify with `KubernetesFeatureConfigStep` or back-quotation?







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925510



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.executor.disableConfigMap
+  false
Review comment:
   ditto







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925368



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.driver.service.deleteOnTermination
+  true
Review comment:
   ditto

##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.driver.ownPersistentVolumeClaim
+  false
Review comment:
   ditto







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925346



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.decommission.script
+  /opt/decom.sh
Review comment:
   ditto.







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925174



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.executor.missingPodDetectDelta
+  30s
Review comment:
   ditto.

##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.allocation.driver.readinessTimeout
+  1s
Review comment:
   ditto







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925122



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
+
+  spark.kubernetes.allocation.executor.timeout
+  600s
Review comment:
   Codify?
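The allocation timing knobs quoted in this thread can be grouped; a Scala sketch using only keys and defaults that appear in the hunk (the overridden values are illustrative):

```scala
import org.apache.spark.SparkConf

// Sketch of the executor-allocation timing knobs documented in this hunk.
val conf = new SparkConf()
  // Deadline for a new executor pod request to reach Pending (default 600s).
  .set("spark.kubernetes.allocation.executor.timeout", "300s")
  // Accepted gap between executor registration and a poll that omits its
  // pod, before the pod is treated as missing (default 30s).
  .set("spark.kubernetes.executor.missingPodDetectDelta", "60s")
  // Wait for driver readiness before creating executors (default 1s).
  .set("spark.kubernetes.allocation.driver.readinessTimeout", "5s")
```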







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758925060



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
   
   3.3.0
 
+
+  spark.kubernetes.configMap.maxSize
+  1572864

Review comment:
   Codify?
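Worth noting while codifying: the default `1572864` is exactly 1.5 * 1024 * 1024 bytes, i.e. etcd's documented 1.5 MiB request limit. A Scala sketch of overriding it (the 1 MiB value is illustrative):

```scala
import org.apache.spark.SparkConf

// 1572864 = 1.5 * 1024 * 1024 bytes, matching etcd's default 1.5 MiB limit.
val conf = new SparkConf()
  .set("spark.kubernetes.configMap.maxSize", (1024 * 1024).toString) // 1 MiB
```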







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758924807



##
File path: docs/running-on-kubernetes.md
##
@@ -1278,6 +1295,14 @@ See the [configuration page](configuration.html) for information on Spark config
   
   3.0.0
 
+
+  spark.kubernetes.dynamicAllocation.deleteGracePeriod
+  5s

Review comment:
   I guess we need `<code>` tag around this.







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758924421



##
File path: docs/running-on-kubernetes.md
##
@@ -1322,6 +1347,144 @@ See the [configuration page](configuration.html) for information on Spark config
   
   3.3.0
 
+
+  spark.kubernetes.configMap.maxSize
+  1572864
+  
+Max size limit for a config map. This is configurable as per
+https://etcd.io/docs/v3.4.0/dev-guide/limit/ on k8s server end.

Review comment:
   Could you make it a hyperlink by using `<a>`?







[GitHub] [spark] dongjoon-hyun commented on a change in pull request #34734: [SPARK-37480][K8S][DOC] Sync Kubernetes configuration to latest in running-on-k8s.md

2021-11-29 Thread GitBox


dongjoon-hyun commented on a change in pull request #34734:
URL: https://github.com/apache/spark/pull/34734#discussion_r758922930



##
File path: docs/running-on-kubernetes.md
##
@@ -591,7 +591,7 @@ See the [configuration page](configuration.html) for information on Spark config
   spark.kubernetes.container.image.pullPolicy
   IfNotPresent
   
-Container image pull policy used when pulling images within Kubernetes.
+Container image pull policy used when pulling images within Kubernetes.
+Valid values are Always, Never, and IfNotPresent.

Review comment:
   Could you codify the values? For example, `Always`?
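A one-line Scala sketch with one of the three documented values (illustrative):

```scala
import org.apache.spark.SparkConf

// Valid values per the doc line above: Always, Never, IfNotPresent.
val conf = new SparkConf()
  .set("spark.kubernetes.container.image.pullPolicy", "Always")
```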



