GitHub user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20059#discussion_r158722622
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -528,51 +576,91 @@ specific to Spark on Kubernetes.
       </td>
     </tr>
     <tr>
    -   <td><code>spark.kubernetes.driver.limit.cores</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    -   </td>
    - </tr>
    - <tr>
    -   <td><code>spark.kubernetes.executor.limit.cores</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for each executor pod launched for the Spark Application.
    -   </td>
    - </tr>
    - <tr>
    -   <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
    -     configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
    -     will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
    -      <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
    -    </td>
    -  </tr>
    - <tr>
    -   <td><code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Add the environment variable specified by <code>EnvironmentVariableName</code> to
    -     the Driver process. The user can specify multiple of these to set multiple environment variables.
    -   </td>
    - </tr>
    -  <tr>
    -    <td><code>spark.kubernetes.mountDependencies.jarsDownloadDir</code></td>
    -    <td><code>/var/spark-data/spark-jars</code></td>
    -    <td>
    -      Location to download jars to in the driver and executors.
    -      This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    -    </td>
    -  </tr>
    -   <tr>
    -     <td><code>spark.kubernetes.mountDependencies.filesDownloadDir</code></td>
    -     <td><code>/var/spark-data/spark-files</code></td>
    -     <td>
    -       Location to download jars to in the driver and executors.
    -       This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    -     </td>
    -   </tr>
    +  <td><code>spark.kubernetes.driver.limit.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.limit.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for each executor pod launched for the Spark Application.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
    +    configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
    +    will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
    +     <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Add the environment variable specified by <code>EnvironmentVariableName</code> to
    +    the Driver process. The user can specify multiple of these to set multiple environment variables.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.jarsDownloadDir</code></td>
    +  <td><code>/var/spark-data/spark-jars</code></td>
    +  <td>
    +    Location to download jars to in the driver and executors.
    +    This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.filesDownloadDir</code></td>
    +  <td><code>/var/spark-data/spark-files</code></td>
    +  <td>
    +    Location to download files to in the driver and executors.
    +    This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.timeout</code></td>
    +  <td>300 seconds</td>
    +  <td>
    +   Timeout in seconds before aborting the attempt to download and unpack dependencies from remote locations into
    +   the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.maxSimultaneousDownloads</code></td>
    +  <td>5</td>
    +  <td>
    +   Maximum number of remote dependencies to download simultaneously in a driver or executor pod.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.initContainer.image</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Container image for the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">init-container</a> of the driver and executors for downloading dependencies. This is usually of the form <code>example.com/repo/spark-init:v1.0.0</code>. This configuration is optional; the user must provide it if the application uses any dependencies that are not local to the container and must be downloaded remotely.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.driver.secrets.[SecretName]</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Add the <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes Secret</a> named <code>SecretName</code> to the driver pod on the path specified in the value. For example,
    +   <code>spark.kubernetes.driver.secrets.spark-secret=/etc/secrets</code>. Note that if an init-container is used,
    +   the secret will also be added to the init-container in the driver pod.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.secrets.[SecretName]</code></td>
    +  <td>5</td>
    --- End diff --
    
    Hmm, copy and paste error. Fixed.
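    
    For anyone reading along: these options are set like any other Spark conf.
    A minimal, untested sketch of a submission exercising a few of the options
    documented above (the API server address, env var name, jar path, and image
    name are made-up placeholders, and the usual required confs such as the
    main container image are omitted for brevity):
    
        $ bin/spark-submit \
            --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
            --deploy-mode cluster \
            --class org.apache.spark.examples.SparkPi \
            --conf spark.kubernetes.node.selector.identifier=myIdentifier \
            --conf spark.kubernetes.driverEnv.MY_ENV_VAR=someValue \
            --conf spark.kubernetes.driver.secrets.spark-secret=/etc/secrets \
            --conf spark.kubernetes.initContainer.image=example.com/repo/spark-init:v1.0.0 \
            local:///opt/spark/examples/jars/spark-examples.jar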

