Github user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20059#discussion_r158768984
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -528,51 +576,91 @@ specific to Spark on Kubernetes.
       </td>
     </tr>
     <tr>
    -   <td><code>spark.kubernetes.driver.limit.cores</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    -   </td>
    - </tr>
    - <tr>
    -   <td><code>spark.kubernetes.executor.limit.cores</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for each executor pod launched for the Spark Application.
    -   </td>
    - </tr>
    - <tr>
    -   <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
    -     configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
    -     will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
    -      <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
    -    </td>
    -  </tr>
    - <tr>
    -   <td><code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Add the environment variable specified by <code>EnvironmentVariableName</code> to
    -     the Driver process. The user can specify multiple of these to set multiple environment variables.
    -   </td>
    - </tr>
    -  <tr>
    -    <td><code>spark.kubernetes.mountDependencies.jarsDownloadDir</code></td>
    -    <td><code>/var/spark-data/spark-jars</code></td>
    -    <td>
    -      Location to download jars to in the driver and executors.
    -      This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    -    </td>
    -  </tr>
    -   <tr>
    -     <td><code>spark.kubernetes.mountDependencies.filesDownloadDir</code></td>
    -     <td><code>/var/spark-data/spark-files</code></td>
    -     <td>
    -       Location to download jars to in the driver and executors.
    -       This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    -     </td>
    -   </tr>
    +  <td><code>spark.kubernetes.driver.limit.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.limit.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for each executor pod launched for the Spark Application.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
    +    configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
    +    will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
    +     <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Add the environment variable specified by <code>EnvironmentVariableName</code> to
    +    the Driver process. The user can specify multiple of these to set multiple environment variables.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.jarsDownloadDir</code></td>
    +  <td><code>/var/spark-data/spark-jars</code></td>
    +  <td>
    +    Location to download jars to in the driver and executors.
    +    This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.filesDownloadDir</code></td>
    +  <td><code>/var/spark-data/spark-files</code></td>
    +  <td>
    +    Location to download files to in the driver and executors.
    +    This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.timeout</code></td>
    +  <td>300 seconds</td>
    --- End diff --
    
    Done.
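
As an aside, here is a minimal, hypothetical sketch of how the properties documented in the diff above could be passed to `spark-submit`. The master URL, application jar, selector value, and environment variable name are placeholders, and a container image property (documented elsewhere on this page) would also be needed for a real submission:

```bash
# Hypothetical submission; all values below are placeholders, not recommendations.
bin/spark-submit \
  --master k8s://https://<api-server-host>:<api-server-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.driver.limit.cores=1 \
  --conf spark.kubernetes.executor.limit.cores=2 \
  --conf spark.kubernetes.node.selector.identifier=myIdentifier \
  --conf spark.kubernetes.driverEnv.MY_ENV_VAR=someValue \
  --conf spark.kubernetes.mountDependencies.jarsDownloadDir=/var/spark-data/spark-jars \
  --conf spark.kubernetes.mountDependencies.filesDownloadDir=/var/spark-data/spark-files \
  local:///opt/spark/examples/jars/spark-examples.jar
```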


---
