Yikun commented on a change in pull request #35640:
URL: https://github.com/apache/spark/pull/35640#discussion_r814613104



##########
File path: docs/running-on-kubernetes.md
##########
@@ -1356,6 +1356,26 @@ See the [configuration page](configuration.html) for 
information on Spark config
   </td>
   <td>3.3.0</td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.job.minCPU</code></td>

Review comment:
       Yep, this is a good question. There are some related use cases of Spark
with Volcano in production that can be shared here:
   - Case 1 (this PR): `minCPU` = a user-specified `minCPU`: for users who know their
own cluster resources well, this is the basic use case. **Especially, when
users don't want to set `minRes` strictly to the resource amount the Spark job
actually needs**.
   - Case 2: `minCPU` = driver.request + `executor.number` * `executor.request`:
for users who don't care much about job resource usage.
   - Case 3: `minCPU` = (driver.request + `executor.number` *
`executor.request`) * `factor`: for users who want to guarantee the job's resources
to some level, but also want to improve the utilization of the cluster.
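The three sizing strategies above boil down to simple arithmetic. A minimal sketch of that calculation (the function and parameter names here are illustrative, not actual Spark or Volcano configs):

```python
def min_cpu(driver_request: float, executor_request: float,
            executor_number: int, factor: float = 1.0) -> float:
    """Compute a PodGroup minCPU from per-pod CPU requests.

    factor == 1.0 reproduces Case 2 (full job request);
    0 < factor < 1.0 gives Case 3 (partial guarantee to raise
    cluster utilization). Case 1 is simply a user-supplied value.
    """
    return (driver_request + executor_number * executor_request) * factor

# Case 2: 1-core driver + 10 executors x 2 cores each
full_guarantee = min_cpu(1.0, 2.0, 10)        # 21.0 cores
# Case 3: guarantee only half of the full request
half_guarantee = min_cpu(1.0, 2.0, 10, 0.5)   # 10.5 cores
```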




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


