Ngone51 commented on a change in pull request #25047: [WIP][SPARK-27371][CORE] Support GPU-aware resources scheduling in Standalone
URL: https://github.com/apache/spark/pull/25047#discussion_r304915818
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/resource/ResourceUtils.scala
 ##########
 @@ -70,6 +86,258 @@ private[spark] object ResourceUtils extends Logging {
   // internally we currently only support addresses, so it's just an integer count
   val AMOUNT = "amount"
 
 +  /**
 +   * Assign resources to workers/drivers on the same host so that their
 +   * resource addresses don't conflict.
 +   * @param conf SparkConf
 +   * @param componentName spark.driver / spark.worker
 +   * @param resources the resources found by the worker/driver on the host
 +   * @param resourceRequirements the resource requirements asked by the worker/driver
 +   * @param pid the process id of the worker/driver acquiring resources
 +   * @return allocated resources for the worker/driver, or throws an exception
 +   *         if the worker's/driver's requirements can't be met
 +   */
+  def acquireResources(
+      conf: SparkConf,
+      componentName: String,
+      resources: Map[String, ResourceInformation],
+      resourceRequirements: Seq[ResourceRequirement],
+      pid: Int)
+    : Map[String, ResourceInformation] = {
+    if (resourceRequirements.isEmpty) {
 
 Review comment:
  I've added more comments to make it more readable, but sorry, I haven't broken it up. I tried, but didn't find a good way. Let me think about it a bit more.
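For context, the idea described in the doc comment above, handing out non-conflicting resource addresses to multiple workers/drivers on one host, could be sketched roughly as follows. This is a simplified, hypothetical sketch, not the PR's actual implementation: all names below are illustrative, and the real code must additionally serialize access to the shared pool of free addresses across processes (e.g. via a file lock), which is omitted here.

```scala
// Hypothetical, simplified sketch of per-host resource assignment.
// Stand-ins for the real Spark classes, defined locally so the sketch compiles on its own:
case class ResourceInformation(name: String, addresses: Array[String])
case class ResourceRequirement(resourceName: String, amount: Int)

def assignAddresses(
    freeByResource: Map[String, Seq[String]], // resource name -> addresses not yet claimed
    requirements: Seq[ResourceRequirement],
    pid: Int): Map[String, ResourceInformation] = {
  requirements.map { req =>
    val free = freeByResource.getOrElse(req.resourceName, Seq.empty)
    if (free.size < req.amount) {
      // Mirrors the "@return ... throws exception" contract in the doc comment.
      throw new IllegalStateException(
        s"Process $pid asked for ${req.amount} ${req.resourceName}(s) " +
        s"but only ${free.size} are unclaimed on this host")
    }
    // Claim the first `amount` free addresses for this process; because each
    // process only takes from the unclaimed pool, two workers on the same host
    // can never end up with the same address.
    req.resourceName ->
      ResourceInformation(req.resourceName, free.take(req.amount).toArray)
  }.toMap
}
```

The pid is kept in the signature because, in a multi-process setup, the coordinator needs to know which process each claimed address belongs to so the addresses can be released when that process exits.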

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
