tgravescs commented on a change in pull request #25047: 
[WIP][SPARK-27371][CORE] Support GPU-aware resources scheduling in Standalone
URL: https://github.com/apache/spark/pull/25047#discussion_r308310072
 
 

 ##########
 File path: docs/spark-standalone.md
 ##########
 @@ -243,6 +243,33 @@ SPARK_MASTER_OPTS supports the following system properties:
     receives no heartbeats.
   </td>
 </tr>
+<tr>
+  <td><code>spark.worker.resource.{resourceName}.amount</code></td>
+  <td>(none)</td>
+  <td>
+    Amount of a particular resource to use on the worker.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.worker.resource.{resourceName}.discoveryScript</code></td>
+  <td>(none)</td>
+  <td>
 +    Path to the resource discovery script, which is used to find a particular resource when the worker starts up.
 +    The output of the script should be formatted like <code>{"name": "gpu", "addresses": ["0","1"]}</code>.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.worker.resourcesFile</code></td>
+  <td>(none)</td>
+  <td>
 +    Path to a resources file which is used to find various resources when the worker starts up.
 +    The contents of the resources file should be formatted like <code>[{"id":{"componentName":
 +    "spark.worker","resourceName":"gpu"},"addresses":["0","1","2"]}]</code>.
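
To make the two spark.worker.resource.* entries above concrete: a worker might, for example, set spark.worker.resource.gpu.amount to 2 and point spark.worker.resource.gpu.discoveryScript at a small executable that prints the JSON shape documented in the diff. The sketch below is only an illustration and not part of this patch; the use of nvidia-smi and the reported GPU indices are assumptions about the worker host.

    #!/usr/bin/env python
    # Sketch of a GPU discovery script (illustration only): assumes nvidia-smi
    # is installed on the worker host and prints the JSON shape documented
    # above, i.e. {"name": "gpu", "addresses": [...]}.
    import json
    import subprocess

    def main():
        # List the GPU indices visible on this host.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"])
        addresses = [line.strip() for line in out.decode("utf-8").splitlines()
                     if line.strip()]
        print(json.dumps({"name": "gpu", "addresses": addresses}))

    if __name__ == "__main__":
        main()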
 
 Review comment:
    It's really still only used internally to Spark right now, but the format of
    the JSON has to match that. Unless someone is going to use ResourceAllocation
    to generate the JSON file, I think we can leave it private for now. We can
    always open it up later if needed.
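
For anyone who does hand-write a resources file today, a minimal sketch of producing JSON in the shape quoted in the diff is below; the output path and the GPU addresses are placeholder values, and the exact schema is simply whatever the (currently private) ResourceAllocation serializes to.

    # Illustration only: writes a resources file matching the shape shown in
    # the diff above. The path and addresses are made-up placeholder values.
    import json

    allocations = [
        {
            "id": {"componentName": "spark.worker", "resourceName": "gpu"},
            "addresses": ["0", "1", "2"],
        }
    ]

    with open("/tmp/worker-resources.json", "w") as f:
        json.dump(allocations, f)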

