tgravescs commented on a change in pull request #27118: [SPARK-30445][Core]
Accelerator aware scheduling handle setting configs to 0
URL: https://github.com/apache/spark/pull/27118#discussion_r363957812
##########
File path: core/src/main/scala/org/apache/spark/resource/ResourceUtils.scala
##########
@@ -119,18 +119,23 @@ private[spark] object ResourceUtils extends Logging {
def parseAllResourceRequests(
sparkConf: SparkConf,
componentName: String): Seq[ResourceRequest] = {
- listResourceIds(sparkConf, componentName).map { id =>
- parseResourceRequest(sparkConf, id)
+ listResourceIds(sparkConf, componentName).flatMap { id =>
+ val req = parseResourceRequest(sparkConf, id)
+ if (req.amount > 0) Some(req) else None
}
Review comment:
We could, but we would have to break it apart into two steps: one that gets
the parsed ResourceRequests back, and then one that filters that list of
requests, because we don't have the amount until we call parseResourceRequest.
I'm not sure it matters from a performance point of view?
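
For reference, a minimal sketch of the two-step variant being discussed,
reusing only the listResourceIds and parseResourceRequest helpers already
shown in the diff; it is functionally equivalent to the flatMap version, just
with the parsing and the amount > 0 filter split apart:

  def parseAllResourceRequests(
      sparkConf: SparkConf,
      componentName: String): Seq[ResourceRequest] = {
    // Step 1: parse every configured resource request for this component.
    val allRequests = listResourceIds(sparkConf, componentName).map { id =>
      parseResourceRequest(sparkConf, id)
    }
    // Step 2: drop requests whose configured amount is 0.
    allRequests.filter(_.amount > 0)
  }

Either way the list of resource ids is small, so the extra pass over it is
unlikely to matter for performance.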