tgravescs commented on a change in pull request #28085:
[SPARK-29641][PYTHON][CORE] Stage Level Sched: Add python api's and tests
URL: https://github.com/apache/spark/pull/28085#discussion_r408157102
##########
File path: core/src/main/scala/org/apache/spark/resource/ExecutorResourceRequests.scala
##########
@@ -38,6 +39,8 @@ private[spark] class ExecutorResourceRequests() extends Serializable {
   def requests: Map[String, ExecutorResourceRequest] = _executorResources.asScala.toMap
+  def requestsJMap: JMap[String, ExecutorResourceRequest] = requests.asJava
Review comment:
I guess I disagree with you; others already agreed on this in the previous PR for
TaskContextImpl. Why not just have both functions, so it is friendly to both the
Java and Scala APIs? We have lots of Java-specific versions of APIs; go look at
the Dataset API docs and search for "Java-specific". I agree there is no set rule
for this, but I would rather make it more friendly for the common case, which I
think is the Scala API. One point here is that I don't think Spark has many APIs
that return things like this; it is usually the input parameters. There is
collect() and collectAsList(), but when I was looking at the TaskContextImpl
change I didn't find any other usages.
cc @mengxr @jiangxb1987 as they were in on the original API.
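For context, here is a minimal, hypothetical sketch of the pattern being argued for: a Scala-first accessor plus a Java-specific `*JMap` variant, analogous to `collect()` vs `collectAsList()`. The class and value types below are illustrative only, not the actual ExecutorResourceRequests source.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.{Map => JMap}

import scala.collection.JavaConverters._

// Sketch of the pattern under discussion (names are illustrative):
// keep one Scala-friendly accessor as the primary API and add a
// "JMap" variant for Java callers.
class ResourceRequests {
  private val _resources = new ConcurrentHashMap[String, Long]()

  def add(name: String, amount: Long): this.type = {
    _resources.put(name, amount)
    this
  }

  // Scala-friendly accessor: returns an immutable scala.collection.Map
  def requests: Map[String, Long] = _resources.asScala.toMap

  // Java-specific accessor: returns a java.util.Map for Java callers
  def requestsJMap: JMap[String, Long] = requests.asJava
}
```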