HyukjinKwon commented on a change in pull request #28085: 
[SPARK-29641][PYTHON][CORE] Stage Level Sched: Add python api's and tests
URL: https://github.com/apache/spark/pull/28085#discussion_r403648232
 
 

 ##########
 File path: python/pyspark/rdd.py
 ##########
 @@ -2483,6 +2485,32 @@ def _is_barrier(self):
         """
         return self._jrdd.rdd().isBarrier()
 
+    def withResources(self, profile):
+        """
+        .. note:: Experimental
+
+        Specify a ResourceProfile to use when calculating this RDD. This is only supported on
+        certain cluster managers and currently requires dynamic allocation to be enabled.
+        It will result in new executors with the resources specified being acquired to
+        calculate the RDD.
+
+        .. versionadded:: 3.1.0
+        """
+        self.has_resourceProfile = True
+        self._jrdd.withResources(profile._jResourceProfile)
+        return self
+
+    def getResourceProfile(self):
 
 Review comment:
   @tgravescs, can you clarify first whether we're going to make `getResourceProfile` and `withResources` public APIs or not? I strongly think it's a bad idea to mix private declarations with public API ones.
   
   
https://github.com/apache/spark/blob/5d76b12e9b2ca0eb090c3c5145eee4cf78caba13/core/src/main/scala/org/apache/spark/rdd/RDD.scala#L1740-L1743
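
   For readers following along, here is a minimal sketch of how the API under review might be exercised end to end. It assumes the `ResourceProfileBuilder`, `ExecutorResourceRequests`, and `TaskResourceRequests` classes proposed alongside this change keep their current shape; the resource amounts are illustrative, and per the docstring this only takes effect on cluster managers that support stage-level scheduling with dynamic allocation enabled:

   ```python
   from pyspark import SparkContext
   from pyspark.resource import (ExecutorResourceRequests, TaskResourceRequests,
                                 ResourceProfileBuilder)

   sc = SparkContext()

   # Describe the executors this stage should run on and what each task needs.
   # The amounts are illustrative and must be satisfiable by the cluster manager.
   ereqs = ExecutorResourceRequests().cores(4).memory("4g")
   treqs = TaskResourceRequests().cpus(2)

   # Note: `build` is exposed as a property on the builder, not a method.
   profile = ResourceProfileBuilder().require(ereqs).require(treqs).build

   rdd = sc.parallelize(range(100)).withResources(profile)
   print(rdd.getResourceProfile())  # the profile attached above; None if never set
   ```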
