GitHub user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9264#discussion_r44856611
--- Diff: core/src/main/scala/org/apache/spark/rdd/AsyncRDDActions.scala ---
@@ -66,14 +65,22 @@ class AsyncRDDActions[T: ClassTag](self: RDD[T]) extends Serializable with Logging
      */
   def takeAsync(num: Int): FutureAction[Seq[T]] = self.withScope {
     val f = new ComplexFutureAction[Seq[T]]
-
-    f.run {
-      // This is a blocking action so we should use "AsyncRDDActions.futureExecutionContext" which
-      // is a cached thread pool.
-      val results = new ArrayBuffer[T](num)
-      val totalParts = self.partitions.length
-      var partsScanned = 0
-      while (results.size < num && partsScanned < totalParts) {
+    // Cached thread pool to handle aggregation of subtasks.
+    implicit val executionContext = AsyncRDDActions.futureExecutionContext
+    val results = new ArrayBuffer[T](num)
+    val totalParts = self.partitions.length
+
+    /*
+      Recursively triggers jobs to scan partitions until either the requested
+      number of elements are retrieved, or the partitions to scan are exhausted.
+      This implementation is non-blocking, asynchronously handling the
+      results of each job and triggering the next job using callbacks on futures.
+    */
+    def continue(partsScanned : Int) : Future[Seq[T]] =
+      if (results.size >= num || partsScanned >= totalParts) {
+        Future.successful(results.toSeq)
+      }
--- End diff --
--- End diff --
`else` on same line
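For context, the non-blocking pattern under review can be sketched outside Spark as a recursive chain of futures, where each completed job's callback triggers the next. This is a minimal sketch, not Spark code: `takeAsyncSketch`, `parts`, and `runJob` are hypothetical stand-ins (`runJob` simulates scanning one partition), and the real implementation uses Spark's job submission instead.

```scala
import scala.collection.mutable.ArrayBuffer
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical sketch of the recursive, non-blocking scan discussed above.
// `parts` stands in for an RDD's partitions; `num` is how many elements to take.
def takeAsyncSketch(parts: Seq[Seq[Int]], num: Int)
                   (implicit ec: ExecutionContext): Future[Seq[Int]] = {
  val results = new ArrayBuffer[Int](num)
  val totalParts = parts.length

  // Stand-in for submitting a Spark job over one partition.
  def runJob(p: Int): Future[Seq[Int]] = Future(parts(p))

  // Each job's completion callback decides whether to stop or launch the
  // next job; no thread ever blocks waiting for results. Jobs are chained
  // strictly one after another, so mutating `results` here is safe.
  def continue(partsScanned: Int): Future[Seq[Int]] =
    if (results.size >= num || partsScanned >= totalParts) {
      Future.successful(results.toSeq.take(num))
    } else {
      runJob(partsScanned).flatMap { res =>
        results ++= res
        continue(partsScanned + 1)
      }
    }

  continue(0)
}
```

The `else` branch carries the recursion: `flatMap` registers a callback that appends the job's output and schedules the next scan, which is why the style nit about keeping `else` on the same line applies to exactly this `if`/`else` pair.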
---
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]