HyukjinKwon commented on code in PR #35899:
URL: https://github.com/apache/spark/pull/35899#discussion_r861445740
##########
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala:
##########
@@ -171,6 +171,86 @@ class KeyValueGroupedDataset[K, V] private[sql](
flatMapGroups((key, data) => f.call(key, data.asJava).asScala)(encoder)
}
+  /**
+   * (Scala-specific)
+   * Applies the given function to each group of data. For each unique group, the function will
+   * be passed the group key and a sorted iterator that contains all of the elements in the group.
+   * The function can return an iterator containing elements of an arbitrary type which will be
+   * returned as a new [[Dataset]].
+   *
+   * This function does not support partial aggregation, and as a result requires shuffling all
+   * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
+   * key, it is best to use the reduce function or an
+   * `org.apache.spark.sql.expressions#Aggregator`.
+   *
+   * Internally, the implementation will spill to disk if any given group is too large to fit into
+   * memory. However, users must take care to avoid materializing the whole iterator for a group
+   * (for example, by calling `toList`) unless they are sure that this is possible given the memory
+   * constraints of their cluster.
+   *
+   * @since 3.4.0
+   */
+  def flatMapSortedGroups[S: Encoder, U : Encoder]
Review Comment:
   I think it's too much to add an API that only allows sorting the data. We can already do this by sorting the iterator, right? The only problem this API solves is the case where each group is too big to fit in memory on the executor.
   Another problem: what if we want to sort in reverse order, or only by a couple of columns?
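
   For context, a minimal sketch of the existing workaround the comment alludes to, sorting the iterator in user code inside `flatMapGroups` (the `Event` case class and the sort key are hypothetical, chosen only for illustration):

   ```scala
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder().appName("sorted-groups-sketch").getOrCreate()
   import spark.implicits._

   case class Event(user: String, ts: Long, value: Int)
   val events = Seq(Event("a", 2, 10), Event("a", 1, 5), Event("b", 1, 7)).toDS()

   val sorted = events
     .groupByKey(_.user)
     .flatMapGroups { (user: String, it: Iterator[Event]) =>
       // Materializes the whole group before sorting -- fine for small groups,
       // but this is exactly the executor-memory risk that the proposed
       // flatMapSortedGroups avoids by sorting during the shuffle and spilling
       // to disk instead.
       it.toSeq.sortBy(_.ts).iterator
     }
   ```

   Note the flexibility of this approach cuts the other way too: `sortBy(_.ts)` can trivially become a descending sort or a multi-column sort, which is the second concern raised above.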
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]