hvanhovell commented on code in PR #40796:
URL: https://github.com/apache/spark/pull/40796#discussion_r1183186633
##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1271,10 +1268,35 @@ class Dataset[T] private[sql] (
val colNames: Seq[String] = col1 +: cols
new RelationalGroupedDataset(
toDF(),
- colNames.map(colName => Column(colName).expr),
+ colNames.map(colName => Column(colName)),
proto.Aggregate.GroupType.GROUP_TYPE_GROUPBY)
}
+ /**
+ * (Scala-specific) Reduces the elements of this Dataset using the specified binary function.
+ * The given `func` must be commutative and associative or the result may be non-deterministic.
+ *
+ * @group action
+ * @since 3.5.0
+ */
+ def reduce(func: (T, T) => T): T = {
+ val list = this
+ .groupByKey(UdfUtils.groupAllUnderBoolTrue())(PrimitiveBooleanEncoder)
Review Comment:
How about `df.groupBy().as[Unit, T].reduceGroups(func).as[T].head`? That
should stop us from submitting an aggregate with a group.
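Spelled out, the suggested alternative could look roughly like the sketch below. This is only an illustration of the reviewer's one-liner, not code from the PR: it assumes `this` is a `Dataset[T]` with an implicit encoder for `T` in scope, and whether the final `.as[T]` typechecks as written depends on the available encoders for the `(Unit, T)` result of `reduceGroups`.

```scala
// Sketch of the reviewer's suggestion, assuming an implicit Encoder[T] in scope.
def reduce(func: (T, T) => T): T = {
  this
    .groupBy()          // no grouping columns, so no group-by expression is sent
    .as[Unit, T]        // treat the whole Dataset as a single key-less group
    .reduceGroups(func) // Dataset[(Unit, T)] with one row
    .as[T]              // project away the Unit key (encoder-dependent, see lead-in)
    .head()             // pull the single reduced value to the driver
}
```

The point of the suggestion is that `groupBy().as[Unit, T]` avoids submitting an aggregate that carries a real grouping key, unlike the `groupByKey(UdfUtils.groupAllUnderBoolTrue())` approach in the diff above.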
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]