hvanhovell commented on code in PR #40057:
URL: https://github.com/apache/spark/pull/40057#discussion_r1109131122
##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1035,6 +1035,29 @@ class Dataset[T] private[sql] (val session: SparkSession, private[sql] val plan:
}
}
+ /**
+ * Groups the Dataset using the specified columns, so we can run aggregation on them. See
+ * [[RelationalGroupedDataset]] for all the available aggregate functions.
+ *
+ * {{{
+ * // Compute the average for all numeric columns grouped by department.
+ * ds.groupBy($"department").avg()
+ *
+ * // Compute the max age and average salary, grouped by department and gender.
+ * ds.groupBy($"department", $"gender").agg(Map(
+ * "salary" -> "avg",
+ * "age" -> "max"
+ * ))
+ * }}}
+ *
+ * @group untypedrel
+ * @since 3.4.0
+ */
+ @scala.annotation.varargs
+ def groupBy(cols: Column*): RelationalGroupedDataset = {
+ RelationalGroupedDataset(toDF(), cols.map(_.expr))
Review Comment:
Nit: you don't have to convert to a DataFrame here (once we introduce encoders, it might actually be a bit faster if we don't). You could also pass the columns in as-is.
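
A minimal sketch of what the suggested change might look like, assuming `RelationalGroupedDataset`'s factory can take (or is adjusted to take) the `Dataset` and columns directly; this is an illustration of the review suggestion, not the actual patch:

```scala
// Hypothetical sketch: skip the toDF() conversion and hand the Dataset and
// columns to RelationalGroupedDataset directly. Whether this compiles as-is
// depends on the actual RelationalGroupedDataset factory signature in the
// Spark Connect Scala client.
@scala.annotation.varargs
def groupBy(cols: Column*): RelationalGroupedDataset = {
  // Pass `this` as-is; no intermediate DataFrame is materialized, which may
  // avoid extra work once encoders are introduced.
  RelationalGroupedDataset(this, cols.map(_.expr))
}
```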
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]