Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18307#discussion_r125146063
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2205,37 +2205,151 @@ class Dataset[T] private[sql](
* // max 92.0 192.0
* }}}
*
+ * See also [[summary]]
+ *
+ * @param cols Columns to compute statistics on.
+ *
* @group action
* @since 1.6.0
*/
@scala.annotation.varargs
- def describe(cols: String*): DataFrame = withPlan {
+ def describe(cols: String*): DataFrame = {
+ val selected = if (cols.isEmpty) this else select(cols.head, cols.tail: _*)
+ selected.summary("count", "mean", "stddev", "min", "max")
+ }
+
+ /**
+ * Computes specified statistics for numeric and string columns. Available statistics are:
+ *
+ * - count
+ * - mean
+ * - stddev
+ * - min
+ * - max
+ * - arbitrary approximate percentiles specified as a percentage (e.g., 75%)
+ *
+ * If no statistics are given, this function computes count, mean, stddev, min,
+ * approximate quartiles, and max.
+ *
+ * This function is meant for exploratory data analysis, as we make no guarantee about the
+ * backward compatibility of the schema of the resulting Dataset. If you want to
+ * programmatically compute summary statistics, use the `agg` function instead.
+ *
+ * {{{
+ * ds.summary().show()
+ *
+ * // output:
+ * // summary age height
+ * // count 10.0 10.0
+ * // mean 53.3 178.05
+ * // stddev 11.6 15.7
+ * // min 18.0 163.0
+ * // 25% 24.0 176.0
+ * // 50% 24.0 176.0
+ * // 75% 32.0 180.0
+ * // max 92.0 192.0
+ * }}}
+ *
+ * {{{
+ * ds.summary("count", "min", "25%", "75%", "max").show()
+ *
+ * // output:
+ * // summary age height
+ * // count 10.0 10.0
+ * // min 18.0 163.0
+ * // 25% 24.0 176.0
+ * // 75% 32.0 180.0
+ * // max 92.0 192.0
+ * }}}
+ *
+ * @param statistics Statistics from the list above to be computed.
+ *
+ * @group action
+ * @since 2.3.0
+ */
+ @scala.annotation.varargs
+ def summary(statistics: String*): DataFrame = withPlan {
--- End diff ---
Can we move the implementation into
org.apache.spark.sql.execution.stat.StatFunctions? I worry that Dataset is
getting too long. It should mostly be an interface / delegation layer, with
most of the implementation living elsewhere.