HyukjinKwon commented on code in PR #36150:
URL: https://github.com/apache/spark/pull/36150#discussion_r881076458
##########
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -2012,6 +2012,152 @@ class Dataset[T] private[sql](
@scala.annotation.varargs
def agg(expr: Column, exprs: Column*): DataFrame = groupBy().agg(expr, exprs : _*)
+ /**
+ * (Scala-specific)
+ * Unpivot a DataFrame from wide format to long format, optionally
+ * leaving identifier variables set.
+ *
+ * This function is useful to massage a DataFrame into a format where some
+ * columns are identifier variables (`ids`), while all other columns,
+ * considered measured variables (`values`), are "unpivoted" to the rows,
+ * leaving just two non-identifier columns, 'variable' and 'value'.
+ *
+ * {{{
+ * val df = Seq((1, 11, 12L), (2, 21, 22L)).toDF("id", "int", "long")
+ * df.show()
+ * // output:
+ * // +---+---+----+
+ * // | id|int|long|
+ * // +---+---+----+
+ * // | 1| 11| 12|
+ * // | 2| 21| 22|
+ * // +---+---+----+
+ *
+ * df.melt(Seq("id")).show()
+ * // output:
+ * // +---+--------+-----+
+ * // | id|variable|value|
+ * // +---+--------+-----+
+ * // | 1| int| 11|
+ * // | 1| long| 12|
+ * // | 2| int| 21|
+ * // | 2| long| 22|
+ * // +---+--------+-----+
+ *
+ * df.melt(Seq("id")).printSchema
+ * //root
+ * // |-- id: integer (nullable = false)
+ * // |-- variable: string (nullable = false)
+ * // |-- value: long (nullable = true)
+ * }}}
+ *
+ * When no id columns are given, the unpivoted DataFrame consists of only the
+ * `variable` and `value` columns. When no value columns are given, all
+ * non-identifier columns are considered value columns.
+ *
+ * All value columns must be of the same data type. If their types differ,
+ * all value columns are cast to their nearest common data type. For instance,
+ * types `IntegerType` and `LongType` are compatible and cast to `LongType`,
+ * while `IntegerType` and `StringType` are not compatible and `melt` fails.
+ *
+ * The type of the `value` column is the nearest common data type of the value columns.
+ *
+ * @param ids names of the id columns
+ * @param values names of the value columns
+ * @param dropNulls whether to drop rows with null `value` entries from the returned DataFrame
+ * @param variableColumnName name of the variable column, default `variable`
+ * @param valueColumnName name of the value column, default `value`
+ *
+ * @group untypedrel
+ * @since 3.4.0
+ */
+ def melt(ids: Seq[String],
Review Comment:
Let's avoid using default arguments for Java compat. See also
https://github.com/databricks/scala-style-guide#java-default-param-values
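
A minimal sketch of the pattern the style guide recommends, using a toy table type and hypothetical names (not the actual Spark implementation): each former default argument becomes an explicit overload that forwards to the full signature, so Java callers see concrete methods instead of Scala's synthetic `melt$default$N` accessors.

```scala
object MeltSketch {
  // Toy wide "table": each row maps a column name to a value.
  type Row = Map[String, Long]

  // Full signature: every parameter explicit, so Java sees one concrete method.
  def melt(
      rows: Seq[Row],
      ids: Seq[String],
      variableColumnName: String,
      valueColumnName: String): Seq[Map[String, Any]] =
    rows.flatMap { row =>
      // Identifier columns are carried over unchanged into every output row.
      val idPart: Map[String, Any] = ids.map(c => c -> row(c)).toMap
      // Every non-id column becomes one (variable, value) output row.
      row.keys.toSeq.sorted.filterNot(ids.contains).map { col =>
        idPart + (variableColumnName -> col) + (valueColumnName -> row(col))
      }
    }

  // Overload standing in for the former default argument values,
  // forwarding to the full signature.
  def melt(rows: Seq[Row], ids: Seq[String]): Seq[Map[String, Any]] =
    melt(rows, ids, "variable", "value")
}
```

From Java, both `melt(rows, ids)` and the four-argument form are then ordinary overloaded methods, with no need to call generated default-value accessors.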
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]