HyukjinKwon commented on a change in pull request #27278:
[SPARK-30569][SQL][PYSPARK][SPARKR] Add percentile_approx DSL functions.
URL: https://github.com/apache/spark/pull/27278#discussion_r368780883
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/functions.scala
##########
@@ -652,6 +652,122 @@ object functions {
*/
def min(columnName: String): Column = min(Column(columnName))
+ /**
+ * Aggregate function: Returns an array of the approximate percentile values
+ * of a numeric column at the given percentages.
+ *
+ * Each value of the percentage array must be between 0.0 and 1.0.
+ *
+ * The accuracy parameter is a positive numeric literal
+ * which controls approximation accuracy at the cost of memory.
+ * A higher value of accuracy yields better accuracy; 1.0/accuracy
+ * is the relative error of the approximation.
+ *
+ * @group agg_funcs
+ * @since 3.0.0
+ */
+ def percentile_approx(e: Column, percentage: Array[Double], accuracy: Long): Column = {
+ withAggregateFunction {
+ new ApproximatePercentile(
+ e.expr, typedLit(percentage).expr, lit(accuracy).expr
+ )
+ }
+ }
+
+ /**
+ * Aggregate function: Returns an array of the approximate percentile values
+ * of a numeric column at the given percentages.
+ *
+ * Each value of the percentage array must be between 0.0 and 1.0.
+ *
+ * The accuracy parameter is a positive numeric literal
+ * which controls approximation accuracy at the cost of memory.
+ * A higher value of accuracy yields better accuracy; 1.0/accuracy
+ * is the relative error of the approximation.
+ *
+ * @group agg_funcs
+ * @since 3.0.0
+ */
+ def percentile_approx(columnName: String, percentage: Array[Double], accuracy: Long): Column = {
+ percentile_approx(Column(columnName), percentage, accuracy)
+ }
+
+ /**
+ * Aggregate function: Returns an array of the approximate percentile values
+ * of a numeric column at the given percentages.
+ *
+ * Each value of the percentage array must be between 0.0 and 1.0.
+ *
+ * The accuracy parameter is a positive numeric literal
+ * which controls approximation accuracy at the cost of memory.
+ * A higher value of accuracy yields better accuracy; 1.0/accuracy
+ * is the relative error of the approximation.
+ *
+ * @group agg_funcs
+ * @since 3.0.0
+ */
+ def percentile_approx(e: Column, percentage: Seq[Double], accuracy: Long): Column = {
+ percentile_approx(e, percentage.toArray, accuracy)
+ }
+
+ /**
+ * Aggregate function: Returns an array of the approximate percentile values
+ * of a numeric column at the given percentages.
+ *
+ * Each value of the percentage array must be between 0.0 and 1.0.
+ *
+ * The accuracy parameter is a positive numeric literal
+ * which controls approximation accuracy at the cost of memory.
+ * A higher value of accuracy yields better accuracy; 1.0/accuracy
+ * is the relative error of the approximation.
+ *
+ * @group agg_funcs
+ * @since 3.0.0
+ */
+ def percentile_approx(columnName: String, percentage: Seq[Double], accuracy: Long): Column = {
+ percentile_approx(Column(columnName), percentage.toArray, accuracy)
+ }
+
+ /**
+ * Aggregate function: Returns the approximate percentile value of a numeric
+ * column at the given percentage.
+ *
+ * The value of percentage must be between 0.0 and 1.0.
+ *
+ * The accuracy parameter is a positive numeric literal
+ * which controls approximation accuracy at the cost of memory.
+ * A higher value of accuracy yields better accuracy; 1.0/accuracy
+ * is the relative error of the approximation.
+ *
+ * @group agg_funcs
+ * @since 3.0.0
+ */
+ def percentile_approx(e: Column, percentage: Double, accuracy: Long): Column = {
+ withAggregateFunction {
+ new ApproximatePercentile(
+ e.expr, lit(percentage).expr, lit(accuracy).expr
+ )
+ }
+ }
+
+ /**
+ * Aggregate function: Returns the approximate percentile value of a numeric
+ * column at the given percentage.
+ *
+ * The value of percentage must be between 0.0 and 1.0.
+ *
+ * The accuracy parameter is a positive numeric literal
+ * which controls approximation accuracy at the cost of memory.
+ * A higher value of accuracy yields better accuracy; 1.0/accuracy
+ * is the relative error of the approximation.
+ *
+ * @group agg_funcs
+ * @since 3.0.0
+ */
+ def percentile_approx(columnName: String, percentage: Double, accuracy: Long): Column = {
Review comment:
This is what I meant: a single `(Column, Column, Column) -> Column` method to cover all the other cases.
What we need to do might be just to wrap the arguments with `lit` or `col`, e.g. `percentile_approx(col(...), lit(...), lit(...))`, so it won't be too difficult. That way we don't need to duplicate the docs, and there is less to maintain.
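For illustration, the argument normalization could look roughly like this plain-Python sketch (the `Column` and `lit` names here are hypothetical stand-ins for the real Spark API, which goes through the JVM):

```python
# Hypothetical sketch of a single percentile_approx entry point that
# normalizes all of its arguments, so one method replaces the many typed
# overloads. Column/lit only mimic the shape of the Spark API here.

class Column:
    def __init__(self, expr):
        self.expr = expr

def lit(value):
    # Columns pass through untouched; plain values become literal expressions.
    if isinstance(value, Column):
        return value
    return Column(("lit", value))

def percentile_approx(col, percentage, accuracy):
    # (Column, Column, Column) -> Column: scalar, list, or Column inputs
    # all funnel through lit(), so the docs live in one place.
    col, percentage, accuracy = lit(col), lit(percentage), lit(accuracy)
    return Column(("percentile_approx", col.expr, percentage.expr, accuracy.expr))
```

With this shape, `percentile_approx(Column("x"), [0.25, 0.5, 0.75], 10000)` and `percentile_approx(Column("x"), 0.5, 10000)` go through the same code path.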
I did a quick test and it seems to work fine in general:
Scala:
```scala
scala> spark.range(1).select(lit(Array(1, 2, 3))).show()
```
```
+---------+
| [1,2,3]|
+---------+
|[1, 2, 3]|
+---------+
```
Java:
```java
spark.range(1).select(lit(new String[]{"a", "b"})).show();
```
```
+------+
| [a,b]|
+------+
|[a, b]|
+------+
```
Python:
```python
>>> from pyspark.sql.functions import lit
>>> import array
>>> spark.range(1).select(lit(["a", "b"]))
```
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../spark/python/pyspark/sql/functions.py", line 54, in _
    jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
  File "/.../spark/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1286, in __call__
  File "/.../spark/python/pyspark/sql/utils.py", line 98, in deco
    return f(*a, **kw)
  File "/.../spark/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.lit.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [a, b]
	at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:85)
	at org.apache.spark.sql.catalyst.expressions.Literal$.$anonfun$create$2(literals.scala:145)
	at scala.util.Failure.getOrElse(Try.scala:222)
	at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:145)
	at org.apache.spark.sql.functions$.typedLit(functions.scala:131)
	at org.apache.spark.sql.functions$.lit(functions.scala:114)
	at org.apache.spark.sql.functions.lit(functions.scala)
```
R:
```R
> collect(select(createDataFrame(mtcars), lit(list('a', 'b', 'c'))))
[a,b,c]
1 a, b, c
2 a, b, c
```
It seems we should fix PySpark's `lit` to accept a `list` as an array. It won't be too difficult: we can check whether the value is a list with numeric elements, and create an Array manually.
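A rough sketch of what that `lit` fix could look like, in plain Python (the tagged tuples stand in for the real py4j conversion, which this sketch does not attempt):

```python
# Hypothetical sketch of the suggested PySpark lit() fix: detect a Python
# list, infer a uniform element type, and hand it over as a typed array
# instead of an unsupported java.util.ArrayList.

def _infer_array_elem_type(values):
    """Return a Spark-ish type name for a homogeneous non-empty list, else None."""
    if not values:
        return None
    if all(isinstance(v, bool) for v in values):
        return "boolean"
    if all(isinstance(v, int) and not isinstance(v, bool) for v in values):
        return "long"
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in values):
        return "double"
    if all(isinstance(v, str) for v in values):
        return "string"
    return None

def lit(value):
    if isinstance(value, list):
        elem_type = _infer_array_elem_type(value)
        if elem_type is None:
            raise TypeError("Unsupported literal list: %r" % (value,))
        # Real code would build a py4j array of elem_type and call the
        # JVM-side functions.lit; here we just return a tagged tuple.
        return ("array_lit", elem_type, value)
    return ("lit", value)
```

With something like this, `lit([0.25, 0.5, 0.75])` would yield a double-typed array literal instead of failing on `java.util.ArrayList`.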
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]