zhengruifeng commented on code in PR #47343:
URL: https://github.com/apache/spark/pull/47343#discussion_r1683700834
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3820,17 +4092,85 @@ def regr_r2(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_r2("y", "x")
- ... ).show()
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_r2("y", "x")).show()
+ +------------------+
+ | regr_r2(y, x)|
+ +------------------+
+ |0.2727272727272726|
Review Comment:
```suggestion
|0.2727272727272...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3671,16 +3671,84 @@ def regr_avgx(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_avgx("y", "x"), sf.avg("x")
- ... ).show()
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
Review Comment:
shall we simplify the dataframe creation (here and in other places) with a single line:
```
spark.sql("SELECT * FROM VALUES (1, 2), (2, NULL), (3, 3), (4, 4) AS t(x, y)")
```
or
```
spark.createDataFrame([(1, 2), (2, None), (2, 3), (2, 4)], ['x', 'y'])
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3708,17 +3776,85 @@ def regr_avgy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_avgy("y", "x"), sf.avg("y")
- ... ).show()
- +-----------------+-----------------+
- | regr_avgy(y, x)| avg(y)|
- +-----------------+-----------------+
- |9.980732994136...|9.980732994136...|
- +-----------------+-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | 1.75| 1.75|
+ +---------------+------+
+
+ Example 2: All pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, None)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | NULL| 1.0|
+ +---------------+------+
+
+ Example 3: All pairs' y values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(None, 1)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | NULL| NULL|
+ +---------------+------+
+
+ Example 4: Some pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, None), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +------------------+------+
+ | regr_avgy(y, x)|avg(y)|
+ +------------------+------+
+ |1.6666666666666667| 1.75|
+ +------------------+------+
+
+ Example 5: Some pairs' x or y values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, None), (None, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------------------+
+ |regr_avgy(y, x)| avg(y)|
+ +---------------+------------------+
+ | 1.5|1.6666666666666667|
Review Comment:
```suggestion
| 1.5|1.6666666666666...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3894,17 +4302,85 @@ def regr_sxx(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_sxx("y", "x")
- ... ).show()
- +-----------------+
- | regr_sxx(y, x)|
- +-----------------+
- |666.9989999999...|
- +-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_sxx("y", "x")).show()
+ +------------------+
+ | regr_sxx(y, x)|
+ +------------------+
+ |2.7499999999999996|
Review Comment:
```suggestion
|2.7499999999999...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3820,17 +4092,85 @@ def regr_r2(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_r2("y", "x")
- ... ).show()
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_r2("y", "x")).show()
+ +------------------+
+ | regr_r2(y, x)|
+ +------------------+
+ |0.2727272727272726|
+ +------------------+
+
+ Example 2: All pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, None)], schema)
+ >>> df.select(sf.regr_r2("y", "x")).show()
+ +-------------+
+ |regr_r2(y, x)|
+ +-------------+
+ | NULL|
+ +-------------+
+
+ Example 3: All pairs' y values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(None, 1)], schema)
+ >>> df.select(sf.regr_r2("y", "x")).show()
+ +-------------+
+ |regr_r2(y, x)|
+ +-------------+
+ | NULL|
+ +-------------+
+
+ Example 4: Some pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, None), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_r2("y", "x")).show()
+------------------+
| regr_r2(y, x)|
+------------------+
- |0.9851908293645...|
+ |0.7500000000000001|
Review Comment:
```suggestion
|0.7500000000000...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3968,17 +4512,85 @@ def regr_syy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_syy("y", "x")
- ... ).show()
- +-----------------+
- | regr_syy(y, x)|
- +-----------------+
- |68250.53503811...|
- +-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_syy("y", "x")).show()
+ +------------------+
+ | regr_syy(y, x)|
+ +------------------+
+ |0.7499999999999999|
Review Comment:
```suggestion
|0.7499999999999...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3931,17 +4407,85 @@ def regr_sxy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_sxy("y", "x")
- ... ).show()
- +----------------+
- | regr_sxy(y, x)|
- +----------------+
- |6696.93065315...|
- +----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_sxy("y", "x")).show()
+ +------------------+
+ | regr_sxy(y, x)|
+ +------------------+
+ |0.7499999999999998|
Review Comment:
```suggestion
|0.7499999999999...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3671,16 +3671,84 @@ def regr_avgx(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_avgx("y", "x"), sf.avg("x")
- ... ).show()
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
Review Comment:
It seems the specified schema is not needed
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3708,17 +3776,85 @@ def regr_avgy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_avgy("y", "x"), sf.avg("y")
- ... ).show()
- +-----------------+-----------------+
- | regr_avgy(y, x)| avg(y)|
- +-----------------+-----------------+
- |9.980732994136...|9.980732994136...|
- +-----------------+-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | 1.75| 1.75|
+ +---------------+------+
+
+ Example 2: All pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, None)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | NULL| 1.0|
+ +---------------+------+
+
+ Example 3: All pairs' y values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(None, 1)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | NULL| NULL|
+ +---------------+------+
+
+ Example 4: Some pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, None), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +------------------+------+
+ | regr_avgy(y, x)|avg(y)|
+ +------------------+------+
+ |1.6666666666666667| 1.75|
Review Comment:
The output may vary across different envs, e.g. different CPUs
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3708,17 +3776,85 @@ def regr_avgy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_avgy("y", "x"), sf.avg("y")
- ... ).show()
- +-----------------+-----------------+
- | regr_avgy(y, x)| avg(y)|
- +-----------------+-----------------+
- |9.980732994136...|9.980732994136...|
- +-----------------+-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | 1.75| 1.75|
+ +---------------+------+
+
+ Example 2: All pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, None)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | NULL| 1.0|
+ +---------------+------+
+
+ Example 3: All pairs' y values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(None, 1)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +---------------+------+
+ |regr_avgy(y, x)|avg(y)|
+ +---------------+------+
+ | NULL| NULL|
+ +---------------+------+
+
+ Example 4: Some pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, None), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_avgy("y", "x"), sf.avg("y")).show()
+ +------------------+------+
+ | regr_avgy(y, x)|avg(y)|
+ +------------------+------+
+ |1.6666666666666667| 1.75|
Review Comment:
```suggestion
|1.6666666666666...| 1.75|
```
please do not test exactly against a float value like this
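The concern here can be seen without a Spark session: IEEE-754 addition is not associative, so an aggregate summed in a different order (e.g. across partitions) can round differently in its last digits. A minimal pure-Python sketch (not part of the PR):

```python
# Floating-point addition is not associative: the same values combined in a
# different order or grouping can round differently in the last bits.
a = sum([0.1] * 10)  # ten additions, each one rounding
b = 0.1 * 10         # a single multiplication, rounding once
print(a == b)        # False: 0.9999999999999999 vs 1.0
```

This is why doctest outputs for float aggregates should end in `...` (matched under doctest's ELLIPSIS option), e.g. `|1.6666666666666...|`, rather than pinning every digit.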
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3968,17 +4512,85 @@ def regr_syy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_syy("y", "x")
- ... ).show()
- +-----------------+
- | regr_syy(y, x)|
- +-----------------+
- |68250.53503811...|
- +-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_syy("y", "x")).show()
+ +------------------+
+ | regr_syy(y, x)|
+ +------------------+
+ |0.7499999999999999|
+ +------------------+
+
+ Example 2: All pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, None)], schema)
+ >>> df.select(sf.regr_syy("y", "x")).show()
+ +--------------+
+ |regr_syy(y, x)|
+ +--------------+
+ | NULL|
+ +--------------+
+
+ Example 3: All pairs' y values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(None, 1)], schema)
+ >>> df.select(sf.regr_syy("y", "x")).show()
+ +--------------+
+ |regr_syy(y, x)|
+ +--------------+
+ | NULL|
+ +--------------+
+
+ Example 4: Some pairs' x values are null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, None), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_syy("y", "x")).show()
+ +------------------+
+ | regr_syy(y, x)|
+ +------------------+
+ |0.6666666666666666|
Review Comment:
```suggestion
|0.6666666666666...|
```
##########
python/pyspark/sql/functions/builtin.py:
##########
@@ -3968,17 +4512,85 @@ def regr_syy(y: "ColumnOrName", x: "ColumnOrName") -> Column:
Examples
--------
- >>> from pyspark.sql import functions as sf
- >>> x = (sf.col("id") % 3).alias("x")
- >>> y = (sf.randn(42) + x * 10).alias("y")
- >>> spark.range(0, 1000, 1, 1).select(x, y).select(
- ... sf.regr_syy("y", "x")
- ... ).show()
- +-----------------+
- | regr_syy(y, x)|
- +-----------------+
- |68250.53503811...|
- +-----------------+
+ Example 1: All pairs are non-null
+
+ >>> import pyspark.sql.functions as sf
+ >>> from pyspark.sql.types import IntegerType, StructField, StructType
+ >>> schema = StructType([
+ ... StructField('y', IntegerType(), True),
+ ... StructField('x', IntegerType(), True)
+ ... ])
+ >>> df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (2, 4)], schema)
+ >>> df.select(sf.regr_syy("y", "x")).show()
+ +------------------+
+ | regr_syy(y, x)|
+ +------------------+
+ |0.7499999999999999|
Review Comment:
Can you adjust the input values to make the result more stable?
I suspect it may output `0.75` in some envs.
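One way to stabilize it: pick y values whose mean and deviations are exact binary fractions (halves, quarters, integers), so every square and partial sum is exact regardless of summation order. A pure-Python analogue of `regr_syy` with hypothetical replacement values (a sketch, not run against Spark):

```python
# regr_syy is the sum of squared deviations of y over the non-null pairs.
# With y values whose mean (2.5) and deviations (±1.5, ±0.5) are exact
# binary fractions, every square and partial sum is exact, so the doctest
# output has no trailing-digit noise.
ys = [1, 2, 3, 4]                       # hypothetical replacement y values
mean = sum(ys) / len(ys)                # 2.5, exactly representable
syy = sum((y - mean) ** 2 for y in ys)  # 2.25 + 0.25 + 0.25 + 2.25
print(syy)                              # 5.0 exactly
```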
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]