[spark] branch master updated (18b5e93034b -> ccf027cf499)

2022-09-02 Thread yumwang
This is an automated email from the ASF dual-hosted git repository.

yumwang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 18b5e93034b [SPARK-40319][SQL] Remove duplicated query execution error method for PARSE_DATETIME_BY_NEW_PARSER
 add ccf027cf499 [SPARK-40312][CORE][DOCS] Add missing configuration documentation in Spark History Server

No new revisions were added by this update.

Summary of changes:
 docs/monitoring.md | 19 +++
 1 file changed, 19 insertions(+)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-40319][SQL] Remove duplicated query execution error method for PARSE_DATETIME_BY_NEW_PARSER

2022-09-02 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 18b5e93034b [SPARK-40319][SQL] Remove duplicated query execution error method for PARSE_DATETIME_BY_NEW_PARSER
18b5e93034b is described below

commit 18b5e93034b6a264e4c491ba4823b489190e741c
Author: Gengliang Wang 
AuthorDate: Sat Sep 3 08:02:25 2022 +0300

[SPARK-40319][SQL] Remove duplicated query execution error method for PARSE_DATETIME_BY_NEW_PARSER

### What changes were proposed in this pull request?

Remove duplicated query execution error method for PARSE_DATETIME_BY_NEW_PARSER

### Why are the changes needed?

Code cleanup.

### Does this PR introduce _any_ user-facing change?

No
### How was this patch tested?

Existing UT

Closes #37776 from gengliangwang/minorDeDuplicate.

Authored-by: Gengliang Wang 
Signed-off-by: Max Gekk 
---
 .../spark/sql/catalyst/util/DateTimeFormatterHelper.scala |  2 +-
 .../org/apache/spark/sql/errors/QueryExecutionErrors.scala| 11 ---
 2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeFormatterHelper.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeFormatterHelper.scala
index cb03ab2ee4a..96812cd65c1 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeFormatterHelper.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeFormatterHelper.scala
@@ -161,7 +161,7 @@ trait DateTimeFormatterHelper {
   } catch {
 case _: Throwable => throw e
   }
-      throw QueryExecutionErrors.failToFormatDateTimeInNewFormatterError(resultCandidate, e)
+      throw QueryExecutionErrors.failToParseDateTimeInNewParserError(resultCandidate, e)
   }
 
   /**
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
index 3dcefcc5368..f4ec70e81d9 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
@@ -1068,17 +1068,6 @@ private[sql] object QueryExecutionErrors extends QueryErrorsBase {
   e)
   }
 
-  def failToFormatDateTimeInNewFormatterError(
-  resultCandidate: String, e: Throwable): Throwable = {
-new SparkUpgradeException(
-  errorClass = "INCONSISTENT_BEHAVIOR_CROSS_VERSION",
-  errorSubClass = Some("PARSE_DATETIME_BY_NEW_PARSER"),
-  messageParameters = Array(
-toSQLValue(resultCandidate, StringType),
-toSQLConf(SQLConf.LEGACY_TIME_PARSER_POLICY.key)),
-  e)
-  }
-
   def failToRecognizePatternAfterUpgradeError(pattern: String, e: Throwable): Throwable = {
 new SparkUpgradeException(
   errorClass = "INCONSISTENT_BEHAVIOR_CROSS_VERSION",





[spark] branch master updated: [SPARK-40283][INFRA] Make MiMa check default exclude `private object` and bump `previousSparkVersion` to 3.3.0

2022-09-02 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 28c8073decc [SPARK-40283][INFRA] Make MiMa check default exclude `private object` and bump `previousSparkVersion` to 3.3.0
28c8073decc is described below

commit 28c8073decc471852f16e2a1a4012f7f3326199b
Author: yangjie01 
AuthorDate: Fri Sep 2 14:46:20 2022 -0700

[SPARK-40283][INFRA] Make MiMa check default exclude `private object` and bump `previousSparkVersion` to 3.3.0

### What changes were proposed in this pull request?
The main changes of this PR are as follows:
1. Make MiMa check exclude `private object` by default
2. Bump MiMa's `previousSparkVersion` to 3.3.0
3. Supplement missing `ProblemFilters` to `MimaExcludes`
4. Clean up expired rules and case matches
5. Correct the rules added by SPARK-38679 and SPARK-39506 that were misplaced

### Why are the changes needed?
To ensure that MiMa checks cover new APIs added in Spark 3.3.0.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?

Scala 2.12

```
dev/mima -Pscala-2.12
```

Scala 2.13

```
dev/change-scala-version.sh 2.13
dev/mima -Pscala-2.13
```

Closes #37741 from LuciferYang/SPARK-40283.

Lead-authored-by: yangjie01 
Co-authored-by: YangJie 
Signed-off-by: Dongjoon Hyun 
---
 project/MimaBuild.scala|   2 +-
 project/MimaExcludes.scala | 118 -
 .../apache/spark/tools/GenerateMIMAIgnore.scala|   3 +-
 3 files changed, 26 insertions(+), 97 deletions(-)

diff --git a/project/MimaBuild.scala b/project/MimaBuild.scala
index 2bd05e60c02..ec9ce94a6c4 100644
--- a/project/MimaBuild.scala
+++ b/project/MimaBuild.scala
@@ -86,7 +86,7 @@ object MimaBuild {
 
  def mimaSettings(sparkHome: File, projectRef: ProjectRef): Seq[Setting[_]] = {
 val organization = "org.apache.spark"
-val previousSparkVersion = "3.2.0"
+val previousSparkVersion = "3.3.0"
 val project = projectRef.project
 val id = "spark-" + project
 
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 3f3d8575477..939fa9a9f45 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -34,8 +34,8 @@ import com.typesafe.tools.mima.core.ProblemFilters._
  */
 object MimaExcludes {
 
-  // Exclude rules for 3.4.x
-  lazy val v34excludes = v33excludes ++ Seq(
+  // Exclude rules for 3.4.x from 3.3.0
+  lazy val v34excludes = defaultExcludes ++ Seq(
 
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.recommendation.ALS.checkedCast"),
     ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.recommendation.ALSModel.checkedCast"),

@@ -62,48 +62,34 @@ object MimaExcludes {
     ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.deploy.DeployMessages#RequestExecutors.copy"),
     ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.deploy.DeployMessages#RequestExecutors.copy$default$2"),
     ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.deploy.DeployMessages#RequestExecutors.this"),
-    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.deploy.DeployMessages#RequestExecutors.apply")
-  )
-
-  // Exclude rules for 3.3.x from 3.2.0
-  lazy val v33excludes = v32excludes ++ Seq(
-    // [SPARK-35672][CORE][YARN] Pass user classpath entries to executors using config instead of command line
-    // The followings are necessary for Scala 2.13.
-    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments.*"),
-    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend#Arguments.*"),
-    ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.executor.CoarseGrainedExecutorBackend$Arguments$"),
+    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.deploy.DeployMessages#RequestExecutors.apply"),

-    // [SPARK-37391][SQL] JdbcConnectionProvider tells if it modifies security context
-    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.jdbc.JdbcConnectionProvider.modifiesSecurityContext"),
+    // [SPARK-38679][CORE] Expose the number of partitions in a stage to TaskContext
+    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.TaskContext.numPartitions"),

-    // [SPARK-37780][SQL] QueryExecutionListener support SQLConf as constructor parameter
-    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.util.ExecutionListenerManager.this"),
-    // [SPARK-37786][SQL] StreamingQueryListener support use 
[spark] branch master updated: [SPARK-40163][SQL][TESTS][FOLLOWUP] Use Junit `Assert` api instead of Java `assert` in `JavaSparkSessionSuite.java`

2022-09-02 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 339d6380e1d [SPARK-40163][SQL][TESTS][FOLLOWUP] Use Junit `Assert` api instead of Java `assert` in `JavaSparkSessionSuite.java`
339d6380e1d is described below

commit 339d6380e1d00cac5821ddc44349cbcd3f58ad7d
Author: yangjie01 
AuthorDate: Fri Sep 2 14:49:23 2022 -0500

[SPARK-40163][SQL][TESTS][FOLLOWUP] Use Junit `Assert` api instead of Java `assert` in `JavaSparkSessionSuite.java`

### What changes were proposed in this pull request?
This PR is a minor fix of https://github.com/apache/spark/pull/37478: it changes the Java suite to make assertions through the JUnit API.

### Why are the changes needed?
Java suites should use the JUnit API to make assertions.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GitHub Actions

Closes #37772 from LuciferYang/SPARK-40163-followup.

Authored-by: yangjie01 
Signed-off-by: Sean Owen 
---
 .../src/test/java/test/org/apache/spark/sql/JavaSparkSessionSuite.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sql/core/src/test/java/test/org/apache/spark/sql/JavaSparkSessionSuite.java b/sql/core/src/test/java/test/org/apache/spark/sql/JavaSparkSessionSuite.java
index 00f744f4d86..b1df377936d 100644
--- a/sql/core/src/test/java/test/org/apache/spark/sql/JavaSparkSessionSuite.java
+++ b/sql/core/src/test/java/test/org/apache/spark/sql/JavaSparkSessionSuite.java
@@ -19,6 +19,7 @@ package test.org.apache.spark.sql;
 
 import org.apache.spark.sql.*;
 import org.junit.After;
+import org.junit.Assert;
 import org.junit.Test;
 
 import java.util.HashMap;
@@ -50,7 +51,7 @@ public class JavaSparkSessionSuite {
   .getOrCreate();
 
 for (Map.Entry e : map.entrySet()) {
-  assert(spark.conf().get(e.getKey()).equals(e.getValue().toString()));
+      Assert.assertEquals(spark.conf().get(e.getKey()), e.getValue().toString());
 }
   }
 }
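For context on why this change matters: Java's `assert` keyword is skipped at run time unless the JVM is started with `-ea`, so a failing check in a test can pass silently, whereas JUnit's `Assert` methods always evaluate and report. A minimal, self-contained sketch of the difference (it uses a stand-in for `org.junit.Assert.assertEquals` so it runs without the JUnit jar; names are illustrative, not Spark's code):

```java
public class AssertVsJUnitSketch {
    // Stand-in for org.junit.Assert.assertEquals: always evaluates,
    // regardless of JVM flags (unlike the `assert` keyword).
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    // Returns true iff a failing check is actually reported.
    static boolean reportsFailure() {
        try {
            assertEquals("a", "b");  // always evaluated, always throws here
            return false;
        } catch (AssertionError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // By contrast, `assert "a".equals("b");` would typically be a silent
        // no-op here, because assertions are disabled by default on the JVM.
        System.out.println("failure reported: " + reportsFailure());
    }
}
```

Run as-is this prints `failure reported: true`; the equivalent `assert` statement would mask the failure unless the test JVM happened to enable assertions.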





[spark] branch master updated (9ea2a97db06 -> ab086ee5951)

2022-09-02 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 9ea2a97db06 [SPARK-40251][BUILD][MLLIB] Upgrade dev.ludovic.netlib from 2.2.1 to 3.0.2 & breeze from 2.0 to 2.1.0
 add ab086ee5951 [SPARK-39906][INFRA][FOLLOWUP] Eliminate build warnings - sbt 0.13 shell syntax is deprecated; use slash syntax instead

No new revisions were added by this update.

Summary of changes:
 .github/workflows/benchmark.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)





[spark] branch master updated (ece79381e1a -> 9ea2a97db06)

2022-09-02 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from ece79381e1a [SPARK-40098][SQL][FOLLOWUP] Revert the pretty format of error messages in the Thrift Server
 add 9ea2a97db06 [SPARK-40251][BUILD][MLLIB] Upgrade dev.ludovic.netlib from 2.2.1 to 3.0.2 & breeze from 2.0 to 2.1.0

No new revisions were added by this update.

Summary of changes:
 dev/deps/spark-deps-hadoop-2-hive-2.3  |  12 +-
 dev/deps/spark-deps-hadoop-3-hive-2.3  |  12 +-
 .../benchmarks/BLASBenchmark-jdk11-results.txt | 208 -
 .../benchmarks/BLASBenchmark-jdk17-results.txt | 260 ++---
 mllib-local/benchmarks/BLASBenchmark-results.txt   | 260 ++---
 .../scala/org/apache/spark/ml/linalg/BLAS.scala|   4 +-
 .../org/apache/spark/ml/linalg/BLASBenchmark.scala |  22 +-
 .../org/apache/spark/mllib/linalg/ARPACK.scala |   4 +-
 .../scala/org/apache/spark/mllib/linalg/BLAS.scala |   4 +-
 .../org/apache/spark/mllib/linalg/LAPACK.scala |   4 +-
 pom.xml|   5 +-
 11 files changed, 401 insertions(+), 394 deletions(-)





[spark] branch master updated: [SPARK-40098][SQL][FOLLOWUP] Revert the pretty format of error messages in the Thrift Server

2022-09-02 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new ece79381e1a [SPARK-40098][SQL][FOLLOWUP] Revert the pretty format of error messages in the Thrift Server
ece79381e1a is described below

commit ece79381e1a8eb613aa225f8cca76e704e9f4330
Author: Max Gekk 
AuthorDate: Fri Sep 2 19:29:25 2022 +0300

[SPARK-40098][SQL][FOLLOWUP] Revert the pretty format of error messages in the Thrift Server

### What changes were proposed in this pull request?
In the PR, I propose:
1. Output errors in the PRETTY format in the same way as before the PR https://github.com/apache/spark/pull/37520.
2. Do not output non-JSON elements in the MINIMAL and STANDARD formats.

### Why are the changes needed?
1. To not break existing apps that might expect text errors in a particular format.
2. To not output extra text when the Thrift Server outputs errors in a JSON format.

### Does this PR introduce _any_ user-facing change?
Yes.

### How was this patch tested?
By running the modified tests:
```
$ build/sbt -Phive -Phive-thriftserver "test:testOnly *ThriftServerWithSparkContextInBinarySuite"
```

Closes #37773 from MaxGekk/thrift-serv-json-errors-followup.

Authored-by: Max Gekk 
Signed-off-by: Max Gekk 
---
 .../spark/sql/hive/thriftserver/HiveThriftServerErrors.scala   | 7 +--
 .../sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala  | 6 +++---
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServerErrors.scala b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServerErrors.scala
index eaec382e782..8a8bdd4d38e 100644
--- a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServerErrors.scala
+++ b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServerErrors.scala
@@ -37,9 +37,12 @@ object HiveThriftServerErrors {
   }
 
   def runningQueryError(e: Throwable, format: ErrorMessageFormat.Value): Throwable = e match {
+    case st: SparkThrowable if format == ErrorMessageFormat.PRETTY =>
+      val errorClassPrefix = Option(st.getErrorClass).map(e => s"[$e] ").getOrElse("")
+      new HiveSQLException(
+        s"Error running query: $errorClassPrefix${st.toString}", st.getSqlState, st)
     case st: SparkThrowable with Throwable =>
-      val message = SparkThrowableHelper.getMessage(st, format)
-      new HiveSQLException(s"Error running query: $message", st.getSqlState, st)
+      new HiveSQLException(SparkThrowableHelper.getMessage(st, format), st.getSqlState, st)
     case _ => new HiveSQLException(s"Error running query: ${e.toString}", e)
   }
 
diff --git a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
index 3a38efd27cb..b0db14b4228 100644
--- a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
+++ b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
@@ -162,7 +162,7 @@ trait ThriftServerWithSparkContextSuite extends SharedThriftServer {
   val e1 = intercept[HiveSQLException](exec(sql))
   // scalastyle:off line.size.limit
   assert(e1.getMessage ===
-"""Error running query: [DIVIDE_BY_ZERO] Division by zero. Use 
`try_divide` to tolerate divisor being 0 and return NULL instead. If necessary 
set "spark.sql.ansi.enabled" to "false" to bypass this error.
+"""Error running query: [DIVIDE_BY_ZERO] 
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. 
Use `try_divide` to tolerate divisor being 0 and return NULL instead. If 
necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
   |== SQL(line 1, position 8) ==
   |select 1 / 0
   |   ^
@@ -171,7 +171,7 @@ trait ThriftServerWithSparkContextSuite extends SharedThriftServer {
   exec(s"set ${SQLConf.ERROR_MESSAGE_FORMAT.key}=${ErrorMessageFormat.MINIMAL}")
   val e2 = intercept[HiveSQLException](exec(sql))
   assert(e2.getMessage ===
-"""Error running query: {
+"""{
   |  "errorClass" : "DIVIDE_BY_ZERO",
   |  "sqlState" : "22012",
   |  "messageParameters" : {
@@ -189,7 +189,7 @@ trait ThriftServerWithSparkContextSuite extends SharedThriftServer {
   exec(s"set ${SQLConf.ERROR_MESSAGE_FORMAT.key}=${ErrorMessageFormat.STANDARD}")
   val e3 = 

[spark] branch master updated (96831bbb674 -> fb261c2d5df)

2022-09-02 Thread gengliang
This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 96831bbb674 [SPARK-40228][SQL] Do not simplify multiLike if child is not a cheap expression
 add fb261c2d5df [SPARK-40310][SQL] try_sum() should throw the exceptions from its child

No new revisions were added by this update.

Summary of changes:
 .../sql/catalyst/analysis/FunctionRegistry.scala   |   2 +-
 .../sql/catalyst/expressions/aggregate/Sum.scala   | 139 -
 .../resources/sql-tests/inputs/try_aggregates.sql  |   9 ++
 .../sql-tests/results/ansi/try_aggregates.sql.out  | 128 +++
 .../sql-tests/results/try_aggregates.sql.out   |  70 +++
 5 files changed, 257 insertions(+), 91 deletions(-)





[spark] branch master updated (2ef2ad27faa -> 96831bbb674)

2022-09-02 Thread yumwang
This is an automated email from the ASF dual-hosted git repository.

yumwang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 2ef2ad27faa [SPARK-39284][FOLLOW] Add Groupby.mad to API references
 add 96831bbb674 [SPARK-40228][SQL] Do not simplify multiLike if child is not a cheap expression

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/catalyst/optimizer/Optimizer.scala  |  2 +-
 .../apache/spark/sql/catalyst/optimizer/expressions.scala| 12 
 .../sql/catalyst/optimizer/LikeSimplificationSuite.scala | 11 +++
 3 files changed, 20 insertions(+), 5 deletions(-)





[spark] branch master updated: [SPARK-39284][FOLLOW] Add Groupby.mad to API references

2022-09-02 Thread ruifengz
This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 2ef2ad27faa [SPARK-39284][FOLLOW] Add Groupby.mad to API references
2ef2ad27faa is described below

commit 2ef2ad27faa2599c687f7ead2a2855fa9b7495a3
Author: Ruifeng Zheng 
AuthorDate: Fri Sep 2 17:30:41 2022 +0800

[SPARK-39284][FOLLOW] Add Groupby.mad to API references

### What changes were proposed in this pull request?
Add `Groupby.mad` to API references

### Why are the changes needed?
`Groupby.mad` was implemented in https://github.com/apache/spark/pull/36660, but I forgot to add it to the doc.

### Does this PR introduce _any_ user-facing change?
Yes, this API will be listed in the references.

### How was this patch tested?
Existing doc building.

Closes #37767 from zhengruifeng/ps_ref_groupby_mad.

Authored-by: Ruifeng Zheng 
Signed-off-by: Ruifeng Zheng 
---
 python/docs/source/reference/pyspark.pandas/groupby.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/python/docs/source/reference/pyspark.pandas/groupby.rst 
b/python/docs/source/reference/pyspark.pandas/groupby.rst
index 2aa39d25765..6d8eed8e684 100644
--- a/python/docs/source/reference/pyspark.pandas/groupby.rst
+++ b/python/docs/source/reference/pyspark.pandas/groupby.rst
@@ -68,6 +68,7 @@ Computations / Descriptive Stats
GroupBy.filter
GroupBy.first
GroupBy.last
+   GroupBy.mad
GroupBy.max
GroupBy.mean
GroupBy.median





[spark] branch master updated (fd0498f81df -> b61bfde61d3)

2022-09-02 Thread ruifengz
This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from fd0498f81df [SPARK-40304][K8S][TESTS] Add `decomTestTag` to K8s Integration Test
 add b61bfde61d3 [SPARK-40210][PYTHON] Fix math atan2, hypot, pow and pmod float argument call

No new revisions were added by this update.

Summary of changes:
 python/pyspark/sql/functions.py| 59 --
 python/pyspark/sql/tests/test_functions.py | 10 +
 2 files changed, 17 insertions(+), 52 deletions(-)

