(spark) branch master updated: [SPARK-48239][INFRA][FOLLOWUP] install the missing `jq` library

2024-05-24 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 416d7f24fc35 [SPARK-48239][INFRA][FOLLOWUP] install the missing `jq` library
416d7f24fc35 is described below

commit 416d7f24fc354e912773ceb160210ad6a0c5fe99
Author: Wenchen Fan 
AuthorDate: Fri May 24 20:53:00 2024 -0700

[SPARK-48239][INFRA][FOLLOWUP] install the missing `jq` library

### What changes were proposed in this pull request?

This is a follow-up of https://github.com/apache/spark/pull/46534. We missed the `jq` library, which is needed to create git tags.

### Why are the changes needed?

Fixes a bug: the release scripts need `jq` to create git tags.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

manual

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #46743 from cloud-fan/script.

Authored-by: Wenchen Fan 
Signed-off-by: Wenchen Fan 
---
 dev/create-release/release-util.sh | 3 +++
 dev/create-release/spark-rm/Dockerfile | 1 +
 2 files changed, 4 insertions(+)

diff --git a/dev/create-release/release-util.sh b/dev/create-release/release-util.sh
index 0394fb49c2fa..b5edbf40d487 100755
--- a/dev/create-release/release-util.sh
+++ b/dev/create-release/release-util.sh
@@ -128,6 +128,9 @@ function get_release_info {
     RC_COUNT=1
   fi
 
+  if [ "$GIT_BRANCH" = "master" ]; then
+    RELEASE_VERSION="$RELEASE_VERSION-preview1"
+  fi
   export NEXT_VERSION
   export RELEASE_VERSION=$(read_config "Release" "$RELEASE_VERSION")
 
diff --git a/dev/create-release/spark-rm/Dockerfile b/dev/create-release/spark-rm/Dockerfile
index adaa4df3f579..5fdaf58feee2 100644
--- a/dev/create-release/spark-rm/Dockerfile
+++ b/dev/create-release/spark-rm/Dockerfile
@@ -58,6 +58,7 @@ RUN apt-get update && apt-get install -y \
     texinfo \
     texlive-latex-extra \
     qpdf \
+    jq \
     r-base \
     ruby \
     ruby-dev \





(spark) branch master updated (1a536f01ead3 -> 6cd1ccc56321)

2024-05-24 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


 from 1a536f01ead3 [SPARK-48407][SQL][DOCS] Teradata: Document Type Conversion rules between Spark SQL and teradata
 add 6cd1ccc56321 [SPARK-48394][CORE] Cleanup mapIdToMapIndex on mapoutput unregister

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/MapOutputTracker.scala  | 26 ++
 .../org/apache/spark/MapOutputTrackerSuite.scala   | 55 ++
 2 files changed, 72 insertions(+), 9 deletions(-)





(spark) branch master updated: [SPARK-48407][SQL][DOCS] Teradata: Document Type Conversion rules between Spark SQL and teradata

2024-05-24 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 1a536f01ead3 [SPARK-48407][SQL][DOCS] Teradata: Document Type Conversion rules between Spark SQL and teradata
1a536f01ead3 is described below

commit 1a536f01ead35b770467381c476e093338d81e7c
Author: Kent Yao 
AuthorDate: Fri May 24 15:56:19 2024 -0700

[SPARK-48407][SQL][DOCS] Teradata: Document Type Conversion rules between Spark SQL and teradata

### What changes were proposed in this pull request?

This PR adds documentation for the built-in Teradata JDBC dialect's data type conversion rules.

### Why are the changes needed?

doc improvement
### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?


![image](https://github.com/apache/spark/assets/8326978/e1ec0de5-cd83-4339-896a-50c58ad01c4d)

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #46728 from yaooqinn/SPARK-48407.

Authored-by: Kent Yao 
Signed-off-by: Dongjoon Hyun 
---
 docs/sql-data-sources-jdbc.md | 214 ++
 1 file changed, 214 insertions(+)

diff --git a/docs/sql-data-sources-jdbc.md b/docs/sql-data-sources-jdbc.md
index 371dc0595071..9ffd96cd40ee 100644
--- a/docs/sql-data-sources-jdbc.md
+++ b/docs/sql-data-sources-jdbc.md
@@ -1991,3 +1991,217 @@ The Spark Catalyst data types below are not supported with suitable DB2 types.
 - NullType
 - ObjectType
 - VariantType
+
+### Mapping Spark SQL Data Types from Teradata
+
+The below table describes the data type conversions from Teradata data types to Spark SQL Data Types,
+when reading data from a Teradata table using the built-in jdbc data source with the [Teradata JDBC Driver](https://mvnrepository.com/artifact/com.teradata.jdbc/terajdbc)
+as the activated JDBC Driver.
+
+<table>
+  <thead>
+    <tr>
+      <th>Teradata Data Type</th>
+      <th>Spark SQL Data Type</th>
+      <th>Remarks</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>BYTEINT</td>
+      <td>ByteType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>SMALLINT</td>
+      <td>ShortType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>INTEGER, INT</td>
+      <td>IntegerType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>BIGINT</td>
+      <td>LongType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>REAL, DOUBLE PRECISION, FLOAT</td>
+      <td>DoubleType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>DECIMAL, NUMERIC, NUMBER</td>
+      <td>DecimalType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>DATE</td>
+      <td>DateType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>TIMESTAMP, TIMESTAMP WITH TIME ZONE</td>
+      <td>TimestampType</td>
+      <td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td>
+    </tr>
+    <tr>
+      <td>TIMESTAMP, TIMESTAMP WITH TIME ZONE</td>
+      <td>TimestampNTZType</td>
+      <td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td>
+    </tr>
+    <tr>
+      <td>TIME, TIME WITH TIME ZONE</td>
+      <td>TimestampType</td>
+      <td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td>
+    </tr>
+    <tr>
+      <td>TIME, TIME WITH TIME ZONE</td>
+      <td>TimestampNTZType</td>
+      <td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td>
+    </tr>
+    <tr>
+      <td>CHARACTER(n), CHAR(n), GRAPHIC(n)</td>
+      <td>CharType(n)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>VARCHAR(n), VARGRAPHIC(n)</td>
+      <td>VarcharType(n)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>BYTE(n), VARBYTE(n)</td>
+      <td>BinaryType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>CLOB</td>
+      <td>StringType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>BLOB</td>
+      <td>BinaryType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>INTERVAL Data Types</td>
+      <td>-</td>
+      <td>The INTERVAL data types are unknown yet</td>
+    </tr>
+    <tr>
+      <td>Period Data Types, ARRAY, UDT</td>
+      <td>-</td>
+      <td>Not Supported</td>
+    </tr>
+  </tbody>
+</table>
+
+### Mapping Spark SQL Data Types to Teradata
+
+The below table describes the data type conversions from Spark SQL Data Types to Teradata data types,
+when creating, altering, or writing data to a Teradata table using the built-in jdbc data source with
+the [Teradata JDBC Driver](https://mvnrepository.com/artifact/com.teradata.jdbc/terajdbc) as the activated JDBC Driver.
+
+<table>
+  <thead>
+    <tr>
+      <th>Spark SQL Data Type</th>
+      <th>Teradata Data Type</th>
+      <th>Remarks</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>BooleanType</td>
+      <td>CHAR(1)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>ByteType</td>
+      <td>BYTEINT</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>ShortType</td>
+      <td>SMALLINT</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>IntegerType</td>
+      <td>INTEGER</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>LongType</td>
+      <td>BIGINT</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>FloatType</td>
+      <td>REAL</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>DoubleType</td>
+      <td>DOUBLE PRECISION</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>DecimalType(p, s)</td>
+      <td>DECIMAL(p,s)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>DateType</td>
+      <td>DATE</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>TimestampType</td>
+      <td>TIMESTAMP</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>TimestampNTZType</td>
+      <td>TIMESTAMP</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>StringType</td>
+      <td>VARCHAR(255)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>BinaryType</td>
+      <td>BLOB</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>CharType(n)</td>
+      <td>CHAR(n)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>VarcharType(n)</td>
+      <td>VARCHAR(n)</td>
+      <td></td>
+    </tr>
+  </tbody>
+</table>
+
+The 
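
To make the two mapping tables above concrete, here is a minimal read/write sketch against the built-in jdbc data source, assuming a reachable Teradata instance with its JDBC driver on the classpath; the host, credentials, and table names are hypothetical, while `url`, `dbtable`, and `preferTimestampNTZ` are standard JDBC source options:

{{{
// Read: with preferTimestampNTZ=true, TIMESTAMP and TIME columns arrive as
// TimestampNTZType rather than TimestampType (see the Remarks column above).
val orders = spark.read
  .format("jdbc")
  .option("url", "jdbc:teradata://example-host/DBS_PORT=1025") // hypothetical host
  .option("dbtable", "sales.orders")                           // hypothetical table
  .option("user", "dbc")
  .option("password", "dbc")
  .option("preferTimestampNTZ", "true")
  .load()

// Write: Spark SQL types map to Teradata types per the second table,
// e.g. StringType -> VARCHAR(255) and BinaryType -> BLOB.
orders.write
  .format("jdbc")
  .option("url", "jdbc:teradata://example-host/DBS_PORT=1025")
  .option("dbtable", "sales.orders_copy")
  .option("user", "dbc")
  .option("password", "dbc")
  .save()
}}}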

(spark) branch master updated: [SPARK-48325][CORE] Always specify messages in ExecutorRunner.killProcess

2024-05-24 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 7d96334902f2 [SPARK-48325][CORE] Always specify messages in ExecutorRunner.killProcess
7d96334902f2 is described below

commit 7d96334902f22a80af63ce1253d5abda63178c4e
Author: Bo Zhang 
AuthorDate: Fri May 24 15:54:21 2024 -0700

[SPARK-48325][CORE] Always specify messages in ExecutorRunner.killProcess

### What changes were proposed in this pull request?
This change is to always specify the message in `ExecutorRunner.killProcess`.

### Why are the changes needed?
This is to get the occurrence rate for different cases when killing the executor process, in order to analyze executor running stability.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
N/A

### Was this patch authored or co-authored using generative AI tooling?
No

Closes #46641 from bozhang2820/spark-48325.

Authored-by: Bo Zhang 
Signed-off-by: Dongjoon Hyun 
---
 .../scala/org/apache/spark/deploy/worker/ExecutorRunner.scala  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala b/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala
index 7bb8b74eb021..bd98f19cdb60 100644
--- a/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala
@@ -88,7 +88,7 @@ private[deploy] class ExecutorRunner(
   if (state == ExecutorState.LAUNCHING || state == ExecutorState.RUNNING) {
 state = ExecutorState.FAILED
   }
-  killProcess(Some("Worker shutting down")) }
+  killProcess("Worker shutting down") }
   }
 
   /**
@@ -96,7 +96,7 @@ private[deploy] class ExecutorRunner(
*
* @param message the exception message which caused the executor's death
*/
-  private def killProcess(message: Option[String]): Unit = {
+  private def killProcess(message: String): Unit = {
 var exitCode: Option[Int] = None
 if (process != null) {
   logInfo("Killing process!")
@@ -113,7 +113,7 @@ private[deploy] class ExecutorRunner(
   }
 }
 try {
-      worker.send(ExecutorStateChanged(appId, execId, state, message, exitCode))
+      worker.send(ExecutorStateChanged(appId, execId, state, Some(message), exitCode))
 } catch {
   case e: IllegalStateException => logWarning(log"${MDC(ERROR, e.getMessage())}", e)
 }
@@ -206,11 +206,11 @@ private[deploy] class ExecutorRunner(
   case interrupted: InterruptedException =>
 logInfo("Runner thread for executor " + fullId + " interrupted")
 state = ExecutorState.KILLED
-killProcess(None)
+killProcess(s"Runner thread for executor $fullId interrupted")
   case e: Exception =>
 logError("Error running executor", e)
 state = ExecutorState.FAILED
-killProcess(Some(e.toString))
+killProcess(s"Error running executor: $e")
 }
   }
 }





(spark) tag v4.0.0-preview1-rc2 created (now 7cfe5a6e44e8)

2024-05-24 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to tag v4.0.0-preview1-rc2
in repository https://gitbox.apache.org/repos/asf/spark.git


  at 7cfe5a6e44e8 (commit)
This tag includes the following new commits:

 new 7cfe5a6e44e8 Preparing Spark release v4.0.0-preview1-rc2

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






(spark) 01/01: Preparing Spark release v4.0.0-preview1-rc2

2024-05-24 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to tag v4.0.0-preview1-rc2
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 7cfe5a6e44e8d7079ae29ad3e2cee7231cd3dc66
Author: Wenchen Fan 
AuthorDate: Fri May 24 18:53:15 2024 +

Preparing Spark release v4.0.0-preview1-rc2
---
 R/pkg/R/sparkR.R   | 4 ++--
 assembly/pom.xml   | 2 +-
 common/kvstore/pom.xml | 2 +-
 common/network-common/pom.xml  | 2 +-
 common/network-shuffle/pom.xml | 2 +-
 common/network-yarn/pom.xml| 2 +-
 common/sketch/pom.xml  | 2 +-
 common/tags/pom.xml| 2 +-
 common/unsafe/pom.xml  | 2 +-
 common/utils/pom.xml   | 2 +-
 common/variant/pom.xml | 2 +-
 connector/avro/pom.xml | 2 +-
 connector/connect/client/jvm/pom.xml   | 2 +-
 connector/connect/common/pom.xml   | 2 +-
 connector/connect/server/pom.xml   | 2 +-
 connector/docker-integration-tests/pom.xml | 2 +-
 connector/kafka-0-10-assembly/pom.xml  | 2 +-
 connector/kafka-0-10-sql/pom.xml   | 2 +-
 connector/kafka-0-10-token-provider/pom.xml| 2 +-
 connector/kafka-0-10/pom.xml   | 2 +-
 connector/kinesis-asl-assembly/pom.xml | 2 +-
 connector/kinesis-asl/pom.xml  | 2 +-
 connector/profiler/pom.xml | 2 +-
 connector/protobuf/pom.xml | 2 +-
 connector/spark-ganglia-lgpl/pom.xml   | 2 +-
 core/pom.xml   | 2 +-
 docs/_config.yml   | 6 +++---
 examples/pom.xml   | 2 +-
 graphx/pom.xml | 2 +-
 hadoop-cloud/pom.xml   | 2 +-
 launcher/pom.xml   | 2 +-
 mllib-local/pom.xml| 2 +-
 mllib/pom.xml  | 2 +-
 pom.xml| 2 +-
 python/pyspark/version.py  | 2 +-
 repl/pom.xml   | 2 +-
 resource-managers/kubernetes/core/pom.xml  | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml | 2 +-
 resource-managers/yarn/pom.xml | 2 +-
 sql/api/pom.xml| 2 +-
 sql/catalyst/pom.xml   | 2 +-
 sql/core/pom.xml   | 2 +-
 sql/hive-thriftserver/pom.xml  | 2 +-
 sql/hive/pom.xml   | 2 +-
 streaming/pom.xml  | 2 +-
 tools/pom.xml  | 2 +-
 46 files changed, 49 insertions(+), 49 deletions(-)

diff --git a/R/pkg/R/sparkR.R b/R/pkg/R/sparkR.R
index 0be7e5da24d2..478acf514ef3 100644
--- a/R/pkg/R/sparkR.R
+++ b/R/pkg/R/sparkR.R
@@ -456,8 +456,8 @@ sparkR.session <- function(
 
   # Check if version number of SparkSession matches version number of SparkR package
   jvmVersion <- callJMethod(sparkSession, "version")
-  # Remove -SNAPSHOT from jvm versions
-  jvmVersionStrip <- gsub("-SNAPSHOT", "", jvmVersion, fixed = TRUE)
+  # Remove -preview1 from jvm versions
+  jvmVersionStrip <- gsub("-preview1", "", jvmVersion, fixed = TRUE)
   rPackageVersion <- paste0(packageVersion("SparkR"))
 
   if (jvmVersionStrip != rPackageVersion) {
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 58e7ae5bb0c7..417e7c23ca9f 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.13</artifactId>
-    <version>4.0.0-SNAPSHOT</version>
+    <version>4.0.0-preview1</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 046648e9c2ae..e1a4497387a2 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
    <artifactId>spark-parent_2.13</artifactId>
-    <version>4.0.0-SNAPSHOT</version>
+    <version>4.0.0-preview1</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index cdb5bd72158a..d8dff6996cec 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.13</artifactId>
-    <version>4.0.0-SNAPSHOT</version>
+    <version>4.0.0-preview1</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 0f7036ef746c..d50f8ad6d8ce 100644
--- a/common/network-shuffle/pom.xml
+++ 

(spark) tag v4.0.0-preview-rc1 deleted (was 9fec87d16a04)

2024-05-24 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to tag v4.0.0-preview-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git


*** WARNING: tag v4.0.0-preview-rc1 was deleted! ***

 was 9fec87d16a04 Preparing Spark release v4.0.0-preview-rc1

This change permanently discards the following revisions:

 discard 9fec87d16a04 Preparing Spark release v4.0.0-preview-rc1





(spark) branch master updated: [SPARK-47579][SQL][FOLLOWUP] Restore the `--help` print format of spark sql shell

2024-05-24 Thread gengliang
This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 3cb30c2366b2 [SPARK-47579][SQL][FOLLOWUP] Restore the `--help` print format of spark sql shell
3cb30c2366b2 is described below

commit 3cb30c2366b27c5a65ec02121c30bd1a4eb20584
Author: Kent Yao 
AuthorDate: Fri May 24 09:43:03 2024 -0700

[SPARK-47579][SQL][FOLLOWUP] Restore the `--help` print format of spark sql shell

### What changes were proposed in this pull request?

Restore the print format of spark sql shell

### Why are the changes needed?

bugfix

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

manually


![image](https://github.com/apache/spark/assets/8326978/17b9d009-5d93-4d84-9367-7308b4cda426)


![image](https://github.com/apache/spark/assets/8326978/a5e333bd-0e22-4d5a-83f1-843767f6d5f5)

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #46735 from yaooqinn/SPARK-47579.

Authored-by: Kent Yao 
Signed-off-by: Gengliang Wang 
---
 common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala | 1 -
 core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala | 3 ++-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala b/common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala
index 1f67a211c01f..99fc58b03503 100644
--- a/common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala
+++ b/common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala
@@ -585,7 +585,6 @@ object LogKeys {
   case object SESSION_KEY extends LogKey
   case object SET_CLIENT_INFO_REQUEST extends LogKey
   case object SHARD_ID extends LogKey
-  case object SHELL_OPTIONS extends LogKey
   case object SHORT_USER_NAME extends LogKey
   case object SHUFFLE_BLOCK_INFO extends LogKey
   case object SHUFFLE_DB_BACKEND_KEY extends LogKey
diff --git a/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala b/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
index 61235a701907..e47596a6ae43 100644
--- a/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
@@ -588,7 +588,8 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
 )
 
 if (SparkSubmit.isSqlShell(mainClass)) {
-  logInfo(log"CLI options:\n${MDC(SHELL_OPTIONS, getSqlShellOptions())}")
+  logInfo("CLI options:")
+  logInfo(getSqlShellOptions())
 }
 
 throw SparkUserAppException(exitCode)





(spark) branch master updated (bd95040c3170 -> 7ae939ae12a6)

2024-05-24 Thread ulyssesyou
This is an automated email from the ASF dual-hosted git repository.

ulyssesyou pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from bd95040c3170 [SPARK-48412][PYTHON] Refactor data type json parse
 add 7ae939ae12a6 [SPARK-48168][SQL] Add bitwise shifting operators support

No new revisions were added by this update.

Summary of changes:
 .../explain-results/function_shiftleft.explain |   2 +-
 .../explain-results/function_shiftright.explain|   2 +-
 .../function_shiftrightunsigned.explain|   2 +-
 .../grouping_and_grouping_id.explain   |   2 +-
 .../spark/sql/catalyst/parser/SqlBaseLexer.g4  |  40 +-
 .../spark/sql/catalyst/parser/SqlBaseParser.g4 |   8 ++
 .../sql/catalyst/analysis/FunctionRegistry.scala   |   3 +
 .../sql/catalyst/expressions/mathExpressions.scala | 134 +++--
 .../spark/sql/catalyst/parser/AstBuilder.scala |  11 ++
 .../spark/sql/catalyst/SQLKeywordSuite.scala   |   2 +-
 .../sql-functions/sql-expression-schema.md |   9 +-
 .../sql-tests/analyzer-results/bitwise.sql.out | 112 +
 .../analyzer-results/group-analytics.sql.out   |  10 +-
 .../analyzer-results/grouping_set.sql.out  |   6 +-
 .../postgreSQL/groupingsets.sql.out|  44 +++
 .../analyzer-results/postgreSQL/int2.sql.out   |   4 +-
 .../analyzer-results/postgreSQL/int4.sql.out   |   4 +-
 .../analyzer-results/postgreSQL/int8.sql.out   |   4 +-
 .../udf/udf-group-analytics.sql.out|  10 +-
 .../test/resources/sql-tests/inputs/bitwise.sql|  12 ++
 .../resources/sql-tests/results/bitwise.sql.out| 128 
 .../sql-tests/results/postgreSQL/int2.sql.out  |   4 +-
 .../sql-tests/results/postgreSQL/int4.sql.out  |   4 +-
 .../sql-tests/results/postgreSQL/int8.sql.out  |   2 +-
 .../approved-plans-v1_4/q17/explain.txt|   2 +-
 .../approved-plans-v1_4/q25/explain.txt|   2 +-
 .../approved-plans-v1_4/q27.sf100/explain.txt  |   2 +-
 .../approved-plans-v1_4/q27/explain.txt|   2 +-
 .../approved-plans-v1_4/q29/explain.txt|   2 +-
 .../approved-plans-v1_4/q36.sf100/explain.txt  |   2 +-
 .../approved-plans-v1_4/q36/explain.txt|   2 +-
 .../approved-plans-v1_4/q39a/explain.txt   |   2 +-
 .../approved-plans-v1_4/q39b/explain.txt   |   2 +-
 .../approved-plans-v1_4/q49/explain.txt|   6 +-
 .../approved-plans-v1_4/q5/explain.txt |   2 +-
 .../approved-plans-v1_4/q64/explain.txt|   4 +-
 .../approved-plans-v1_4/q70.sf100/explain.txt  |   2 +-
 .../approved-plans-v1_4/q70/explain.txt|   2 +-
 .../approved-plans-v1_4/q72/explain.txt|   2 +-
 .../approved-plans-v1_4/q85/explain.txt|   2 +-
 .../approved-plans-v1_4/q86.sf100/explain.txt  |   2 +-
 .../approved-plans-v1_4/q86/explain.txt|   2 +-
 .../approved-plans-v2_7/q24.sf100/explain.txt  |   2 +-
 .../approved-plans-v2_7/q49/explain.txt|   6 +-
 .../approved-plans-v2_7/q5a/explain.txt|   2 +-
 .../approved-plans-v2_7/q64/explain.txt|   4 +-
 .../approved-plans-v2_7/q72/explain.txt|   2 +-
 .../sql/expressions/ExpressionInfoSuite.scala  |   5 +-
 48 files changed, 469 insertions(+), 153 deletions(-)
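
The diffstat above shows lexer, parser, and expression changes but no usage; as a hedged sketch, assuming the new operators parse to the existing shiftleft/shiftright/shiftrightunsigned expressions that the updated explain files reference:

{{{
// Run in spark-shell; the operator spellings <<, >> and >>> are the assumption here.
spark.sql("SELECT 1 << 3, 16 >> 2, -8 >>> 1").show()
// The long-standing function forms these are expected to be equivalent to:
spark.sql("SELECT shiftleft(1, 3), shiftright(16, 2), shiftrightunsigned(-8, 1)").show()
}}}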





(spark) branch master updated: [SPARK-48412][PYTHON] Refactor data type json parse

2024-05-24 Thread ruifengz
This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new bd95040c3170 [SPARK-48412][PYTHON] Refactor data type json parse
bd95040c3170 is described below

commit bd95040c3170aaed61ee5e9090d1b8580351ee80
Author: Ruifeng Zheng 
AuthorDate: Fri May 24 17:36:46 2024 +0800

[SPARK-48412][PYTHON] Refactor data type json parse

### What changes were proposed in this pull request?
Refactor data type json parse

### Why are the changes needed?
The `_all_atomic_types` mapping causes confusion:

- it is only used in JSON parsing, so it should use `jsonValue` instead of `typeName` (this also leaves `typeName` inconsistent with Scala; to be fixed in a separate PR);
- not all atomic types are included in it (e.g. `YearMonthIntervalType`);
- not all atomic types should be placed in it (e.g. `VarcharType`, which has to be excluded here and there).

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
ci, added tests

### Was this patch authored or co-authored using generative AI tooling?
no

Closes #46733 from zhengruifeng/refactor_json_parse.

Authored-by: Ruifeng Zheng 
Signed-off-by: Ruifeng Zheng 
---
 python/pyspark/sql/tests/test_types.py | 42 ++---
 python/pyspark/sql/types.py| 57 --
 2 files changed, 79 insertions(+), 20 deletions(-)

diff --git a/python/pyspark/sql/tests/test_types.py b/python/pyspark/sql/tests/test_types.py
index 6c64a9471363..d665053d9490 100644
--- a/python/pyspark/sql/tests/test_types.py
+++ b/python/pyspark/sql/tests/test_types.py
@@ -1136,12 +1136,46 @@ class TypesTestsMixin:
         self.assertRaises(IndexError, lambda: struct1[9])
         self.assertRaises(TypeError, lambda: struct1[9.9])
 
+    def test_parse_datatype_json_string(self):
+        from pyspark.sql.types import _parse_datatype_json_string
+
+        for dataType in [
+            StringType(),
+            CharType(5),
+            VarcharType(10),
+            BinaryType(),
+            BooleanType(),
+            DecimalType(),
+            DecimalType(10, 2),
+            FloatType(),
+            DoubleType(),
+            ByteType(),
+            ShortType(),
+            IntegerType(),
+            LongType(),
+            DateType(),
+            TimestampType(),
+            TimestampNTZType(),
+            NullType(),
+            VariantType(),
+            YearMonthIntervalType(),
+            YearMonthIntervalType(YearMonthIntervalType.YEAR),
+            YearMonthIntervalType(YearMonthIntervalType.YEAR, YearMonthIntervalType.MONTH),
+            DayTimeIntervalType(),
+            DayTimeIntervalType(DayTimeIntervalType.DAY),
+            DayTimeIntervalType(DayTimeIntervalType.HOUR, DayTimeIntervalType.SECOND),
+            CalendarIntervalType(),
+        ]:
+            json_str = dataType.json()
+            parsed = _parse_datatype_json_string(json_str)
+            self.assertEqual(dataType, parsed)
+
     def test_parse_datatype_string(self):
-        from pyspark.sql.types import _all_atomic_types, _parse_datatype_string
+        from pyspark.sql.types import _all_mappable_types, _parse_datatype_string
+
+        for k, t in _all_mappable_types.items():
+            self.assertEqual(t(), _parse_datatype_string(k))
 
-        for k, t in _all_atomic_types.items():
-            if k != "varchar" and k != "char":
-                self.assertEqual(t(), _parse_datatype_string(k))
         self.assertEqual(IntegerType(), _parse_datatype_string("int"))
         self.assertEqual(StringType(), _parse_datatype_string("string"))
         self.assertEqual(CharType(1), _parse_datatype_string("char(1)"))
diff --git a/python/pyspark/sql/types.py b/python/pyspark/sql/types.py
index 17b019240f82..b9db59e0a58a 100644
--- a/python/pyspark/sql/types.py
+++ b/python/pyspark/sql/types.py
@@ -1756,13 +1756,45 @@ _atomic_types: List[Type[DataType]] = [
     TimestampNTZType,
     NullType,
     VariantType,
+    YearMonthIntervalType,
+    DayTimeIntervalType,
 ]
-_all_atomic_types: Dict[str, Type[DataType]] = dict((t.typeName(), t) for t in _atomic_types)
 
-_complex_types: List[Type[Union[ArrayType, MapType, StructType]]] = [ArrayType, MapType, StructType]
-_all_complex_types: Dict[str, Type[Union[ArrayType, MapType, StructType]]] = dict(
-    (v.typeName(), v) for v in _complex_types
-)
+_complex_types: List[Type[Union[ArrayType, MapType, StructType]]] = [
+    ArrayType,
+    MapType,
+    StructType,
+]
+_all_complex_types: Dict[str, Type[Union[ArrayType, MapType, StructType]]] = {
+    "array": ArrayType,
+    "map": MapType,
+    "struct": StructType,
+}
+
+# Datatypes that can be directly parsed by mapping a json string 

(spark) branch master updated: [SPARK-48409][BUILD][TESTS] Upgrade MySQL & Postgres & Mariadb docker image version

2024-05-24 Thread yao
This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new b15b6cf1f537 [SPARK-48409][BUILD][TESTS] Upgrade MySQL & Postgres & Mariadb docker image version
b15b6cf1f537 is described below

commit b15b6cf1f537756eafbe8dd31a3b03dc500077f3
Author: panbingkun 
AuthorDate: Fri May 24 17:04:38 2024 +0800

[SPARK-48409][BUILD][TESTS] Upgrade MySQL & Postgres & Mariadb docker image version

### What changes were proposed in this pull request?
This PR aims to upgrade some DB docker image versions:
- `MySQL` from `8.3.0` to `8.4.0`
- `MariaDB` from `10.5.12` to `10.5.25`
- `Postgres` from `16.2-alpine` to `16.3-alpine`

### Why are the changes needed?
Tests dependencies upgrading.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass GA.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #46704 from panbingkun/db_images_upgrade.

Authored-by: panbingkun 
Signed-off-by: Kent Yao 
---
 .../org/apache/spark/sql/jdbc/MariaDBKrbIntegrationSuite.scala  | 6 +++---
 .../scala/org/apache/spark/sql/jdbc/MySQLDatabaseOnDocker.scala | 2 +-
 .../scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala  | 6 +++---
 .../org/apache/spark/sql/jdbc/PostgresKrbIntegrationSuite.scala | 6 +++---
 .../apache/spark/sql/jdbc/querytest/GeneratedSubquerySuite.scala| 6 +++---
 .../apache/spark/sql/jdbc/querytest/PostgreSQLQueryTestSuite.scala  | 6 +++---
 .../org/apache/spark/sql/jdbc/v2/PostgresIntegrationSuite.scala | 6 +++---
 .../scala/org/apache/spark/sql/jdbc/v2/PostgresNamespaceSuite.scala | 6 +++---
 8 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MariaDBKrbIntegrationSuite.scala b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MariaDBKrbIntegrationSuite.scala
index 6825c001f767..efb2fa09f6a3 100644
--- a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MariaDBKrbIntegrationSuite.scala
+++ b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MariaDBKrbIntegrationSuite.scala
@@ -25,9 +25,9 @@ import org.apache.spark.sql.execution.datasources.jdbc.connection.SecureConnecti
 import org.apache.spark.tags.DockerTest
 
 /**
- * To run this test suite for a specific version (e.g., mariadb:10.5.12):
+ * To run this test suite for a specific version (e.g., mariadb:10.5.25):
  * {{{
- *   ENABLE_DOCKER_INTEGRATION_TESTS=1 MARIADB_DOCKER_IMAGE_NAME=mariadb:10.5.12
+ *   ENABLE_DOCKER_INTEGRATION_TESTS=1 MARIADB_DOCKER_IMAGE_NAME=mariadb:10.5.25
  * ./build/sbt -Pdocker-integration-tests
  * "docker-integration-tests/testOnly 
org.apache.spark.sql.jdbc.MariaDBKrbIntegrationSuite"
  * }}}
@@ -38,7 +38,7 @@ class MariaDBKrbIntegrationSuite extends DockerKrbJDBCIntegrationSuite {
   override protected val keytabFileName = "mariadb.keytab"
 
   override val db = new DatabaseOnDocker {
-    override val imageName = sys.env.getOrElse("MARIADB_DOCKER_IMAGE_NAME", "mariadb:10.5.12")
+    override val imageName = sys.env.getOrElse("MARIADB_DOCKER_IMAGE_NAME", "mariadb:10.5.25")
 override val env = Map(
   "MYSQL_ROOT_PASSWORD" -> "rootpass"
 )
diff --git a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MySQLDatabaseOnDocker.scala b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MySQLDatabaseOnDocker.scala
index 568eb5f10973..570a81ac3947 100644
--- a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MySQLDatabaseOnDocker.scala
+++ b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MySQLDatabaseOnDocker.scala
@@ -18,7 +18,7 @@
 package org.apache.spark.sql.jdbc
 
 class MySQLDatabaseOnDocker extends DatabaseOnDocker {
-  override val imageName = sys.env.getOrElse("MYSQL_DOCKER_IMAGE_NAME", "mysql:8.3.0")
+  override val imageName = sys.env.getOrElse("MYSQL_DOCKER_IMAGE_NAME", "mysql:8.4.0")
   override val env = Map(
 "MYSQL_ROOT_PASSWORD" -> "rootpass"
   )
diff --git a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
index 5ad4f15216b7..12a71dbd7c7f 100644
--- a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
+++ b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
@@ -32,9 +32,9 @@ import org.apache.spark.sql.types._
 import org.apache.spark.tags.DockerTest
 
 /**

(spark) branch master updated (3346afd4b250 -> ef43bbbc1163)

2024-05-24 Thread yangjie01
This is an automated email from the ASF dual-hosted git repository.

yangjie01 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 3346afd4b250 [SPARK-46090][SQL][FOLLOWUP] Add DeveloperApi import
 add ef43bbbc1163 [SPARK-48384][BUILD] Exclude `io.netty:netty-tcnative-boringssl-static` from `zookeeper`

No new revisions were added by this update.

Summary of changes:
 dev/deps/spark-deps-hadoop-3-hive-2.3 | 1 -
 pom.xml   | 4 
 2 files changed, 4 insertions(+), 1 deletion(-)





(spark) branch master updated: [SPARK-46090][SQL][FOLLOWUP] Add DeveloperApi import

2024-05-24 Thread yao
This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 3346afd4b250 [SPARK-46090][SQL][FOLLOWUP] Add DeveloperApi import
3346afd4b250 is described below

commit 3346afd4b250c3aead5a237666d4942018a463e0
Author: ulysses-you 
AuthorDate: Fri May 24 14:53:26 2024 +0800

[SPARK-46090][SQL][FOLLOWUP] Add DeveloperApi import

### What changes were proposed in this pull request?

Add DeveloperApi import

### Why are the changes needed?

Fix compile issue

### Does this PR introduce _any_ user-facing change?

Fix compile issue

### How was this patch tested?

pass CI

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #46730 from ulysses-you/hot-fix.

Authored-by: ulysses-you 
Signed-off-by: Kent Yao 
---
 .../org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala
index fce20b79e113..23817be71c89 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala
@@ -19,7 +19,7 @@ package org.apache.spark.sql.execution.adaptive
 
 import scala.collection.mutable
 
-import org.apache.spark.annotation.Experimental
+import org.apache.spark.annotation.{DeveloperApi, Experimental}
 import org.apache.spark.sql.catalyst.SQLConfHelper
 
 /**





(spark) branch master updated: [SPARK-46090][SQL] Support plan fragment level SQL configs in AQE

2024-05-24 Thread ulyssesyou
This is an automated email from the ASF dual-hosted git repository.

ulyssesyou pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new a29c9653f3d4 [SPARK-46090][SQL] Support plan fragment level SQL configs in AQE
a29c9653f3d4 is described below

commit a29c9653f3d48d97875ae446d82896bdf0de61ca
Author: ulysses-you 
AuthorDate: Fri May 24 14:31:52 2024 +0800

[SPARK-46090][SQL] Support plan fragment level SQL configs in AQE

### What changes were proposed in this pull request?

This PR introduces `case class AdaptiveRuleContext(isSubquery: Boolean, isFinalStage: Boolean)`, which is exposed to adaptive SQL extension rules through a thread local, so that developers can modify the SQL configs of the next plan fragment using `AdaptiveRuleContext.get()`.

The plan fragment configs can propagate through multiple phases: e.g., a config set in `queryPostPlannerStrategyRules` can be read in `queryStagePrepRules`, `queryStageOptimizerRules` and `columnarRules`. The configs are cleared before execution, so they start empty in the next round.

### Why are the changes needed?

To support modify the plan fragment level SQL configs through AQE rules.

### Does this PR introduce _any_ user-facing change?

no, only affect developers.

### How was this patch tested?

add new tests

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #44013 from ulysses-you/rule-context.

Lead-authored-by: ulysses-you 
Co-authored-by: Kent Yao 
Signed-off-by: youxiduo 
---
 .../execution/adaptive/AdaptiveRuleContext.scala   |  89 +++
 .../execution/adaptive/AdaptiveSparkPlanExec.scala |  42 -
 .../adaptive/AdaptiveRuleContextSuite.scala| 176 +
 3 files changed, 299 insertions(+), 8 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala
new file mode 100644
index 000000000000..fce20b79e113
--- /dev/null
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.adaptive
+
+import scala.collection.mutable
+
+import org.apache.spark.annotation.Experimental
+import org.apache.spark.sql.catalyst.SQLConfHelper
+
+/**
+ * Provide the functionality to modify the next plan fragment configs in AQE rules.
+ * The configs will be cleanup before going to execute next plan fragment.
+ * To get instance, use: {{{ AdaptiveRuleContext.get() }}}
+ *
+ * @param isSubquery if the input query plan is subquery
+ * @param isFinalStage if the next stage is final stage
+ */
+@Experimental
+@DeveloperApi
+case class AdaptiveRuleContext(isSubquery: Boolean, isFinalStage: Boolean) {
+
+  /**
+   * Set SQL configs for next plan fragment. The configs will affect all of rules in AQE,
+   * i.e., the runtime optimizer, planner, queryStagePreparationRules, queryStageOptimizerRules,
+   * columnarRules.
+   * This configs will be cleared before going to get the next plan fragment.
+   */
+  private val nextPlanFragmentConf = new mutable.HashMap[String, String]()
+
+  private[sql] def withFinalStage(isFinalStage: Boolean): AdaptiveRuleContext = {
+    if (this.isFinalStage == isFinalStage) {
+      this
+    } else {
+      val newRuleContext = copy(isFinalStage = isFinalStage)
+      newRuleContext.setConfigs(this.configs())
+      newRuleContext
+    }
+  }
+
+  def setConfig(key: String, value: String): Unit = {
+    nextPlanFragmentConf.put(key, value)
+  }
+
+  def setConfigs(kvs: Map[String, String]): Unit = {
+    kvs.foreach(kv => nextPlanFragmentConf.put(kv._1, kv._2))
+  }
+
+  private[sql] def configs(): Map[String, String] = nextPlanFragmentConf.toMap
+
+  private[sql] def clearConfigs(): Unit = nextPlanFragmentConf.clear()
+}
+
+object AdaptiveRuleContext extends 
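
The companion object is cut off above, so the exact accessor signature is an assumption; as a minimal sketch of the developer-facing flow described in this PR, assuming `AdaptiveRuleContext.get()` returns an `Option[AdaptiveRuleContext]`:

{{{
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.execution.SparkPlan
import org.apache.spark.sql.execution.adaptive.AdaptiveRuleContext

// Hypothetical rule, e.g. injected via queryStagePrepRules: set a SQL config
// for the next plan fragment only when AQE is planning the final stage.
case class FinalStageConfRule() extends Rule[SparkPlan] {
  override def apply(plan: SparkPlan): SparkPlan = {
    AdaptiveRuleContext.get().foreach { ctx =>
      if (ctx.isFinalStage && !ctx.isSubquery) {
        // setConfig is part of the AdaptiveRuleContext shown in the diff above.
        ctx.setConfig("spark.sql.shuffle.partitions", "64")
      }
    }
    plan // this sketch only sets a config; the plan passes through unchanged
  }
}
}}}

Per the description, a config set here is visible to the later phases of the same fragment (e.g. queryStageOptimizerRules, columnarRules) and is cleared before execution.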

(spark) branch master updated: [SPARK-48406][BUILD] Upgrade commons-cli to 1.8.0

2024-05-24 Thread yao
This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new f42ed6c76004 [SPARK-48406][BUILD] Upgrade commons-cli to 1.8.0

commit f42ed6c760043b0213ebf0348a22dec7c0bb8244
Author: yangjie01 
AuthorDate: Fri May 24 14:23:23 2024 +0800

[SPARK-48406][BUILD] Upgrade commons-cli to 1.8.0

### What changes were proposed in this pull request?
This pr aims to upgrade Apache `commons-cli` from 1.6.0 to 1.8.0.

### Why are the changes needed?
The full release notes as follows:
- https://commons.apache.org/proper/commons-cli/changes-report.html#a1.7.0
- https://commons.apache.org/proper/commons-cli/changes-report.html#a1.8.0

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GitHub Actions

### Was this patch authored or co-authored using generative AI tooling?
No

Closes #46727 from LuciferYang/commons-cli-180.

Authored-by: yangjie01 
Signed-off-by: Kent Yao 
---
 dev/deps/spark-deps-hadoop-3-hive-2.3 | 2 +-
 pom.xml   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-3-hive-2.3 b/dev/deps/spark-deps-hadoop-3-hive-2.3
index 35f6103e9fa4..46c5108e4eba 100644
--- a/dev/deps/spark-deps-hadoop-3-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3-hive-2.3
@@ -37,7 +37,7 @@ cats-kernel_2.13/2.8.0//cats-kernel_2.13-2.8.0.jar
 checker-qual/3.42.0//checker-qual-3.42.0.jar
 chill-java/0.10.0//chill-java-0.10.0.jar
 chill_2.13/0.10.0//chill_2.13-0.10.0.jar
-commons-cli/1.6.0//commons-cli-1.6.0.jar
+commons-cli/1.8.0//commons-cli-1.8.0.jar
 commons-codec/1.17.0//commons-codec-1.17.0.jar
 commons-collections/3.2.2//commons-collections-3.2.2.jar
 commons-collections4/4.4//commons-collections4-4.4.jar
diff --git a/pom.xml b/pom.xml
index ecd05ee996e1..e8d47afa1cca 100644
--- a/pom.xml
+++ b/pom.xml
@@ -210,7 +210,7 @@
 4.17.0
 3.1.0
 1.1.0
-1.6.0
+1.8.0
 1.78
 1.13.0
 6.0.0

