[spark] branch master updated (907074bafad -> 619b7b43450)

2022-04-16 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 907074bafad [SPARK-38881][DSTREAMS][KINESIS][PYSPARK] Added Support for CloudWatch MetricsLevel Config
 add 619b7b43450 [SPARK-38784][CORE] Upgrade Jetty to 9.4.46

No new revisions were added by this update.

Summary of changes:
 dev/deps/spark-deps-hadoop-2-hive-2.3 | 2 +-
 dev/deps/spark-deps-hadoop-3-hive-2.3 | 4 ++--
 pom.xml   | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.3 updated: [SPARK-38784][CORE] Upgrade Jetty to 9.4.46

2022-04-16 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new baeaaeb8cbb [SPARK-38784][CORE] Upgrade Jetty to 9.4.46
baeaaeb8cbb is described below

commit baeaaeb8cbb8a69b15fac1df7063186dfa81e6a8
Author: Sean Owen 
AuthorDate: Sat Apr 16 20:31:34 2022 -0700

[SPARK-38784][CORE] Upgrade Jetty to 9.4.46

### What changes were proposed in this pull request?

Upgrade Jetty to 9.4.46

### Why are the changes needed?

Three CVEs, which don't necessarily appear to affect Spark, are fixed in this version; this is just housekeeping:
- CVE-2021-28169
- CVE-2021-34428
- CVE-2021-34429

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing tests

Closes #36229 from srowen/SPARK-38784.

Authored-by: Sean Owen 
Signed-off-by: Dongjoon Hyun 
(cherry picked from commit 619b7b4345013684e814499f8cec3b99ba9d88c2)
Signed-off-by: Dongjoon Hyun 
---
 dev/deps/spark-deps-hadoop-2-hive-2.3 | 2 +-
 dev/deps/spark-deps-hadoop-3-hive-2.3 | 4 ++--
 pom.xml   | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-2-hive-2.3 b/dev/deps/spark-deps-hadoop-2-hive-2.3
index 9847f794e0b..7499a9b94c0 100644
--- a/dev/deps/spark-deps-hadoop-2-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-2-hive-2.3
@@ -146,7 +146,7 @@ jersey-hk2/2.34//jersey-hk2-2.34.jar
 jersey-server/2.34//jersey-server-2.34.jar
 jetty-sslengine/6.1.26//jetty-sslengine-6.1.26.jar
 jetty-util/6.1.26//jetty-util-6.1.26.jar
-jetty-util/9.4.44.v20210927//jetty-util-9.4.44.v20210927.jar
+jetty-util/9.4.46.v20220331//jetty-util-9.4.46.v20220331.jar
 jetty/6.1.26//jetty-6.1.26.jar
 jline/2.14.6//jline-2.14.6.jar
 joda-time/2.10.13//joda-time-2.10.13.jar
diff --git a/dev/deps/spark-deps-hadoop-3-hive-2.3 b/dev/deps/spark-deps-hadoop-3-hive-2.3
index 5d26abb88cd..94cd0021223 100644
--- a/dev/deps/spark-deps-hadoop-3-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3-hive-2.3
@@ -133,8 +133,8 @@ jersey-container-servlet/2.34//jersey-container-servlet-2.34.jar
 jersey-hk2/2.34//jersey-hk2-2.34.jar
 jersey-server/2.34//jersey-server-2.34.jar
 jettison/1.1//jettison-1.1.jar
-jetty-util-ajax/9.4.44.v20210927//jetty-util-ajax-9.4.44.v20210927.jar
-jetty-util/9.4.44.v20210927//jetty-util-9.4.44.v20210927.jar
+jetty-util-ajax/9.4.46.v20220331//jetty-util-ajax-9.4.46.v20220331.jar
+jetty-util/9.4.46.v20220331//jetty-util-9.4.46.v20220331.jar
 jline/2.14.6//jline-2.14.6.jar
 joda-time/2.10.13//joda-time-2.10.13.jar
 jodd-core/3.5.2//jodd-core-3.5.2.jar
diff --git a/pom.xml b/pom.xml
index 8d60f880af4..072556a5997 100644
--- a/pom.xml
+++ b/pom.xml
@@ -139,7 +139,7 @@
 10.14.2.0
 1.12.2
 1.7.4
-9.4.44.v20210927
+9.4.46.v20220331
 4.0.3
 0.10.0
 2.5.0





[spark] branch master updated: [SPARK-38881][DSTREAMS][KINESIS][PYSPARK] Added Support for CloudWatch MetricsLevel Config

2022-04-16 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 907074bafad [SPARK-38881][DSTREAMS][KINESIS][PYSPARK] Added Support for CloudWatch MetricsLevel Config
907074bafad is described below

commit 907074bafad0da3d1c802a4389589658ecf93432
Author: Mark Khaitman 
AuthorDate: Sat Apr 16 21:30:15 2022 -0500

[SPARK-38881][DSTREAMS][KINESIS][PYSPARK] Added Support for CloudWatch MetricsLevel Config

JIRA: https://issues.apache.org/jira/browse/SPARK-38881

### What changes were proposed in this pull request?

This PR exposes a configuration option (metricsLevel) used for CloudWatch metrics reporting when consuming from an AWS Kinesis stream; the option is already available in the Scala/Java Spark APIs.

This relates to https://issues.apache.org/jira/browse/SPARK-27420, which was merged as part of Spark 3.0.0.

### Why are the changes needed?

This change further exposes the metricsLevel config parameter, which was previously added to the Scala/Java Kinesis streaming integration, and makes it available to the PySpark API as well.

### Does this PR introduce _any_ user-facing change?

No. Default behavior of MetricsLevel.DETAILED is maintained.

### How was this patch tested?

This change passes all tests. Local testing was also done with a development Kinesis stream in AWS to confirm that metrics were no longer reported to CloudWatch after specifying MetricsLevel.NONE in the PySpark Kinesis streaming context creation, and that omitting the MetricsLevel parameter behaves as it does today, defaulting to DETAILED with CloudWatch metrics appearing again.

Built with:
```
# ./build/mvn -pl :spark-streaming-kinesis-asl_2.12 -DskipTests -Pkinesis-asl clean install
```

Tested with a small PySpark Kinesis streaming context and an AWS Kinesis stream, using the updated streaming-kinesis-asl jar:

```
# spark-submit --packages org.apache.spark:spark-streaming-kinesis-asl_2.12:3.2.1 \
#   --jars spark/connector/kinesis-asl/target/spark-streaming-kinesis-asl_2.12-3.4.0-SNAPSHOT.jar \
#   metricsLevelTesting.py
```
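For reference, the helper translates the Python-side integer into the KCL MetricsLevel enum, falling back to DETAILED for unrecognized values (see the Scala diff in this message). A standalone Python sketch of that mapping, with the enum stubbed out in place of `com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel`:

```python
from enum import Enum


class MetricsLevel(Enum):
    # Stand-in for the KCL enum; the integer codes match the PySpark API
    DETAILED = 0
    SUMMARY = 1
    NONE = 2


def to_metrics_level(level: int) -> MetricsLevel:
    # Mirrors the Scala match: any unrecognized value falls back to DETAILED,
    # which preserves the pre-existing default behavior
    mapping = {0: MetricsLevel.DETAILED, 1: MetricsLevel.SUMMARY, 2: MetricsLevel.NONE}
    return mapping.get(level, MetricsLevel.DETAILED)
```

The fall-back keeps the change backward compatible: callers that never pass a metrics level keep today's DETAILED reporting.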

Closes #36201 from mkman84/metricsLevel-pyspark.

Authored-by: Mark Khaitman 
Signed-off-by: Sean Owen 
---
 .../kinesis/KinesisUtilsPythonHelper.scala | 10 ++
 docs/streaming-kinesis-integration.md  | 10 ++
 python/pyspark/streaming/kinesis.py| 22 +-
 python/pyspark/streaming/tests/test_kinesis.py |  5 -
 4 files changed, 37 insertions(+), 10 deletions(-)

diff --git a/connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtilsPythonHelper.scala b/connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtilsPythonHelper.scala
index 0056438c4ee..8abaef6b834 100644
--- a/connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtilsPythonHelper.scala
+++ b/connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtilsPythonHelper.scala
@@ -17,6 +17,7 @@
 package org.apache.spark.streaming.kinesis
 
 import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
+import com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel
 
 import org.apache.spark.storage.StorageLevel
 import org.apache.spark.streaming.Duration
@@ -37,6 +38,7 @@ private class KinesisUtilsPythonHelper {
   regionName: String,
   initialPositionInStream: Int,
   checkpointInterval: Duration,
+  metricsLevel: Int,
   storageLevel: StorageLevel,
   awsAccessKeyId: String,
   awsSecretKey: String,
@@ -64,6 +66,13 @@ private class KinesisUtilsPythonHelper {
   "InitialPositionInStream.LATEST or InitialPositionInStream.TRIM_HORIZON")
 }
 
+val cloudWatchMetricsLevel = metricsLevel match {
+  case 0 => MetricsLevel.DETAILED
+  case 1 => MetricsLevel.SUMMARY
+  case 2 => MetricsLevel.NONE
+  case _ => MetricsLevel.DETAILED
+}
+
 val builder = KinesisInputDStream.builder.
   streamingContext(jssc).
   checkpointAppName(kinesisAppName).
@@ -72,6 +81,7 @@ private class KinesisUtilsPythonHelper {
   regionName(regionName).
   initialPosition(KinesisInitialPositions.fromKinesisInitialPosition(kinesisInitialPosition)).
   checkpointInterval(checkpointInterval).
+  metricsLevel(cloudWatchMetricsLevel).
   storageLevel(storageLevel)
 
 if (stsAssumeRoleArn != null && stsSessionName != null && stsExternalId != null) {
diff --git a/docs/streaming-kinesis-integration.md b/docs/streaming-kinesis-integration.md
index dc80ff05226..2ce30d7efe2 100644
--- a/docs/streami

[spark] branch master updated: [SPARK-38920][SQL][TEST] Add ORC blockSize tests to BloomFilterBenchmark

2022-04-16 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 879aae39409 [SPARK-38920][SQL][TEST] Add ORC blockSize tests to BloomFilterBenchmark
879aae39409 is described below

commit 879aae39409ae92f434c3bb4101d66334f9833dd
Author: Dongjoon Hyun 
AuthorDate: Sat Apr 16 19:05:22 2022 -0700

[SPARK-38920][SQL][TEST] Add ORC blockSize tests to BloomFilterBenchmark

### What changes were proposed in this pull request?

This PR aims to improve `BloomFilterBenchmark` by adding more `blockSize` 
combination tests for ORC.

- Java 8: https://github.com/dongjoon-hyun/spark/actions/runs/2178431204
- Java 11: https://github.com/dongjoon-hyun/spark/actions/runs/2178432284
- Java 17: https://github.com/dongjoon-hyun/spark/actions/runs/2178432661

### Why are the changes needed?

Parquet already has this benchmark; this change provides feature parity in the comparison.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual test because this is a benchmark.
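As background (not part of this patch): a Bloom filter is what lets the ORC reader skip stripes that provably do not contain a looked-up value, which is why the "With bloom filter" read rows in the benchmark are faster. A toy Python sketch of the data structure:

```python
import hashlib


class BloomFilter:
    """Toy Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive num_hashes independent bit positions from salted SHA-256
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # False means "definitely absent" -- a reader can skip that stripe entirely
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))
```

This is an illustration of the general technique only; ORC's actual implementation (org.apache.orc) differs in hashing and sizing details.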

Closes #36218 from dongjoon-hyun/SPARK-38920.

Authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
---
 .../BloomFilterBenchmark-jdk11-results.txt | 112 +
 .../BloomFilterBenchmark-jdk17-results.txt | 132 -
 .../benchmarks/BloomFilterBenchmark-results.txt| 112 +
 .../execution/benchmark/BloomFilterBenchmark.scala |  30 +++--
 4 files changed, 304 insertions(+), 82 deletions(-)

diff --git a/sql/core/benchmarks/BloomFilterBenchmark-jdk11-results.txt b/sql/core/benchmarks/BloomFilterBenchmark-jdk11-results.txt
index fab16b64870..1bd32b0e7a9 100644
--- a/sql/core/benchmarks/BloomFilterBenchmark-jdk11-results.txt
+++ b/sql/core/benchmarks/BloomFilterBenchmark-jdk11-results.txt
@@ -6,8 +6,8 @@ OpenJDK 64-Bit Server VM 11.0.14+9-LTS on Linux 5.13.0-1021-azure
 Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
 Write 100M rows:                          Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
 ------------------------------------------------------------------------------------------------------------------------
-Without bloom filter                              20453          20495          60          4.9         204.5       1.0X
-With bloom filter                                 22539          22694         218          4.4         225.4       0.9X
+Without bloom filter                              15574          15579           6          6.4         155.7       1.0X
+With bloom filter                                 17915          17972          80          5.6         179.2       0.9X
 
 
 

@@ -18,8 +18,80 @@ OpenJDK 64-Bit Server VM 11.0.14+9-LTS on Linux 5.13.0-1021-azure
 Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
 Read a row from 100M rows:                Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
 ------------------------------------------------------------------------------------------------------------------------
-Without bloom filter                               1708           1800         129         58.5          17.1       1.0X
-With bloom filter                                  1324           1357          47         75.5          13.2       1.3X
+Without bloom filter, blocksize: 2097152           1667           1675          11         60.0          16.7       1.0X
+With bloom filter, blocksize: 2097152              1098           1134          50         91.1          11.0       1.5X
+
+
+
+ORC Read
+
+
+OpenJDK 64-Bit Server VM 11.0.14+9-LTS on Linux 5.13.0-1021-azure
+Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
+Read a row from 100M rows:                Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
+------------------------------------------------------------------------------------------------------------------------
+Without bloom filter, blocksize: 4194304           1446           1514          97         69.2          14.5       1.0X
+With bloom filter, blocksize: 4194304              1069           1145         108         93.6          10.7       1.4X
+
+
+
+ORC Read
+
+
+OpenJDK 64-Bit Server VM 11.0.14+9-LTS on Linux 5.13.0-1

[spark] branch master updated: Revert "[SPARK-38750][SQL][TESTS] Test the error class: SECOND_FUNCTION_ARGUMENT_NOT_INTEGER"

2022-04-16 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new d868133f257 Revert "[SPARK-38750][SQL][TESTS] Test the error class: SECOND_FUNCTION_ARGUMENT_NOT_INTEGER"
d868133f257 is described below

commit d868133f2573c4411a813579a9afe262bf5c5dfc
Author: Max Gekk 
AuthorDate: Sat Apr 16 18:03:33 2022 -0700

Revert "[SPARK-38750][SQL][TESTS] Test the error class: SECOND_FUNCTION_ARGUMENT_NOT_INTEGER"

### What changes were proposed in this pull request?
This reverts commit https://github.com/apache/spark/commit/7246d25cc9ec36dafe6b7df16c78b704c5934d84.

### Why are the changes needed?
The new test fails with the error in one of GAs:
```
[info] - SECOND_FUNCTION_ARGUMENT_NOT_INTEGER: the second argument of 'date_add' function needs to be an integer *** FAILED *** (22 milliseconds)
[info]   Expected exception org.apache.spark.sql.AnalysisException to be thrown, but java.lang.NumberFormatException was thrown (QueryCompilationErrorsSuite.scala:316)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
```
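The failure mode above is ScalaTest's `intercept` being strict about the exception type: the query did fail, but with the wrong exception class. A rough Python analog of that behavior (a hypothetical helper, for illustration only):

```python
def intercept(exc_type, fn):
    """Fail unless fn() raises the expected exception type (analog of ScalaTest's intercept)."""
    try:
        fn()
    except exc_type as expected:
        return expected  # the expected exception was thrown: test passes
    except Exception as other:
        # This mirrors the GA failure: NumberFormatException was thrown
        # instead of AnalysisException, so the test fails even though
        # *an* error did occur.
        raise AssertionError(
            f"Expected {exc_type.__name__} to be thrown, "
            f"but {type(other).__name__} was thrown")
    raise AssertionError(f"Expected {exc_type.__name__} to be thrown")
```

Reverting the test was the quick fix; the assertion pattern itself is sound, the query simply raised a different exception in that environment.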

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By existing GAs.

Closes #36227 from MaxGekk/fix-test-for-SECOND_FUNCTION_ARGUMENT_NOT_INTEGER.

Authored-by: Max Gekk 
Signed-off-by: Dongjoon Hyun 
---
 .../apache/spark/sql/errors/QueryCompilationErrorsSuite.scala | 11 ---
 1 file changed, 11 deletions(-)

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
index 34e3f305530..de671df74c8 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
@@ -310,17 +310,6 @@ class QueryCompilationErrorsSuite extends QueryTest with SharedSparkSession {
   }
 }
   }
-
-  test("SECOND_FUNCTION_ARGUMENT_NOT_INTEGER: " +
-"the second argument of 'date_add' function needs to be an integer") {
-val e = intercept[AnalysisException] {
-  sql("select date_add('1982-08-15', 'x')").collect()
-}
-assert(e.getErrorClass === "SECOND_FUNCTION_ARGUMENT_NOT_INTEGER")
-assert(e.getSqlState === "22023")
-assert(e.getMessage ===
-  "The second argument of 'date_add' function needs to be an integer.")
-  }
 }
 
 class MyCastToString extends SparkUserDefinedFunction(





[spark] branch master updated (8548a8221cf -> eced406b600)

2022-04-16 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 8548a8221cf [SPARK-38738][SQL][TESTS] Test the error class: INVALID_FRACTION_OF_SECOND
 add eced406b600 [SPARK-38660][PYTHON] PySpark DeprecationWarning: distutils Version classes are deprecated

No new revisions were added by this update.

Summary of changes:
 python/pyspark/__init__.py | 5 +
 1 file changed, 5 insertions(+)
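The warning comes from `distutils.version`, deprecated since Python 3.10 by PEP 632. The sketch below illustrates the general migration path (replacing LooseVersion comparisons with an integer-tuple comparison), as an assumption about the pattern rather than the exact five-line change in python/pyspark/__init__.py:

```python
import re


def parse_version(v: str):
    # Minimal stand-in for distutils LooseVersion / packaging.version.parse:
    # extract the numeric components and compare them as an integer tuple.
    # Unlike naive string comparison, this gets "3.10" > "3.9" right.
    return tuple(int(part) for part in re.findall(r"\d+", v))


# e.g. guarding a minimum-version check without importing distutils
assert parse_version("3.3.0") > parse_version("3.2.1")
```

For anything beyond simple dotted versions (pre-releases, local version labels), `packaging.version.parse` is the documented replacement.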





[spark] branch master updated: [SPARK-38738][SQL][TESTS] Test the error class: INVALID_FRACTION_OF_SECOND

2022-04-16 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 8548a8221cf [SPARK-38738][SQL][TESTS] Test the error class: INVALID_FRACTION_OF_SECOND
8548a8221cf is described below

commit 8548a8221cf0e00a1801ee105e5a0942b5ecfb56
Author: panbingkun 
AuthorDate: Sat Apr 16 21:49:46 2022 +0300

[SPARK-38738][SQL][TESTS] Test the error class: INVALID_FRACTION_OF_SECOND

### What changes were proposed in this pull request?
This PR aims to add one test for the error class INVALID_FRACTION_OF_SECOND to `QueryExecutionErrorsSuite`.

### Why are the changes needed?
The changes improve test coverage, and document expected error messages in 
tests.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
By running new test:
```
$ build/sbt "sql/testOnly *QueryExecutionErrorsSuite*"
```

Closes #36211 from panbingkun/SPARK-38738.

Lead-authored-by: panbingkun 
Co-authored-by: Maxim Gekk 
Signed-off-by: Max Gekk 
---
 .../spark/sql/errors/QueryExecutionErrorsSuite.scala   | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
index d3c242266be..09f655431dc 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
@@ -21,7 +21,7 @@ import java.util.Locale
 
 import test.org.apache.spark.sql.connector.JavaSimpleWritableDataSource
 
-import org.apache.spark.{SparkArithmeticException, SparkException, SparkIllegalStateException, SparkRuntimeException, SparkUnsupportedOperationException, SparkUpgradeException}
+import org.apache.spark.{SparkArithmeticException, SparkDateTimeException, SparkException, SparkIllegalStateException, SparkRuntimeException, SparkUnsupportedOperationException, SparkUpgradeException}
 import org.apache.spark.sql.{DataFrame, QueryTest}
 import org.apache.spark.sql.catalyst.util.BadRecordException
 import org.apache.spark.sql.connector.SimpleWritableDataSource
@@ -388,4 +388,16 @@ class QueryExecutionErrorsSuite extends QueryTest
 |""".stripMargin)
 }
   }
+
+  test("INVALID_FRACTION_OF_SECOND: in the function make_timestamp") {
+withSQLConf(SQLConf.ANSI_ENABLED.key -> "true") {
+  val e = intercept[SparkDateTimeException] {
+sql("select make_timestamp(2012, 11, 30, 9, 19, 60.)").collect()
+  }
+  assert(e.getErrorClass === "INVALID_FRACTION_OF_SECOND")
+  assert(e.getSqlState === "22023")
+  assert(e.getMessage === "The fraction of sec must be zero. Valid range is [0, 60]. " +
+    "If necessary set spark.sql.ansi.enabled to false to bypass this error. ")
+}
+  }
 }
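The rule the new test pins down can be sketched in isolation. This is reconstructed from the error message, not Spark's actual implementation: `make_timestamp` accepts seconds in [0, 60], and 60 is only legal with a zero fractional part (it denotes rollover into the next minute):

```python
class InvalidFractionOfSecond(ValueError):
    """Stand-in for Spark's INVALID_FRACTION_OF_SECOND error class."""


def check_fraction_of_second(secs: float) -> float:
    # Reconstructed from the error message; not Spark's actual code.
    # A value like 60.007 fails: any nonzero fraction on top of 60 is invalid,
    # while exactly 60.0 is accepted (and rolls over into the next minute).
    if secs < 0.0 or secs > 60.0:
        raise InvalidFractionOfSecond(
            "The fraction of sec must be zero. Valid range is [0, 60].")
    return secs
```

Under ANSI mode this surfaces as a SparkDateTimeException, which is exactly what the new test intercepts.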





[spark] branch master updated: [SPARK-38750][SQL][TESTS] Test the error class: SECOND_FUNCTION_ARGUMENT_NOT_INTEGER

2022-04-16 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 7246d25cc9e [SPARK-38750][SQL][TESTS] Test the error class: SECOND_FUNCTION_ARGUMENT_NOT_INTEGER
7246d25cc9e is described below

commit 7246d25cc9ec36dafe6b7df16c78b704c5934d84
Author: panbingkun 
AuthorDate: Sat Apr 16 12:04:31 2022 +0300

[SPARK-38750][SQL][TESTS] Test the error class: SECOND_FUNCTION_ARGUMENT_NOT_INTEGER

### What changes were proposed in this pull request?
This PR aims to add a test for the error class SECOND_FUNCTION_ARGUMENT_NOT_INTEGER to `QueryCompilationErrorsSuite`.

### Why are the changes needed?
The changes improve test coverage, and document expected error messages in 
tests.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
By running new test:
```
$ build/sbt "sql/testOnly *QueryCompilationErrorsSuite*"
```

Closes #36209 from panbingkun/SPARK-38750.

Lead-authored-by: panbingkun 
Co-authored-by: Maxim Gekk 
Signed-off-by: Max Gekk 
---
 .../apache/spark/sql/errors/QueryCompilationErrorsSuite.scala | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
index de671df74c8..34e3f305530 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
@@ -310,6 +310,17 @@ class QueryCompilationErrorsSuite extends QueryTest with SharedSparkSession {
   }
 }
   }
+
+  test("SECOND_FUNCTION_ARGUMENT_NOT_INTEGER: " +
+"the second argument of 'date_add' function needs to be an integer") {
+val e = intercept[AnalysisException] {
+  sql("select date_add('1982-08-15', 'x')").collect()
+}
+assert(e.getErrorClass === "SECOND_FUNCTION_ARGUMENT_NOT_INTEGER")
+assert(e.getSqlState === "22023")
+assert(e.getMessage ===
+  "The second argument of 'date_add' function needs to be an integer.")
+  }
 }
 
 class MyCastToString extends SparkUserDefinedFunction(





[spark] branch master updated: [SPARK-38866][BUILD] Update ORC to 1.7.4

2022-04-16 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 7caf487c76a [SPARK-38866][BUILD] Update ORC to 1.7.4
7caf487c76a is described below

commit 7caf487c76abfdc76fc79a3bd4787d2e6c8034eb
Author: William Hyun 
AuthorDate: Sat Apr 16 00:31:41 2022 -0700

[SPARK-38866][BUILD] Update ORC to 1.7.4

### What changes were proposed in this pull request?
This PR aims to update ORC to version 1.7.4.

### Why are the changes needed?
This will bring the following bug fixes.
- https://github.com/apache/orc/milestone/7?closed=1

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the CIs.

Closes #36153 from williamhyun/orc174RC0.

Authored-by: William Hyun 
Signed-off-by: Dongjoon Hyun 
---
 dev/deps/spark-deps-hadoop-2-hive-2.3 | 6 +++---
 dev/deps/spark-deps-hadoop-3-hive-2.3 | 6 +++---
 pom.xml   | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-2-hive-2.3 b/dev/deps/spark-deps-hadoop-2-hive-2.3
index 88db7378722..5dc9010a980 100644
--- a/dev/deps/spark-deps-hadoop-2-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-2-hive-2.3
@@ -219,9 +219,9 @@ objenesis/3.2//objenesis-3.2.jar
 okhttp/3.12.12//okhttp-3.12.12.jar
 okio/1.14.0//okio-1.14.0.jar
 opencsv/2.3//opencsv-2.3.jar
-orc-core/1.7.3//orc-core-1.7.3.jar
-orc-mapreduce/1.7.3//orc-mapreduce-1.7.3.jar
-orc-shims/1.7.3//orc-shims-1.7.3.jar
+orc-core/1.7.4//orc-core-1.7.4.jar
+orc-mapreduce/1.7.4//orc-mapreduce-1.7.4.jar
+orc-shims/1.7.4//orc-shims-1.7.4.jar
 oro/2.0.8//oro-2.0.8.jar
 osgi-resource-locator/1.0.3//osgi-resource-locator-1.0.3.jar
 paranamer/2.8//paranamer-2.8.jar
diff --git a/dev/deps/spark-deps-hadoop-3-hive-2.3 b/dev/deps/spark-deps-hadoop-3-hive-2.3
index 12abaac32dd..b030813dbe9 100644
--- a/dev/deps/spark-deps-hadoop-3-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3-hive-2.3
@@ -208,9 +208,9 @@ opencsv/2.3//opencsv-2.3.jar
 opentracing-api/0.33.0//opentracing-api-0.33.0.jar
 opentracing-noop/0.33.0//opentracing-noop-0.33.0.jar
 opentracing-util/0.33.0//opentracing-util-0.33.0.jar
-orc-core/1.7.3//orc-core-1.7.3.jar
-orc-mapreduce/1.7.3//orc-mapreduce-1.7.3.jar
-orc-shims/1.7.3//orc-shims-1.7.3.jar
+orc-core/1.7.4//orc-core-1.7.4.jar
+orc-mapreduce/1.7.4//orc-mapreduce-1.7.4.jar
+orc-shims/1.7.4//orc-shims-1.7.4.jar
 oro/2.0.8//oro-2.0.8.jar
 osgi-resource-locator/1.0.3//osgi-resource-locator-1.0.3.jar
 paranamer/2.8//paranamer-2.8.jar
diff --git a/pom.xml b/pom.xml
index 7b757a86deb..3cd9c224ecb 100644
--- a/pom.xml
+++ b/pom.xml
@@ -138,7 +138,7 @@
 
 10.14.2.0
 1.12.2
-1.7.3
+1.7.4
 9.4.44.v20210927
 4.0.3
 0.10.0





[spark] branch branch-3.3 updated: [SPARK-38866][BUILD] Update ORC to 1.7.4

2022-04-16 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new dfb668d2541 [SPARK-38866][BUILD] Update ORC to 1.7.4
dfb668d2541 is described below

commit dfb668d2541c5e15c7b41dfa74c9dea7291fe9e1
Author: William Hyun 
AuthorDate: Sat Apr 16 00:31:41 2022 -0700

[SPARK-38866][BUILD] Update ORC to 1.7.4

### What changes were proposed in this pull request?
This PR aims to update ORC to version 1.7.4.

### Why are the changes needed?
This will bring the following bug fixes.
- https://github.com/apache/orc/milestone/7?closed=1

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the CIs.

Closes #36153 from williamhyun/orc174RC0.

Authored-by: William Hyun 
Signed-off-by: Dongjoon Hyun 
(cherry picked from commit 7caf487c76abfdc76fc79a3bd4787d2e6c8034eb)
Signed-off-by: Dongjoon Hyun 
---
 dev/deps/spark-deps-hadoop-2-hive-2.3 | 6 +++---
 dev/deps/spark-deps-hadoop-3-hive-2.3 | 6 +++---
 pom.xml   | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-2-hive-2.3 b/dev/deps/spark-deps-hadoop-2-hive-2.3
index c0b15027430..9847f794e0b 100644
--- a/dev/deps/spark-deps-hadoop-2-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-2-hive-2.3
@@ -219,9 +219,9 @@ objenesis/3.2//objenesis-3.2.jar
 okhttp/3.12.12//okhttp-3.12.12.jar
 okio/1.14.0//okio-1.14.0.jar
 opencsv/2.3//opencsv-2.3.jar
-orc-core/1.7.3//orc-core-1.7.3.jar
-orc-mapreduce/1.7.3//orc-mapreduce-1.7.3.jar
-orc-shims/1.7.3//orc-shims-1.7.3.jar
+orc-core/1.7.4//orc-core-1.7.4.jar
+orc-mapreduce/1.7.4//orc-mapreduce-1.7.4.jar
+orc-shims/1.7.4//orc-shims-1.7.4.jar
 oro/2.0.8//oro-2.0.8.jar
 osgi-resource-locator/1.0.3//osgi-resource-locator-1.0.3.jar
 paranamer/2.8//paranamer-2.8.jar
diff --git a/dev/deps/spark-deps-hadoop-3-hive-2.3 b/dev/deps/spark-deps-hadoop-3-hive-2.3
index 20a727521aa..5d26abb88cd 100644
--- a/dev/deps/spark-deps-hadoop-3-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3-hive-2.3
@@ -208,9 +208,9 @@ opencsv/2.3//opencsv-2.3.jar
 opentracing-api/0.33.0//opentracing-api-0.33.0.jar
 opentracing-noop/0.33.0//opentracing-noop-0.33.0.jar
 opentracing-util/0.33.0//opentracing-util-0.33.0.jar
-orc-core/1.7.3//orc-core-1.7.3.jar
-orc-mapreduce/1.7.3//orc-mapreduce-1.7.3.jar
-orc-shims/1.7.3//orc-shims-1.7.3.jar
+orc-core/1.7.4//orc-core-1.7.4.jar
+orc-mapreduce/1.7.4//orc-mapreduce-1.7.4.jar
+orc-shims/1.7.4//orc-shims-1.7.4.jar
 oro/2.0.8//oro-2.0.8.jar
 osgi-resource-locator/1.0.3//osgi-resource-locator-1.0.3.jar
 paranamer/2.8//paranamer-2.8.jar
diff --git a/pom.xml b/pom.xml
index 77fbdc0c5e1..8d60f880af4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -138,7 +138,7 @@
 
 10.14.2.0
 1.12.2
-1.7.3
+1.7.4
 9.4.44.v20210927
 4.0.3
 0.10.0

