[spark] branch branch-3.3 updated: [SPARK-38085][SQL][FOLLOWUP] Do not fail too early for DeleteFromTable

2022-05-01 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4c09616f72d [SPARK-38085][SQL][FOLLOWUP] Do not fail too early for 
DeleteFromTable
4c09616f72d is described below

commit 4c09616f72d563f6c731654c8d69a8b36e40983a
Author: Wenchen Fan 
AuthorDate: Mon May 2 11:28:36 2022 +0900

[SPARK-38085][SQL][FOLLOWUP] Do not fail too early for DeleteFromTable

### What changes were proposed in this pull request?

`DeleteFromTable` has been in Spark for a long time and there are existing 
Spark extensions to compile `DeleteFromTable` to physical plans. However, the 
new analyzer rule `RewriteDeleteFromTable` fails very early if the v2 table 
does not support delete. This breaks certain Spark extensions which can still 
execute `DeleteFromTable` for certain v2 tables.

This PR simply removes the error throwing in `RewriteDeleteFromTable`. It's 
a safe change because:
1. the new delete-related rules only match v2 table with 
`SupportsRowLevelOperations`, so won't be affected by this change
2. the planner rule will fail eventually if the v2 table doesn't support 
deletion. Spark eagerly executes commands so Spark users can still see this 
error immediately.
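For illustration, the match structure of the rule after this change can be sketched as follows. This is a simplified stand-in model with hypothetical types, not the actual Catalyst classes; the point is that the final fall-through case now returns the plan unchanged instead of throwing:

```scala
// Simplified stand-in types for illustration only; the real rule matches on
// DataSourceV2Relation and the SupportsRowLevelOperations connector trait.
sealed trait Table
case class RowLevelOpTable(name: String) extends Table // supports row-level operations
case class PlainV2Table(name: String) extends Table    // no delete support

case class DeleteFromTable(table: Table)

// Before this patch, the last case threw tableDoesNotSupportDeletesError.
// Now the rule leaves the plan unchanged and defers the check to the planner.
def rewriteDelete(d: DeleteFromTable): DeleteFromTable = d.table match {
  case RowLevelOpTable(_) =>
    // a row-level operation plan would be built here
    d
  case _ =>
    d // fall through: an extension may still know how to execute this plan
}
```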

### Why are the changes needed?

To not break existing Spark extensions.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

existing tests

Closes #36402 from cloud-fan/follow.

Lead-authored-by: Wenchen Fan 
Co-authored-by: Wenchen Fan 
Signed-off-by: Hyukjin Kwon 
(cherry picked from commit 5630f700768432396a948376f5b46b00d4186e1b)
Signed-off-by: Hyukjin Kwon 
---
 .../apache/spark/sql/catalyst/analysis/RewriteDeleteFromTable.scala   | 4 ----
 1 file changed, 4 deletions(-)

diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/RewriteDeleteFromTable.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/RewriteDeleteFromTable.scala
index 85af02e..d473254a08f 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/RewriteDeleteFromTable.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/RewriteDeleteFromTable.scala
@@ -23,7 +23,6 @@ import 
org.apache.spark.sql.catalyst.plans.logical.{DeleteFromTable, Filter, Log
 import org.apache.spark.sql.connector.catalog.{SupportsDelete, 
SupportsRowLevelOperations, TruncatableTable}
 import org.apache.spark.sql.connector.write.RowLevelOperation.Command.DELETE
 import org.apache.spark.sql.connector.write.RowLevelOperationTable
-import org.apache.spark.sql.errors.QueryCompilationErrors
 import org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation
 import org.apache.spark.sql.util.CaseInsensitiveStringMap
 
@@ -52,9 +51,6 @@ object RewriteDeleteFromTable extends RewriteRowLevelCommand {
   // don't rewrite as the table supports deletes only with filters
   d
 
-case DataSourceV2Relation(t, _, _, _, _) =>
-  throw QueryCompilationErrors.tableDoesNotSupportDeletesError(t)
-
 case _ =>
   d
   }


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (6d819622dde -> 5630f700768)

2022-05-01 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 6d819622dde [SPARK-39067][BUILD] Upgrade scala-maven-plugin to 4.6.1
 add 5630f700768 [SPARK-38085][SQL][FOLLOWUP] Do not fail too early for 
DeleteFromTable

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/sql/catalyst/analysis/RewriteDeleteFromTable.scala   | 4 ----
 1 file changed, 4 deletions(-)





[spark] branch master updated: [SPARK-39067][BUILD] Upgrade scala-maven-plugin to 4.6.1

2022-05-01 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 6d819622dde [SPARK-39067][BUILD] Upgrade scala-maven-plugin to 4.6.1
6d819622dde is described below

commit 6d819622ddeabe022fb2f0aaccfa0cb00db22528
Author: yangjie01 
AuthorDate: Sun May 1 15:20:07 2022 -0500

[SPARK-39067][BUILD] Upgrade scala-maven-plugin to 4.6.1

### What changes were proposed in this pull request?
This pr aims to upgrade scala-maven-plugin to 4.6.1

### Why are the changes needed?
`scala-maven-plugin` 4.6.1 upgrades `zinc` from 1.5.8 to 1.6.1 and also adds some other new features, for example, [Add maven.main.skip support](https://github.com/davidB/scala-maven-plugin/commit/67f11faa477bc0e5e8cf673a8753b478fe008a09) and [Patch target to match Scala 2.11/2.12 scalac option syntax](https://github.com/davidB/scala-maven-plugin/commit/36453a1b6b00e90c5e0877568a086c97f799ba12)

The other changes between 4.5.6 and 4.6.1 are as follows:

- https://github.com/davidB/scala-maven-plugin/compare/4.5.6...4.6.1

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GA

Closes #36406 from LuciferYang/SPARK-39067.

Authored-by: yangjie01 
Signed-off-by: Sean Owen 
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index ad3a382f503..6081f700b62 100644
--- a/pom.xml
+++ b/pom.xml
@@ -166,7 +166,7 @@
   See: SPARK-36547, SPARK-38394.
-->
 
-4.5.6
+4.6.1
 
 true
 1.9.13





[spark] branch branch-3.0 updated (46fa4998a4c -> 4e38563d39c)

2022-05-01 Thread viirya
This is an automated email from the ASF dual-hosted git repository.

viirya pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


from 46fa4998a4c Revert "[SPARK-39032][PYTHON][DOCS] Examples' tag for 
pyspark.sql.functions.when()"
 add 4e38563d39c [SPARK-38918][SQL][3.0] Nested column pruning should 
filter out attributes that do not belong to the current relation

No new revisions were added by this update.

Summary of changes:
 .../expressions/ProjectionOverSchema.scala |  8 +++-
 .../spark/sql/catalyst/optimizer/Optimizer.scala   |  1 +
 .../spark/sql/catalyst/optimizer/objects.scala |  2 +-
 .../sql/execution/datasources/SchemaPruning.scala  |  2 +-
 .../datasources/v2/V2ScanRelationPushDown.scala|  5 ++-
 .../execution/datasources/SchemaPruningSuite.scala | 44 +-
 6 files changed, 55 insertions(+), 7 deletions(-)





[spark] branch master updated (81786a2e960 -> 501519e5a52)

2022-05-01 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 81786a2e960 [SPARK-38737][SQL][TESTS] Test the error classes: 
INVALID_FIELD_NAME
 add 501519e5a52 [SPARK-38729][SQL][TESTS] Test the error class: 
FAILED_SET_ORIGINAL_PERMISSION_BACK

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/errors/QueryCompilationErrors.scala  |  2 +-
 .../sql/errors/QueryExecutionErrorsSuite.scala | 34 +-
 2 files changed, 34 insertions(+), 2 deletions(-)





[spark] branch master updated: [SPARK-38737][SQL][TESTS] Test the error classes: INVALID_FIELD_NAME

2022-05-01 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 81786a2e960 [SPARK-38737][SQL][TESTS] Test the error classes: 
INVALID_FIELD_NAME
81786a2e960 is described below

commit 81786a2e96018ded474b353c004ac2f63fde
Author: panbingkun 
AuthorDate: Sun May 1 11:35:09 2022 +0300

[SPARK-38737][SQL][TESTS] Test the error classes: INVALID_FIELD_NAME

### What changes were proposed in this pull request?
This PR aims to add a test for the error class INVALID_FIELD_NAME to 
`QueryCompilationErrorsSuite`.

### Why are the changes needed?
The changes improve test coverage, and document expected error messages in 
tests.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running new test:
```
$ build/sbt "sql/testOnly *QueryCompilationErrorsSuite*"
```

Closes #36404 from panbingkun/SPARK-38737.

Authored-by: panbingkun 
Signed-off-by: Max Gekk 
---
 .../spark/sql/errors/QueryCompilationErrorsSuite.scala | 14 ++
 1 file changed, 14 insertions(+)

diff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
 
b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
index 1115db07f21..8fffccbed40 100644
--- 
a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala
@@ -513,6 +513,20 @@ class QueryCompilationErrorsSuite
   msg = "Invalid pivot value 'struct(col1, dotnet, col2, Experts)': value 
data type " +
 "struct does not match pivot column data type 
int")
   }
+
+  test("INVALID_FIELD_NAME: add a nested field for not struct parent") {
+withTable("t") {
+  sql("CREATE TABLE t(c struct, m string) USING parquet")
+
+  val e = intercept[AnalysisException] {
+sql("ALTER TABLE t ADD COLUMNS (m.n int)")
+  }
+  checkErrorClass(
+exception = e,
+errorClass = "INVALID_FIELD_NAME",
+msg = "Field name m.n is invalid: m is not a struct.; line 1 pos 27")
+}
+  }
 }
 
 class MyCastToString extends SparkUserDefinedFunction(





[spark] branch master updated: [SPARK-38700][SQL] Use error classes in the execution errors of save mode

2022-05-01 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new b30d1d41414 [SPARK-38700][SQL] Use error classes in the execution 
errors of save mode
b30d1d41414 is described below

commit b30d1d41414e200f1cc7ec9675e5c013bdf5b214
Author: panbingkun 
AuthorDate: Sun May 1 10:34:31 2022 +0300

[SPARK-38700][SQL] Use error classes in the execution errors of save mode

### What changes were proposed in this pull request?
Migrate the following errors in QueryExecutionErrors:

* unsupportedSaveModeError -> UNSUPPORTED_SAVE_MODE

### Why are the changes needed?
Porting execution errors of unsupported saveMode to new error framework.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Add new UT.
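The placeholder mechanism that `unsupportedSaveModeError` is being migrated to works roughly like the snippet below: angle-bracket tokens in the message template become `%s` and are filled in order from the parameter array. This is a sketch based on the `SparkThrowableHelper` code touched in this diff; the template text and parameter values shown are illustrative (the email rendering stripped the actual angle-bracket placeholders), not the exact strings in `error-classes.json`:

```scala
// Sketch of error-class message formatting: "<token>" placeholders in the
// template are rewritten to "%s" and substituted positionally.
def formatError(errorClass: String, messageFormat: String,
    messageParameters: Seq[String]): String = {
  "[" + errorClass + "] " + String.format(
    messageFormat.replaceAll("<[a-zA-Z0-9_-]+>", "%s"),
    messageParameters: _*)
}

val msg = formatError(
  "UNSUPPORTED_SAVE_MODE",
  "The save mode <saveMode> is not supported for: <path>",
  Seq("'Append'", "an existent path."))
// msg == "[UNSUPPORTED_SAVE_MODE] The save mode 'Append' is not supported for: an existent path."
```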

Closes #36350 from panbingkun/SPARK-38700.

Authored-by: panbingkun 
Signed-off-by: Max Gekk 
---
 core/src/main/resources/error/error-classes.json   | 11 
 .../main/scala/org/apache/spark/ErrorInfo.scala|  6 ++---
 .../spark/sql/errors/QueryExecutionErrors.scala|  9 +--
 .../InsertIntoHadoopFsRelationCommand.scala|  2 +-
 .../sql/errors/QueryExecutionErrorsSuite.scala | 31 --
 5 files changed, 51 insertions(+), 8 deletions(-)

diff --git a/core/src/main/resources/error/error-classes.json 
b/core/src/main/resources/error/error-classes.json
index 4908a9b6c2e..aa38f8b9747 100644
--- a/core/src/main/resources/error/error-classes.json
+++ b/core/src/main/resources/error/error-classes.json
@@ -246,6 +246,17 @@
   "UNSUPPORTED_GROUPING_EXPRESSION" : {
 "message" : [ "grouping()/grouping_id() can only be used with 
GroupingSets/Cube/Rollup" ]
   },
+  "UNSUPPORTED_SAVE_MODE" : {
+"message" : [ "The save mode  is not supported for: " ],
+"subClass" : {
+  "EXISTENT_PATH" : {
+"message" : [ "an existent path." ]
+  },
+  "NON_EXISTENT_PATH" : {
+"message" : [ "a not existent path." ]
+  }
+}
+  },
   "UNTYPED_SCALA_UDF" : {
 "message" : [ "You're using untyped Scala UDF, which does not have the 
input type information. Spark may blindly pass null to the Scala closure with 
primitive-type argument, and the closure will see the default value of the Java 
type for the null argument, e.g. `udf((x: Int) => x, IntegerType)`, the result 
is 0 for null input. To get rid of this error, you could:\n1. use typed Scala 
UDF APIs(without return type parameter), e.g. `udf((x: Int) => x)`\n2. use Java 
UDF APIs, e.g. `udf(ne [...]
   },
diff --git a/core/src/main/scala/org/apache/spark/ErrorInfo.scala 
b/core/src/main/scala/org/apache/spark/ErrorInfo.scala
index a21f33e8833..0447572bb1c 100644
--- a/core/src/main/scala/org/apache/spark/ErrorInfo.scala
+++ b/core/src/main/scala/org/apache/spark/ErrorInfo.scala
@@ -80,9 +80,9 @@ private[spark] object SparkThrowableHelper {
   val errorSubInfo = subClass.getOrElse(subErrorClass,
 throw new IllegalArgumentException(s"Cannot find sub error class 
'$subErrorClass'"))
   val subMessageParameters = messageParameters.tail
-  "[" + errorClass + "." + subErrorClass + "] " + errorInfo.messageFormat +
-
String.format(errorSubInfo.messageFormat.replaceAll("<[a-zA-Z0-9_-]+>", "%s"),
-  subMessageParameters: _*)
+  "[" + errorClass + "." + subErrorClass + "] " + 
String.format((errorInfo.messageFormat +
+errorSubInfo.messageFormat).replaceAll("<[a-zA-Z0-9_-]+>", "%s"),
+subMessageParameters: _*)
 } else {
   "[" + errorClass + "] " + String.format(
 errorInfo.messageFormat.replaceAll("<[a-zA-Z0-9_-]+>", "%s"),
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
index 225315d3f02..4b8d76e8e6f 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
@@ -592,8 +592,13 @@ object QueryExecutionErrors extends QueryErrorsBase {
""".stripMargin)
   }
 
-  def unsupportedSaveModeError(saveMode: String, pathExists: Boolean): 
Throwable = {
-new IllegalStateException(s"unsupported save mode $saveMode ($pathExists)")
+  def saveModeUnsupportedError(saveMode: Any, pathExists: Boolean): Throwable 
= {
+pathExists match {
+  case true => new SparkIllegalArgumentException(errorClass = 
"UNSUPPORTED_SAVE_MODE",
+messageParameters = Array("EXISTENT_PATH", toSQLValue(saveMode, 
StringType)))
+  case _ => new SparkIllegalArgumentException(errorClass = 
"UNSUPPORTED_SAVE_MODE",
+messageParameters = Array("NON_EXISTENT_PATH", toSQLValue(saveMode, 
StringType)))
+