This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 4abab8b8308 [SPARK-45595] Expose SQLSTATE in error message
4abab8b8308 is described below

commit 4abab8b83085d39674ce790fae02ab806cbda60c
Author: srielau <[email protected]>
AuthorDate: Fri Oct 20 21:02:51 2023 +0800

    [SPARK-45595] Expose SQLSTATE in error message
    
    ### What changes were proposed in this pull request?
    
    In this PR we include the SQLSTATE as part of the error message when spark.sql.error.messageFormat = PRETTY (the default):
    
    ```
    [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. SQLSTATE: 22012
    == SQL(line 1, position 8) ==
    SELECT 1/0
           ^^^
    ```
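
    A minimal way to see this end to end from the Scala shell (a sketch, not part of this patch; it assumes a SparkSession named `spark` and that ANSI mode is enabled so the division raises an error, with PRETTY being the active message format by default):
    
    ```
    // Sketch only: trigger DIVIDE_BY_ZERO and inspect the formatted message.
    import org.apache.spark.SparkArithmeticException

    spark.conf.set("spark.sql.ansi.enabled", "true")
    try {
      spark.sql("SELECT 1/0").collect()
    } catch {
      case e: SparkArithmeticException =>
        println(e.getMessage)   // ends with "... SQLSTATE: 22012" before the query context
        println(e.getSqlState)  // "22012", also available programmatically
    }
    ```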
    
    ### Alternatives
    
    Aside from minor changes (like colon vs. equals, etc.), there are more options for where to place the information.
    
    - Between error class and message
    
    ```
    The state is added right before the message text, using a set of brackets (e.g. round):
    [DIVIDE_BY_ZERO](22012) Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    SELECT 1/0
           ^^^
    ```
    
    - At the very end after the context
    
    ```
    [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    SELECT 1/0
           ^^^
    SQLSTATE: 22012
    ```
    
    Both of these alternatives have issues:
    
    - The SQLSTATE itself is not interesting to a human. It is five alphanumeric characters that are hard to remember and carry less information than the human-readable error class, so it adds nothing for someone reading the message.
    - The "context" that currently (optionally) trails the message can be lengthy and complex, and we may decide to refine it in the future, so placing the SQLSTATE after it makes it hard to find. For example, if we ever nest error messages and the "stack trace" shows several of them, a trailing SQLSTATE would act like a closing bracket that is hard to match up.
    
    ### Why are the changes needed?
    
    To give users useful information, e.g. to catch exceptions based on SQLSTATE or to look up documentation for the error condition.
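
    For illustration, a client could dispatch on the SQLSTATE class without parsing the message text; a rough sketch (the handler itself is hypothetical, `getSqlState` is the existing SparkThrowable API):
    
    ```
    // Hypothetical handler: branch on the SQLSTATE class of any SparkThrowable.
    import org.apache.spark.SparkThrowable

    def handle(t: Throwable): Unit = t match {
      case st: SparkThrowable if st.getSqlState != null &&
          st.getSqlState.startsWith("22") =>
        // Class 22 = data exception (e.g. 22012, division by zero).
        println(s"Data exception ${st.getSqlState}: ${t.getMessage}")
      case st: SparkThrowable =>
        println(s"Failed with SQLSTATE ${st.getSqlState}: ${t.getMessage}")
      case _ =>
        throw t
    }
    ```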
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, the error messages change.
    
    ### How was this patch tested?
    
    Existing QA
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No
    
    Closes #43438 from srielau/SPARK-45595-Expose-SQLSTATE-in-error-message.
    
    Lead-authored-by: srielau <[email protected]>
    Co-authored-by: Serge Rielau <[email protected]>
    Signed-off-by: Wenchen Fan <[email protected]>
---
 common/utils/src/main/resources/error/error-classes.json | 16 ++++++----------
 .../scala/org/apache/spark/SparkThrowableHelper.scala    |  4 +++-
 .../scala/org/apache/spark/sql/avro/AvroSerdeSuite.scala |  2 +-
 .../apache/spark/sql/protobuf/ProtobufSerdeSuite.scala   |  5 +++--
 .../scala/org/apache/spark/SparkThrowableSuite.scala     | 16 ++++++++++------
 .../apache/spark/metrics/sink/GraphiteSinkSuite.scala    | 10 ++++------
 .../test/scala/org/apache/spark/ui/UIUtilsSuite.scala    |  2 +-
 docs/sql-error-conditions.md                             |  6 ++++++
 .../org/apache/spark/sql/catalyst/parser/parsers.scala   |  2 +-
 .../spark/sql/catalyst/trees/SQLQueryContext.scala       |  4 ++--
 .../org/apache/spark/sql/errors/DataTypeErrorsBase.scala |  2 +-
 .../apache/spark/sql/catalyst/parser/AstBuilder.scala    |  4 ++--
 .../apache/spark/sql/errors/QueryCompilationErrors.scala |  4 ++--
 .../spark/sql/catalyst/catalog/SessionCatalogSuite.scala |  5 +++--
 .../sql/catalyst/parser/ExpressionParserSuite.scala      |  7 ++++---
 .../apache/spark/sql/catalyst/trees/TreeNodeSuite.scala  |  2 +-
 .../sql-tests/analyzer-results/ansi/literals.sql.out     |  6 ++++--
 .../sql-tests/analyzer-results/literals.sql.out          |  6 ++++--
 .../resources/sql-tests/results/ansi/literals.sql.out    |  6 ++++--
 .../test/resources/sql-tests/results/literals.sql.out    |  6 ++++--
 .../org/apache/spark/sql/ColumnExpressionSuite.scala     |  2 +-
 .../apache/spark/sql/hive/thriftserver/CliSuite.scala    |  7 ++++---
 .../thriftserver/ThriftServerWithSparkContextSuite.scala |  4 ++--
 .../org/apache/spark/sql/hive/MultiDatabaseSuite.scala   |  8 ++++----
 24 files changed, 77 insertions(+), 59 deletions(-)

diff --git a/common/utils/src/main/resources/error/error-classes.json 
b/common/utils/src/main/resources/error/error-classes.json
index 5731022b6f4..5ef70b583df 100644
--- a/common/utils/src/main/resources/error/error-classes.json
+++ b/common/utils/src/main/resources/error/error-classes.json
@@ -2051,6 +2051,12 @@
     },
     "sqlState" : "42K07"
   },
+  "INVALID_SCHEMA_OR_RELATION_NAME" : {
+    "message" : [
+      "<name> is not a valid name for tables/schemas. Valid names only contain 
alphabet characters, numbers and _."
+    ],
+    "sqlState" : "42602"
+  },
   "INVALID_SET_SYNTAX" : {
     "message" : [
       "Expected format is 'SET', 'SET key', or 'SET key=value'. If you want to 
include special characters in key, or include semicolon in value, please use 
backquotes, e.g., SET `key`=`value`."
@@ -3981,11 +3987,6 @@
       "<msg>."
     ]
   },
-  "_LEGACY_ERROR_TEMP_0061" : {
-    "message" : [
-      "<msg>."
-    ]
-  },
   "_LEGACY_ERROR_TEMP_0062" : {
     "message" : [
       "<msg>."
@@ -4208,11 +4209,6 @@
       "<command> does not support nested column: <column>."
     ]
   },
-  "_LEGACY_ERROR_TEMP_1065" : {
-    "message" : [
-      "`<name>` is not a valid name for tables/databases. Valid names only 
contain alphabet characters, numbers and _."
-    ]
-  },
   "_LEGACY_ERROR_TEMP_1066" : {
     "message" : [
       "<database> is a system preserved database, you cannot create a database 
with this name."
diff --git 
a/common/utils/src/main/scala/org/apache/spark/SparkThrowableHelper.scala 
b/common/utils/src/main/scala/org/apache/spark/SparkThrowableHelper.scala
index f56dcab2e48..b312a1a7e22 100644
--- a/common/utils/src/main/scala/org/apache/spark/SparkThrowableHelper.scala
+++ b/common/utils/src/main/scala/org/apache/spark/SparkThrowableHelper.scala
@@ -51,9 +51,11 @@ private[spark] object SparkThrowableHelper {
       messageParameters: Map[String, String],
       context: String): String = {
     val displayMessage = errorReader.getErrorMessage(errorClass, 
messageParameters)
+    val sqlState = getSqlState(errorClass)
+    val displaySqlState = if (sqlState == null) "" else s" SQLSTATE: $sqlState"
     val displayQueryContext = (if (context.isEmpty) "" else "\n") + context
     val prefix = if (errorClass.startsWith("_LEGACY_ERROR_")) "" else 
s"[$errorClass] "
-    s"$prefix$displayMessage$displayQueryContext"
+    s"$prefix$displayMessage$displaySqlState$displayQueryContext"
   }
 
   def getSqlState(errorClass: String): String = {
diff --git 
a/connector/avro/src/test/scala/org/apache/spark/sql/avro/AvroSerdeSuite.scala 
b/connector/avro/src/test/scala/org/apache/spark/sql/avro/AvroSerdeSuite.scala
index 7f99f3c737c..d9d20b8732f 100644
--- 
a/connector/avro/src/test/scala/org/apache/spark/sql/avro/AvroSerdeSuite.scala
+++ 
b/connector/avro/src/test/scala/org/apache/spark/sql/avro/AvroSerdeSuite.scala
@@ -177,7 +177,7 @@ class AvroSerdeSuite extends SparkFunSuite {
       case Serializer =>
         s"Cannot convert SQL type ${catalystSchema.sql} to Avro type 
$avroSchema."
     }
-    assert(e.getMessage === expectMsg)
+    assert(e.getMessage.contains(expectMsg))
     assert(e.getCause.getMessage === expectedCauseMessage)
   }
 
diff --git 
a/connector/protobuf/src/test/scala/org/apache/spark/sql/protobuf/ProtobufSerdeSuite.scala
 
b/connector/protobuf/src/test/scala/org/apache/spark/sql/protobuf/ProtobufSerdeSuite.scala
index 49d864f88f6..56a980d05fb 100644
--- 
a/connector/protobuf/src/test/scala/org/apache/spark/sql/protobuf/ProtobufSerdeSuite.scala
+++ 
b/connector/protobuf/src/test/scala/org/apache/spark/sql/protobuf/ProtobufSerdeSuite.scala
@@ -262,10 +262,11 @@ class ProtobufSerdeSuite extends SharedSparkSession with 
ProtobufTestBase {
     val expectMsg = serdeFactory match {
       case Deserializer =>
         s"[CANNOT_CONVERT_PROTOBUF_MESSAGE_TYPE_TO_SQL_TYPE] Unable to 
convert" +
-          s" ${protoSchema.getName} of Protobuf to SQL type 
${toSQLType(catalystSchema)}."
+          s" ${protoSchema.getName} of Protobuf to SQL type 
${toSQLType(catalystSchema)}." +
+          " SQLSTATE: 42846"
       case Serializer =>
         s"[UNABLE_TO_CONVERT_TO_PROTOBUF_MESSAGE_TYPE] Unable to convert SQL 
type" +
-          s" ${toSQLType(catalystSchema)} to Protobuf type 
${protoSchema.getName}."
+          s" ${toSQLType(catalystSchema)} to Protobuf type 
${protoSchema.getName}. SQLSTATE: 42K0G"
     }
 
     assert(e.getMessage === expectMsg)
diff --git a/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala 
b/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala
index a4120637b69..4d011398c63 100644
--- a/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala
+++ b/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala
@@ -413,7 +413,7 @@ class SparkThrowableSuite extends SparkFunSuite {
       "[DIVIDE_BY_ZERO] Division by zero. " +
       "Use `try_divide` to tolerate divisor being 0 and return NULL instead. " 
+
         "If necessary set foo to \"false\" " +
-        "to bypass this error.")
+        "to bypass this error. SQLSTATE: 22012")
   }
 
   test("Error message is formatted") {
@@ -423,7 +423,8 @@ class SparkThrowableSuite extends SparkFunSuite {
         Map("objectName" -> "`foo`", "proposal" -> "`bar`, `baz`")
       ) ==
       "[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function 
parameter with " +
-        "name `foo` cannot be resolved. Did you mean one of the following? 
[`bar`, `baz`]."
+        "name `foo` cannot be resolved. Did you mean one of the following? 
[`bar`, `baz`]." +
+      " SQLSTATE: 42703"
     )
 
     assert(
@@ -435,7 +436,8 @@ class SparkThrowableSuite extends SparkFunSuite {
         ""
       ) ==
       "[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function 
parameter with " +
-        "name `foo` cannot be resolved. Did you mean one of the following? 
[`bar`, `baz`]."
+        "name `foo` cannot be resolved. Did you mean one of the following? 
[`bar`, `baz`]." +
+        " SQLSTATE: 42703"
     )
   }
 
@@ -446,7 +448,8 @@ class SparkThrowableSuite extends SparkFunSuite {
         Map("objectName" -> "`foo`", "proposal" -> "`${bar}`, `baz`")
       ) ==
         "[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function 
parameter with " +
-          "name `foo` cannot be resolved. Did you mean one of the following? 
[`${bar}`, `baz`]."
+          "name `foo` cannot be resolved. Did you mean one of the following? 
[`${bar}`, `baz`]." +
+          " SQLSTATE: 42703"
     )
   }
 
@@ -513,8 +516,9 @@ class SparkThrowableSuite extends SparkFunSuite {
 
     assert(SparkThrowableHelper.getMessage(e, PRETTY) ===
       "[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor 
being 0 " +
-      "and return NULL instead. If necessary set CONFIG to \"false\" to bypass 
this error." +
-      "\nQuery summary")
+        "and return NULL instead. If necessary set CONFIG to \"false\" to 
bypass this error." +
+        " SQLSTATE: 22012" +
+        "\nQuery summary")
     // scalastyle:off line.size.limit
     assert(SparkThrowableHelper.getMessage(e, MINIMAL) ===
       """{
diff --git 
a/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala 
b/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala
index 0416854e19d..55d82aed5c3 100644
--- a/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala
+++ b/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala
@@ -88,9 +88,8 @@ class GraphiteSinkSuite extends SparkFunSuite {
     val e = intercept[SparkException] {
       new GraphiteSink(props, registry)
     }
-    assert(e.getErrorClass === "GRAPHITE_SINK_PROPERTY_MISSING")
-    assert(e.getMessage ===
-      "[GRAPHITE_SINK_PROPERTY_MISSING] Graphite sink requires 'host' 
property.")
+    checkError(e, errorClass = "GRAPHITE_SINK_PROPERTY_MISSING",
+      parameters = Map("property" -> "host"))
   }
 
   test("GraphiteSink without port") {
@@ -101,9 +100,8 @@ class GraphiteSinkSuite extends SparkFunSuite {
     val e = intercept[SparkException] {
       new GraphiteSink(props, registry)
     }
-    assert(e.getErrorClass === "GRAPHITE_SINK_PROPERTY_MISSING")
-    assert(e.getMessage ===
-      "[GRAPHITE_SINK_PROPERTY_MISSING] Graphite sink requires 'port' 
property.")
+    checkError(e, errorClass = "GRAPHITE_SINK_PROPERTY_MISSING",
+      parameters = Map("property" -> "port"))
   }
 
   test("GraphiteSink with invalid protocol") {
diff --git a/core/src/test/scala/org/apache/spark/ui/UIUtilsSuite.scala 
b/core/src/test/scala/org/apache/spark/ui/UIUtilsSuite.scala
index aecd25f6c8d..88a6fcad457 100644
--- a/core/src/test/scala/org/apache/spark/ui/UIUtilsSuite.scala
+++ b/core/src/test/scala/org/apache/spark/ui/UIUtilsSuite.scala
@@ -192,7 +192,7 @@ class UIUtilsSuite extends SparkFunSuite {
 
   // scalastyle:off line.size.limit
   test("SPARK-44367: Extract errorClass from errorMsg with errorMessageCell") {
-    val e1 = "Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 
times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (10.221.98.22 
executor driver): org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] 
Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL 
instead. If necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass 
this error.\n== SQL(line 1, position 8) ==\nselect a/b from src\n       
^^^\n\n\tat org.apache.spark.sql. [...]
+    val e1 = "Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 
times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (10.221.98.22 
executor driver): org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] 
Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL 
instead. If necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass 
this error.\n== SQL (line 1, position 8) ==\nselect a/b from src\n       
^^^\n\n\tat org.apache.spark.sql [...]
     val cell1 = UIUtils.errorMessageCell(e1)
     assert(cell1 === <td>{"DIVIDE_BY_ZERO"}{UIUtils.detailsUINode(isMultiline 
= true, e1)}</td>)
 
diff --git a/docs/sql-error-conditions.md b/docs/sql-error-conditions.md
index d2100b5505b..ce39fce85e7 100644
--- a/docs/sql-error-conditions.md
+++ b/docs/sql-error-conditions.md
@@ -1182,6 +1182,12 @@ The input schema `<inputSchema>` is not a valid schema 
string.
 
 For more details see 
[INVALID_SCHEMA](sql-error-conditions-invalid-schema-error-class.html)
 
+### INVALID_SCHEMA_OR_RELATION_NAME
+
+[SQLSTATE: 
42602](sql-error-conditions-sqlstates.html#class-42-syntax-error-or-access-rule-violation)
+
+`<name>` is not a valid name for tables/schemas. Valid names only contain 
alphabet characters, numbers and _.
+
 ### INVALID_SET_SYNTAX
 
 [SQLSTATE: 
42000](sql-error-conditions-sqlstates.html#class-42-syntax-error-or-access-rule-violation)
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/parser/parsers.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/parser/parsers.scala
index 8e4d2ab1615..51d2b4beab2 100644
--- a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/parser/parsers.scala
+++ b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/parser/parsers.scala
@@ -230,7 +230,7 @@ class ParseException(
     builder ++= "\n" ++= message
     start match {
       case Origin(Some(l), Some(p), _, _, _, _, _) =>
-        builder ++= s"(line $l, pos $p)\n"
+        builder ++= s" (line $l, pos $p)\n"
         command.foreach { cmd =>
           val (above, below) = cmd.split("\n").splitAt(l)
           builder ++= "\n== SQL ==\n"
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/SQLQueryContext.scala
 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/SQLQueryContext.scala
index 99889cf7dae..5b29cb3dde7 100644
--- 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/SQLQueryContext.scala
+++ 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/SQLQueryContext.scala
@@ -36,7 +36,7 @@ case class SQLQueryContext(
 
   /**
    * The SQL query context of current node. For example:
-   * == SQL of VIEW v1(line 1, position 25) ==
+   * == SQL of VIEW v1 (line 1, position 25) ==
    * SELECT '' AS five, i.f1, i.f1 - int('2') AS x FROM INT4_TBL i
    *                          ^^^^^^^^^^^^^^^
    */
@@ -48,7 +48,7 @@ case class SQLQueryContext(
       val positionContext = if (line.isDefined && startPosition.isDefined) {
         // Note that the line number starts from 1, while the start position 
starts from 0.
         // Here we increase the start position by 1 for consistency.
-        s"(line ${line.get}, position ${startPosition.get + 1})"
+        s" (line ${line.get}, position ${startPosition.get + 1})"
       } else {
         ""
       }
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/errors/DataTypeErrorsBase.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/errors/DataTypeErrorsBase.scala
index aed3c681365..911d900053c 100644
--- 
a/sql/api/src/main/scala/org/apache/spark/sql/errors/DataTypeErrorsBase.scala
+++ 
b/sql/api/src/main/scala/org/apache/spark/sql/errors/DataTypeErrorsBase.scala
@@ -89,7 +89,7 @@ private[sql] trait DataTypeErrorsBase {
     "\"" + elem + "\""
   }
 
-    def getSummary(sqlContext: SQLQueryContext): String = {
+  def getSummary(sqlContext: SQLQueryContext): String = {
     if (sqlContext == null) "" else sqlContext.summary
   }
 
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
index 9abca8b95cf..8ce58ef7688 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
@@ -2767,8 +2767,8 @@ class AstBuilder extends DataTypeAstBuilder with 
SQLConfHelper with Logging {
     } catch {
       case e: SparkArithmeticException =>
         throw new ParseException(
-          errorClass = "_LEGACY_ERROR_TEMP_0061",
-          messageParameters = Map("msg" -> e.getMessage),
+          errorClass = e.getErrorClass,
+          messageParameters = e.getMessageParameters.asScala.toMap,
           ctx)
     }
   }
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala
index 1009c499aa3..92b1ace67d4 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala
@@ -992,8 +992,8 @@ private[sql] object QueryCompilationErrors extends 
QueryErrorsBase with Compilat
 
   def invalidNameForTableOrDatabaseError(name: String): Throwable = {
     new AnalysisException(
-      errorClass = "_LEGACY_ERROR_TEMP_1065",
-      messageParameters = Map("name" -> name))
+      errorClass = "INVALID_SCHEMA_OR_RELATION_NAME",
+      messageParameters = Map("name" -> toSQLId(name)))
   }
 
   def cannotCreateDatabaseWithSameNameAsPreservedDatabaseError(database: 
String): Throwable = {
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
index b668386bc47..e9a60ff17fc 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
@@ -28,6 +28,7 @@ import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.parser.CatalystSqlParser
 import org.apache.spark.sql.catalyst.plans.logical.{LeafCommand, LogicalPlan, 
Project, Range, SubqueryAlias, View}
 import org.apache.spark.sql.catalyst.util.ResolveDefaultColumns
+import org.apache.spark.sql.catalyst.util.TypeUtils.toSQLId
 import org.apache.spark.sql.connector.catalog.CatalogManager
 import 
org.apache.spark.sql.connector.catalog.CatalogManager.SESSION_CATALOG_NAME
 import org.apache.spark.sql.connector.catalog.SupportsNamespaces.PROP_OWNER
@@ -120,8 +121,8 @@ abstract class SessionCatalogSuite extends AnalysisTest 
with Eventually {
       exception = intercept[AnalysisException] {
         func(name)
       },
-      errorClass = "_LEGACY_ERROR_TEMP_1065",
-      parameters = Map("name" -> name)
+      errorClass = "INVALID_SCHEMA_OR_RELATION_NAME",
+      parameters = Map("name" -> toSQLId(name))
     )
   }
 
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
index 1b9c2709ecd..fe5d024a6b3 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
@@ -823,9 +823,10 @@ class ExpressionParserSuite extends AnalysisTest {
     assertEqual("123.08BD", Literal(BigDecimal("123.08").underlying()))
     checkError(
       exception = parseException("1.20E-38BD"),
-      errorClass = "_LEGACY_ERROR_TEMP_0061",
-      parameters = Map("msg" ->
-        "[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 40 
exceeds max precision 38."),
+      errorClass = "DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION",
+      parameters = Map(
+        "precision" -> "40",
+        "maxPrecision" -> "38"),
       context = ExpectedContext(
         fragment = "1.20E-38BD",
         start = 0,
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/trees/TreeNodeSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/trees/TreeNodeSuite.scala
index c2f7287758d..33ba4f05972 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/trees/TreeNodeSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/trees/TreeNodeSuite.scala
@@ -891,7 +891,7 @@ class TreeNodeSuite extends SparkFunSuite with SQLHelper {
       objectType = Some("VIEW"),
       objectName = Some("some_view"))
     val expectedSummary =
-      """== SQL of VIEW some_view(line 3, position 39) ==
+      """== SQL of VIEW some_view (line 3, position 39) ==
         |...7890 + 1234567890 + 1234567890, cast('a'
         |                                   ^^^^^^^^
         |as /* comment */
diff --git 
a/sql/core/src/test/resources/sql-tests/analyzer-results/ansi/literals.sql.out 
b/sql/core/src/test/resources/sql-tests/analyzer-results/ansi/literals.sql.out
index 6bf956d26ae..48368ca1172 100644
--- 
a/sql/core/src/test/resources/sql-tests/analyzer-results/ansi/literals.sql.out
+++ 
b/sql/core/src/test/resources/sql-tests/analyzer-results/ansi/literals.sql.out
@@ -427,9 +427,11 @@ select 1.20E-38BD
 -- !query analysis
 org.apache.spark.sql.catalyst.parser.ParseException
 {
-  "errorClass" : "_LEGACY_ERROR_TEMP_0061",
+  "errorClass" : "DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION",
+  "sqlState" : "22003",
   "messageParameters" : {
-    "msg" : "[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 40 
exceeds max precision 38."
+    "maxPrecision" : "38",
+    "precision" : "40"
   },
   "queryContext" : [ {
     "objectType" : "",
diff --git 
a/sql/core/src/test/resources/sql-tests/analyzer-results/literals.sql.out 
b/sql/core/src/test/resources/sql-tests/analyzer-results/literals.sql.out
index 6bf956d26ae..48368ca1172 100644
--- a/sql/core/src/test/resources/sql-tests/analyzer-results/literals.sql.out
+++ b/sql/core/src/test/resources/sql-tests/analyzer-results/literals.sql.out
@@ -427,9 +427,11 @@ select 1.20E-38BD
 -- !query analysis
 org.apache.spark.sql.catalyst.parser.ParseException
 {
-  "errorClass" : "_LEGACY_ERROR_TEMP_0061",
+  "errorClass" : "DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION",
+  "sqlState" : "22003",
   "messageParameters" : {
-    "msg" : "[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 40 
exceeds max precision 38."
+    "maxPrecision" : "38",
+    "precision" : "40"
   },
   "queryContext" : [ {
     "objectType" : "",
diff --git 
a/sql/core/src/test/resources/sql-tests/results/ansi/literals.sql.out 
b/sql/core/src/test/resources/sql-tests/results/ansi/literals.sql.out
index a3a4d714525..3006d30d0a0 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/literals.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/literals.sql.out
@@ -482,9 +482,11 @@ struct<>
 -- !query output
 org.apache.spark.sql.catalyst.parser.ParseException
 {
-  "errorClass" : "_LEGACY_ERROR_TEMP_0061",
+  "errorClass" : "DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION",
+  "sqlState" : "22003",
   "messageParameters" : {
-    "msg" : "[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 40 
exceeds max precision 38."
+    "maxPrecision" : "38",
+    "precision" : "40"
   },
   "queryContext" : [ {
     "objectType" : "",
diff --git a/sql/core/src/test/resources/sql-tests/results/literals.sql.out 
b/sql/core/src/test/resources/sql-tests/results/literals.sql.out
index a3a4d714525..3006d30d0a0 100644
--- a/sql/core/src/test/resources/sql-tests/results/literals.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/literals.sql.out
@@ -482,9 +482,11 @@ struct<>
 -- !query output
 org.apache.spark.sql.catalyst.parser.ParseException
 {
-  "errorClass" : "_LEGACY_ERROR_TEMP_0061",
+  "errorClass" : "DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION",
+  "sqlState" : "22003",
   "messageParameters" : {
-    "msg" : "[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 40 
exceeds max precision 38."
+    "maxPrecision" : "38",
+    "precision" : "40"
   },
   "queryContext" : [ {
     "objectType" : "",
diff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala
index 8a10050336c..772eb9ff009 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala
@@ -2570,7 +2570,7 @@ class ColumnExpressionSuite extends QueryTest with 
SharedSparkSession {
 
     assert(e3.getCause.isInstanceOf[RuntimeException])
     assert(e3.getCause.getMessage.matches(
-      "\\[USER_RAISED_EXCEPTION\\] '\\(a#\\d+ > b#\\d+\\)' is not true!"))
+      "\\[USER_RAISED_EXCEPTION\\] '\\(a#\\d+ > b#\\d+\\)' is not true! 
SQLSTATE: P0001"))
   }
 
   test("raise_error") {
diff --git 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
index d5045cb511c..5391965ded2 100644
--- 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
+++ 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
@@ -639,7 +639,8 @@ class CliSuite extends SparkFunSuite {
 
   test("SPARK-37694: delete [jar|file|archive] shall use spark sql processor") 
{
     runCliWithin(2.minute, errorResponses = Seq("ParseException"))(
-      "delete jar dummy.jar;" -> "Syntax error at or near 'jar': missing 
'FROM'.(line 1, pos 7)")
+      "delete jar dummy.jar;" ->
+        "Syntax error at or near 'jar': missing 'FROM'. SQLSTATE: 42601 (line 
1, pos 7)")
   }
 
   test("SPARK-37906: Spark SQL CLI should not pass final comment") {
@@ -714,7 +715,7 @@ class CliSuite extends SparkFunSuite {
       format = ErrorMessageFormat.PRETTY,
       errorMessage =
         """[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate 
divisor being 0 and return NULL instead. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
-          |== SQL(line 1, position 8) ==
+          |== SQL (line 1, position 8) ==
           |select 1 / 0
           |       ^^^^^
           |""".stripMargin,
@@ -723,7 +724,7 @@ class CliSuite extends SparkFunSuite {
       format = ErrorMessageFormat.PRETTY,
       errorMessage =
         """[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate 
divisor being 0 and return NULL instead. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
-          |== SQL(line 1, position 8) ==
+          |== SQL (line 1, position 8) ==
           |select 1 / 0
           |       ^^^^^
           |
diff --git 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
index 0589f9de609..72e6fae92cb 100644
--- 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
+++ 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/ThriftServerWithSparkContextSuite.scala
@@ -163,8 +163,8 @@ trait ThriftServerWithSparkContextSuite extends 
SharedThriftServer {
       val e1 = intercept[HiveSQLException](exec(sql))
       // scalastyle:off line.size.limit
       assert(e1.getMessage ===
-        """Error running query: [DIVIDE_BY_ZERO] 
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. 
Use `try_divide` to tolerate divisor being 0 and return NULL instead. If 
necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
-          |== SQL(line 1, position 8) ==
+        """Error running query: [DIVIDE_BY_ZERO] 
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. 
Use `try_divide` to tolerate divisor being 0 and return NULL instead. If 
necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. 
SQLSTATE: 22012
+          |== SQL (line 1, position 8) ==
           |select 1 / 0
           |       ^^^^^
           |""".stripMargin)
diff --git 
a/sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala 
b/sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala
index f43d5317aa7..3d5e2851fa7 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala
@@ -287,7 +287,7 @@ class MultiDatabaseSuite extends QueryTest with 
SQLTestUtils with TestHiveSingle
 
     withTempDir { dir =>
       {
-        val message = intercept[AnalysisException] {
+        val e = intercept[AnalysisException] {
           sql(
             s"""
             |CREATE TABLE `d:b`.`t:a` (a int)
@@ -296,9 +296,9 @@ class MultiDatabaseSuite extends QueryTest with 
SQLTestUtils with TestHiveSingle
             |  path '${dir.toURI}'
             |)
             """.stripMargin)
-        }.getMessage
-        assert(message.contains("`t:a` is not a valid name for 
tables/databases. " +
-          "Valid names only contain alphabet characters, numbers and _."))
+        }
+        checkError(e, errorClass = "INVALID_SCHEMA_OR_RELATION_NAME",
+          parameters = Map("name" -> "`t:a`"))
       }
 
       {


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
