MaxGekk commented on code in PR #44358:
URL: https://github.com/apache/spark/pull/44358#discussion_r1430105163
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala:
##########
@@ -1180,12 +1180,15 @@ object JdbcUtils extends Logging with SQLConfHelper {
}
}
- def classifyException[T](message: String, dialect: JdbcDialect)(f: => T): T = {
+ def classifyException[T](
Review Comment:
> We have the well-defined FAILED_JDBC error now, what do we expect the
dialect to override?
We know which operation failed on the Spark side, but we don't know the root
cause of the failure. A dialect can "understand" a DBMS-specific error better
and map some cases to more precise Spark exceptions. Here is a simple example
(see the sketch below):
- Spark creates an index, and the error class `FAILED_JDBC.CREATE_INDEX`
covers any unclassified failure of that operation, but
- the Postgres dialect knows that the SQL state `42P07` of the `SQLException`
means the object already exists, i.e. in our case the index already exists.
So it can throw the more precise exception `IndexAlreadyExistsException`.
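
To make the flow concrete, here is a minimal, self-contained sketch of the
idea. It is NOT the actual Spark API: the parameter names (`errorClass`,
`description`), the two exception classes, and the `main` demo are
assumptions for illustration only.

```scala
import java.sql.SQLException

object PostgresClassifierSketch {
  // Illustrative stand-ins for Spark's error-class exceptions.
  final class IndexAlreadyExistsException(msg: String) extends RuntimeException(msg)
  final class FailedJdbcException(errorClass: String, msg: String)
    extends RuntimeException(s"[$errorClass] $msg")

  // Runs `f`; on failure, a dialect-specific rule maps the DBMS error to a
  // precise exception, otherwise falls back to the generic FAILED_JDBC error.
  def classifyException[T](errorClass: String, description: String)(f: => T): T =
    try f catch {
      // Postgres SQL state 42P07 (duplicate_table): the object already
      // exists, which for CREATE INDEX means the index already exists.
      case e: SQLException if e.getSQLState == "42P07" =>
        throw new IndexAlreadyExistsException(description)
      case e: Throwable =>
        throw new FailedJdbcException(errorClass, s"$description: ${e.getMessage}")
    }

  def main(args: Array[String]): Unit = {
    try {
      classifyException("FAILED_JDBC.CREATE_INDEX", "create index idx on tbl") {
        // Simulate Postgres rejecting CREATE INDEX on an existing index name.
        throw new SQLException("relation \"idx\" already exists", "42P07")
      }
    } catch {
      case e: IndexAlreadyExistsException => println(s"classified: $e")
    }
  }
}
```

In Spark itself the unclassified fallback would raise the well-defined
`FAILED_JDBC.CREATE_INDEX` error, while the SQL-state mapping lives in the
dialect (here, the Postgres one).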
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]