MaxGekk commented on code in PR #44449:
URL: https://github.com/apache/spark/pull/44449#discussion_r1444308138
##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##########
@@ -633,13 +633,31 @@ abstract class JdbcDialect extends Serializable with Logging {
 * @param e The dialect specific exception.
 * @param errorClass The error class assigned in the case of an unclassified `e`
* @param messageParameters The message parameters of `errorClass`
+ * @param description The error description
* @return `AnalysisException` or its sub-class.
*/
def classifyException(
e: Throwable,
errorClass: String,
- messageParameters: Map[String, String]): AnalysisException = {
- new AnalysisException(errorClass, messageParameters, cause = Some(e))
+ messageParameters: Map[String, String],
+ description: String): AnalysisException = {
+ classifyException(description, e)
Review Comment:
This might not be fully compatible with existing JDBC dialects: the values in
`messageParameters` have already been preprocessed (quoting, for instance), and a
JDBC dialect could use a regexp to parse parameters out of
`message`/`description`. Theoretically, that preprocessing can break the regexps.
And if we pass the quoted values to a JDBC dialect, it can throw a Spark
exception that performs quoting internally, as `PostgresDialect` does with
`NonEmptyNamespaceException`. The latter quotes inside its constructor, so we
would end up with double-quoted values, which is already a bug.
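To make the double-quoting concern concrete, here is a minimal, self-contained sketch (not Spark code). The `quote` helper is a hypothetical stand-in for Spark's backtick identifier quoting; it shows what happens when an already-quoted value from `messageParameters` is quoted again by an exception constructor:

```scala
// Sketch of the double-quoting hazard described above.
// `quote` mimics backtick-style identifier quoting: wrap the name in
// backticks and escape embedded backticks by doubling them.
object DoubleQuotingSketch {
  def quote(name: String): String =
    "`" + name.replace("`", "``") + "`"

  def main(args: Array[String]): Unit = {
    val raw = "my_schema"
    // The caller pre-quotes the value before placing it in messageParameters.
    val preQuoted = quote(raw)
    // An exception that quotes internally (as NonEmptyNamespaceException does)
    // then quotes the already-quoted value a second time.
    val doubleQuoted = quote(preQuoted)
    println(preQuoted)    // `my_schema`
    println(doubleQuoted) // ```my_schema``` -- garbled identifier
  }
}
```

The same shape of bug appears whenever quoting is applied both by the caller and inside the exception's constructor, which is why passing pre-quoted parameters into dialect-specific exceptions is risky.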
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]