srielau commented on code in PR #37887:
URL: https://github.com/apache/spark/pull/37887#discussion_r995911916
##########
core/src/main/resources/error/error-classes.json:
##########
@@ -536,12 +563,71 @@
"Failed to set original permission <permission> back to the created
path: <path>. Exception: <message>"
]
},
+ "ROUTINE_ALREADY_EXISTS" : {
+ "message" : [
+ "Cannot create the function <routineName> because it already exists.",
+ "Choose a different name, drop or replace the existing function, or add
the IF NOT EXISTS clause to tolerate a pre-existing function."
+ ],
+ "sqlState" : "42000"
+ },
+ "ROUTINE_NOT_FOUND" : {
+ "message" : [
+ "The function <routineName> cannot be found. Verify the spelling and
correctness of the schema and catalog.",
+ "If you did not qualify the name with a schema and catalog, verify the
current_schema() output, or qualify the name with the correct schema and
catalog.",
+ "To tolerate the error on drop use DROP FUNCTION IF EXISTS."
+ ],
+ "sqlState" : "42000"
+ },
+ "SCHEMA_ALREADY_EXISTS" : {
+ "message" : [
+ "Cannot create schema <schemaName> because it already exists.",
+ "Choose a different name, drop the existing schema, or add the IF NOT
EXISTS clause to tolerate pre-existing schema."
+ ],
+ "sqlState" : "42000"
+ },
+ "SCHEMA_NOT_EMPTY" : {
+ "message" : [
+ "Cannot drop a schema <schemaName> because it contains objects.",
+ "Use DROP SCHEMA ... CASCADE to drop the schema and all its objects."
+ ],
+ "sqlState" : "42000"
+ },
+ "SCHEMA_NOT_FOUND" : {
+ "message" : [
+ "The schema <schemaName> cannot be found. Verify the spelling and
correctness of the schema and catalog.",
+ "If you did not qualify the name with a catalog, verify the
current_schema() output, or qualify the name with the correct catalog.",
+ "To tolerate the error on drop use DROP SCHEMA IF EXISTS."
+ ],
+ "sqlState" : "42000"
+ },
"SECOND_FUNCTION_ARGUMENT_NOT_INTEGER" : {
"message" : [
"The second argument of <functionName> function needs to be an integer."
],
"sqlState" : "22023"
},
+ "TABLE_OR_VIEW_ALREADY_EXISTS" : {
+ "message" : [
+ "Cannot create table or view <relationName> because it already exists.",
+ "Choose a different name, drop or replace the existing object, or add
the IF NOT EXISTS clause to tolerate pre-existing objects."
+ ],
+ "sqlState" : "42000"
+ },
+ "TABLE_OR_VIEW_NOT_FOUND" : {
+ "message" : [
+ "The table or view <relationName> cannot be found. Verify the spelling
and correctness of the schema and catalog.",
+ "If you did not qualify the name with a schema, verify the
current_schema() output, or qualify the name with the correct schema and
catalog.",
+ "To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF
EXISTS."
+ ],
+ "sqlState" : "42000"
+ },
+ "TEMP_TABLE_OR_VIEW_ALREADY_EXISTS" : {
+ "message" : [
+ "Cannot create the temporary view <relationName> because it already
exists.",
Review Comment:
We should be careful what we wish for here. When a catalog runs into the
primary key violation on the "relation" scope, it may not even know what kind
of object it was trying to create. So we would force code to drag metadata
around where it is not needed.
Either way, in general we can always add sub-error classes for states and
objects, but is that really useful?
This classification may be mathematically cool, but I don't think it makes
the error messages better.
In fact it can make them worse, because we want applications to CATCH these
errors. If there are too many, the exception handlers get lost. (In fact I do
have regrets about approving UNRESOLVED_COLUMN.WITH(OUT)RECOMMENDATION.
IMHO we totally overshot and made it worse, just to avoid an empty list.)
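To make the catchability concern concrete, here is a minimal, Spark-free
sketch (plain Python; the `AnalysisError` class and `handle` function are
hypothetical stand-ins for Spark's `SparkThrowable` / `getErrorClass`). Every
additional sub-error class is another branch that every application handler
must enumerate, or the error falls through unhandled:

```python
# Hypothetical stand-in for an exception carrying a Spark-style error class.
class AnalysisError(Exception):
    def __init__(self, error_class, message):
        super().__init__(message)
        self.error_class = error_class


def handle(exc):
    """Dispatch on the error class, as an application's handler would."""
    if exc.error_class == "TABLE_OR_VIEW_NOT_FOUND":
        return "retry after creating the table"
    if exc.error_class == "SCHEMA_NOT_FOUND":
        return "retry after creating the schema"
    # Each new sub-class (e.g. a hypothetical TEMP_TABLE_OR_VIEW_NOT_FOUND)
    # would need yet another branch here -- the maintenance cost the
    # review comment warns about -- or it falls through unrecognized.
    return f"unhandled: {exc.error_class}"
```

A flat, stable set of error classes keeps such handlers short and predictable.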
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]