srielau commented on code in PR #37887:
URL: https://github.com/apache/spark/pull/37887#discussion_r990241574


##########
core/src/main/resources/error/error-classes.json:
##########
@@ -536,12 +563,71 @@
      "Failed to set original permission <permission> back to the created path: <path>. Exception: <message>"
     ]
   },
+  "ROUTINE_ALREADY_EXISTS" : {
+    "message" : [
+      "Cannot create the function <routineName> because it already exists.",
+      "Choose a different name, drop or replace the existing function, or add the IF NOT EXISTS clause to tolerate a pre-existing function."
+    ],
+    "sqlState" : "42000"
+  },
+  "ROUTINE_NOT_FOUND" : {
+    "message" : [
+      "The function <routineName> cannot be found. Verify the spelling and correctness of the schema and catalog.",
+      "If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.",
+      "To tolerate the error on drop use DROP FUNCTION IF EXISTS."
+    ],
+    "sqlState" : "42000"
+  },
+  "SCHEMA_ALREADY_EXISTS" : {

Review Comment:
   I listed this as a concern in the description of the PR. IIUC, @cloud-fan's PR to push errors into the catalog will enable context-specific errors.
   The SQL Standard naming is: catalog.schema.(relation|type|..).
   I was not able to discern when we go into the namespace codepath vs. the schema codepath.
   CREATE SCHEMA gave a namespace error.
   
   Ideally I'd say we add NAMESPACE_NOT_FOUND and NAMESPACE_ALREADY_EXISTS and raise the appropriate one. This should be a follow-on PR.
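   As a rough sketch, following the pattern of the ROUTINE_* entries in this hunk, the proposed classes could look like the fragment below. The message texts, the `<namespaceName>` parameter, and the `sqlState` values here are illustrative assumptions, not final definitions:
   
   ```json
     "NAMESPACE_ALREADY_EXISTS" : {
       "message" : [
         "Cannot create namespace <namespaceName> because it already exists.",
         "Choose a different name, drop the existing namespace, or add the IF NOT EXISTS clause to tolerate a pre-existing namespace."
       ],
       "sqlState" : "42000"
     },
     "NAMESPACE_NOT_FOUND" : {
       "message" : [
         "The namespace <namespaceName> cannot be found. Verify the spelling and correctness of the namespace.",
         "To tolerate the error on drop use DROP NAMESPACE IF EXISTS."
       ],
       "sqlState" : "42000"
     }
   ```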



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

