SavicStefan opened a new pull request, #53830:
URL: https://github.com/apache/spark/pull/53830

   
   ### What changes were proposed in this pull request?
   Instead of explicitly calling `tableExists` in `JDBCTableCatalog.loadTable(...)`, this change detects whether a table exists by catching the `SQLException` thrown by `getQueryOutputSchema` and classifying it with the dialect-specific `isObjectNotFoundException` method.
   
   By checking `isObjectNotFoundException` before `isSyntaxErrorBestEffort`, the code can reliably distinguish the case where the table does not exist from genuine SQL syntax errors. This order matters because the exception raised for a missing table can also match the criteria for `isSyntaxErrorBestEffort`.
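   The sketch below illustrates the intended classification order; the `Dialect` trait, `TableNotFoundException`, and `loadSchema` helper are simplified stand-ins for Spark's `JdbcDialect`, `NoSuchTableException`, and the real `loadTable` path, not the actual implementation.

```scala
import java.sql.SQLException

// Simplified stand-in for the dialect hooks named in this PR; the real methods
// live on Spark's JdbcDialect and their signatures may differ.
trait Dialect {
  def isObjectNotFoundException(e: SQLException): Boolean
  def isSyntaxErrorBestEffort(e: SQLException): Boolean
}

// Placeholder for Spark's NoSuchTableException.
final case class TableNotFoundException(table: String)
  extends RuntimeException(s"Table not found: $table")

object LoadTableSketch {
  // Resolve the schema with a single call and classify any SQLException,
  // instead of issuing a separate tableExists check first.
  def loadSchema(
      dialect: Dialect,
      table: String,
      getQueryOutputSchema: () => Seq[String]): Seq[String] = {
    try {
      getQueryOutputSchema()
    } catch {
      // Checked first: a missing table can also satisfy isSyntaxErrorBestEffort,
      // so the more specific "object not found" classification must win.
      case e: SQLException if dialect.isObjectNotFoundException(e) =>
        throw TableNotFoundException(table)
      case e: SQLException if dialect.isSyntaxErrorBestEffort(e) =>
        throw new IllegalArgumentException(s"Syntax error while resolving $table", e)
    }
  }
}
```

   With this ordering, a missing table surfaces as a table-not-found error rather than being misreported as a syntax error, while any other `SQLException` still propagates unchanged.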
   
   ### Why are the changes needed?
   This change removes the redundant `tableExists` call, since table existence can be determined from the error thrown by `getQueryOutputSchema`.
   
   With this change, `loadTable` makes only one JDBC API call instead of two, improving efficiency by eliminating the separate table existence check.
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   ### How was this patch tested?
   It was tested using existing and new tests in `JDBCTableCatalogSuite`.
   
   ### Was this patch authored or co-authored using generative AI tooling?
   No.

