cloud-fan commented on issue #25747: [SPARK-29039][SQL] centralize the catalog and table lookup logic
URL: https://github.com/apache/spark/pull/25747#issuecomment-535862464

After playing with the code a bit more, I think we should write the code in a way that is future-proof. In the future, Spark should:

1. fully migrate the session catalog to the v2 catalog API, so we no longer need to distinguish between v2 catalogs and the session catalog.
2. keep supporting DS v1, so we still need different commands to deal with v1 and v2 tables.

At that point, the DDL/DML command resolution process should be:

1. parse the SQL into an `XYZStatement`
2. resolve the catalog from the multipart identifier, and convert the statement to an `XYZCommand` if no table lookup is needed (like CREATE TABLE)
3. look up the table in the resolved catalog, and convert the statement to a v1 or v2 command depending on the table type, via 2 different rules (one in catalyst and one in sql/core)

For now, the session catalog needs special handling, so we need an extra rule that hijacks step 2 and converts to v1 commands if the resolved catalog is the session catalog.

I'll push a new commit to follow this principle.
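To make the layered resolution above concrete, here is a minimal, self-contained Scala sketch of the idea. All type and object names (`Resolver`, `CreateTableStatement`, `DescribeTableStatement`, etc.) are hypothetical simplifications for illustration only, not the actual catalyst/sql-core classes touched by this PR.

```scala
// Step 1: the parser produces catalog-agnostic "statement" nodes.
sealed trait Statement
case class CreateTableStatement(nameParts: Seq[String], schema: String) extends Statement
case class DescribeTableStatement(nameParts: Seq[String]) extends Statement

// Catalogs and tables, heavily simplified for the sketch.
sealed trait Catalog { def name: String }
case object SessionCatalog extends Catalog { val name = "spark_catalog" }
case class V2Catalog(name: String) extends Catalog

sealed trait Table
case class V1Table(name: String) extends Table
case class V2Table(name: String) extends Table

// Resolved commands (hypothetical stand-ins for the real command nodes).
sealed trait Command
case class CreateV1TableCommand(ident: Seq[String], schema: String) extends Command
case class CreateV2TableCommand(catalog: Catalog, ident: Seq[String], schema: String) extends Command
case class DescribeV1TableCommand(table: V1Table) extends Command
case class DescribeV2TableCommand(catalog: Catalog, table: V2Table) extends Command

object Resolver {
  // Step 2: resolve the catalog from the multipart identifier. If the head part
  // names a registered v2 catalog, use it; otherwise fall back to the session catalog.
  def resolveCatalog(nameParts: Seq[String], catalogs: Map[String, Catalog]): (Catalog, Seq[String]) =
    nameParts match {
      case head +: rest if catalogs.contains(head) => (catalogs(head), rest)
      case _ => (SessionCatalog, nameParts)
    }

  // Step 2 (continued): statements that need no table lookup (e.g. CREATE TABLE)
  // become commands as soon as the catalog is known. The session catalog is
  // special-cased for now and keeps producing v1 commands.
  def resolveNoLookup(stmt: Statement, catalogs: Map[String, Catalog]): Option[Command] = stmt match {
    case CreateTableStatement(nameParts, schema) =>
      resolveCatalog(nameParts, catalogs) match {
        case (SessionCatalog, ident) => Some(CreateV1TableCommand(ident, schema))
        case (catalog, ident)        => Some(CreateV2TableCommand(catalog, ident, schema))
      }
    case _ => None
  }

  // Step 3: statements that do need a table lookup are converted to a v1 or v2
  // command based on the looked-up table's type. In the real design this would be
  // two separate rules (v2 in catalyst, v1 in sql/core); they are merged here
  // only to keep the sketch short.
  def resolveWithLookup(stmt: Statement, lookup: Seq[String] => Table,
                        catalogs: Map[String, Catalog]): Option[Command] = stmt match {
    case DescribeTableStatement(nameParts) =>
      val (catalog, ident) = resolveCatalog(nameParts, catalogs)
      lookup(ident) match {
        case t: V2Table => Some(DescribeV2TableCommand(catalog, t))
        case t: V1Table => Some(DescribeV1TableCommand(t))
      }
    case _ => None
  }
}

// Usage: "testcat" is a registered v2 catalog; anything else falls back to the session catalog.
object Demo extends App {
  val catalogs = Map("testcat" -> V2Catalog("testcat"))
  // resolves against the v2 catalog -> CreateV2TableCommand
  println(Resolver.resolveNoLookup(CreateTableStatement(Seq("testcat", "db", "t"), "i INT"), catalogs))
  // falls back to the session catalog -> CreateV1TableCommand (the "hijack" rule)
  println(Resolver.resolveNoLookup(CreateTableStatement(Seq("db", "t"), "i INT"), catalogs))
}
```

Splitting the table-lookup path into two rules mirrors the module boundary described above: the v2 path can live in catalyst, while the v1 path, which depends on the existing session-catalog commands, stays in sql/core.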
