rdblue commented on pull request #31541:
URL: https://github.com/apache/spark/pull/31541#issuecomment-781682563


   > why does it have to be the session catalog? Can your case be solved
with a custom catalog instead?
   
   The purpose is to be able to keep using the built-in catalog for v1
tables while also using it for v2 tables. The built-in catalog is the only
one that can load v1 tables; every other catalog must go through the v2 API
for everything. So if your catalog contains tables that still use v1, you
have to use the built-in catalog for backward compatibility. But projects
like Delta and Iceberg store their tables in the same metastore alongside
v1 tables and need the v2 API, which is why we allow extending this catalog
and delegating to the built-in one.
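   
   As a rough illustration (not from this PR), such a catalog can extend
`DelegatingCatalogExtension` and fall back to the built-in catalog for
anything it doesn't own. In the sketch below, the class name and the
`isMyFormat`/`toV2Table` helpers are hypothetical:
   
   ```scala
   import org.apache.spark.sql.connector.catalog.{DelegatingCatalogExtension, Identifier, Table}
   
   // Hypothetical extension: serves its own v2 tables and delegates
   // everything else (v1 tables) to the built-in session catalog.
   class MyCatalog extends DelegatingCatalogExtension {
     override def loadTable(ident: Identifier): Table = {
       // Load through the built-in catalog first; it is the only one
       // that can read v1 table metadata from the metastore.
       val table = super.loadTable(ident)
       if (isMyFormat(table)) toV2Table(table) else table
     }
   
     // Hypothetical helpers: detect tables in this catalog's format and
     // wrap them in a v2 Table implementation.
     private def isMyFormat(table: Table): Boolean =
       table.properties().getOrDefault("provider", "") == "myformat"
   
     private def toV2Table(table: Table): Table = table // placeholder
   }
   ```
   
   An extension like this is registered in place of the built-in catalog
with `spark.sql.catalog.spark_catalog=<implementation class>`.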
   
   This use case wasn't called out explicitly because we didn't really
think about a `spark_catalog` extension supporting other namespaces. But
there's nothing wrong with doing that in principle, and I think it is a
better separation of concerns to enforce catalog restrictions in the
catalog itself rather than in analyzer rules.
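   
   To make that concrete, here is a hedged sketch of what enforcing a
namespace restriction inside the catalog itself, rather than in an
analyzer rule, could look like (the single-level-namespace policy is just
an example):
   
   ```scala
   import org.apache.spark.sql.connector.catalog.{DelegatingCatalogExtension, Identifier, Table}
   
   // Hypothetical extension that rejects identifiers it cannot handle at
   // the catalog boundary, so no analyzer rule needs to special-case it.
   class RestrictedSessionCatalog extends DelegatingCatalogExtension {
     override def loadTable(ident: Identifier): Table = {
       require(ident.namespace().length <= 1,
         "Only single-level namespaces are supported, got: " +
           ident.namespace().mkString("."))
       super.loadTable(ident)
     }
   }
   ```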

