rdblue commented on pull request #1783:
URL: https://github.com/apache/iceberg/pull/1783#issuecomment-732396674


   > How important or even valid is it to load non-path based tables through 
our source in Spark 3?
   
   The purpose is to maintain compatibility with 2.4. If you have a working job 
that used IcebergSource to load a table from the Hive catalog, then that should 
continue to work.
   
   > It is a bit unclear to me how we are going to produce the catalog name 
from SupportsCatalogOptions.
   
   The same way that we look up catalogs in Spark: if the first part of the identifier names a catalog, return it. Otherwise, use a default.
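   That lookup can be sketched roughly as follows. This is an illustrative sketch only, not Iceberg's actual implementation; the class and method names here are hypothetical.

   ```java
   import java.util.Set;

   // Hypothetical sketch: resolve a catalog name from a multi-part
   // identifier the way Spark does. The set of defined catalogs and the
   // default name would come from Spark's SQL conf in practice.
   public class CatalogNameResolver {
       private final Set<String> definedCatalogs;
       private final String defaultCatalog;

       public CatalogNameResolver(Set<String> definedCatalogs, String defaultCatalog) {
           this.definedCatalogs = definedCatalogs;
           this.defaultCatalog = defaultCatalog;
       }

       // If the first part of the identifier names a configured catalog,
       // use it; otherwise fall back to the default catalog.
       public String resolve(String multiPartIdentifier) {
           String first = multiPartIdentifier.split("\\.", 2)[0];
           return definedCatalogs.contains(first) ? first : defaultCatalog;
       }
   }
   ```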
   
   The problem with this is how we want to maintain compatibility with Spark 
2.4. In 2.4, the behavior is that the default catalog is the Hive session 
catalog, and that can't be changed. In Spark 3, I think there is a good 
argument that, since the session catalog is the default unless a different one 
is configured (spark.sql.defaultCatalog), we can respect the configured default 
catalog.
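   For reference, here is roughly how a default catalog is configured in Spark 3; the catalog name `my_catalog` and its settings are illustrative, not part of this PR.

   ```shell
   # Define a catalog named my_catalog and make it the default,
   # so unqualified identifiers resolve against it.
   spark-shell \
     --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
     --conf spark.sql.catalog.my_catalog.type=hive \
     --conf spark.sql.defaultCatalog=my_catalog
   ```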


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


